Memory Hierarchies for Quantum Data Dean Copsey, Mark Oskin, Frederic T. Chong, Isaac Chuang and Khaled Abdel-Ghaffar Presented by Greg Gerou
Introduction Environmental noise is a big problem: qubits are easily influenced by factors both inside and outside the computer. Threshold theorem: as long as the probability p of error for each operation on a quantum computer is less than some constant (estimated to be as high as 10^-4), scalable quantum computers can be built from faulty components.
Introduction (cont’d) Error correction codes have been developed to establish different levels of reliability, but there are overhead trade-offs. The goal of this paper is to reduce the overhead of error correction for the memory system.
Controlled Entanglement This figure demonstrates the entanglement of two qubits. The values of the two qubits are linked, ensuring that a measurement yields either 11 or 00 (the probability amplitudes for 01 and 10 are zero). The interaction between the two qubits determines their probability amplitudes. Similarly, the outside environment has a significant impact on the probability amplitudes of our qubits.
Uncontrolled Entanglement Electrons emit and absorb photons, changing their orbitals. Magnetic spin states of nuclei can be flipped by external magnetic fields. Due to entanglement with the environment, it’s impossible to isolate a system to the point where it is completely stable. This introduction of error due to uncontrolled entanglement is termed decoherence.
Quantum Error Correction A logical qubit can be encoded using a number of physical qubits. Encoding size constraints are driven by the two types of error correction: Amplitude correction Phase correction Three-qubit error correction (a building block of the Shor code):
Quantum Error Correction Given that we must correct for errors in both phase and amplitude: using the three-qubit code, three physical qubits suffice to correct either phase or amplitude errors, but not both. Once we perform an error correction, our source qubits are put into a different state. Shor’s code therefore encodes one logical qubit into three qubits for phase correction, and each of those three qubits into three more for amplitude correction.
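The nesting described above can be sketched as simple arithmetic (a minimal illustration of the slide’s counting, not code from the paper):

```python
# Shor's code nests a 3-qubit amplitude-correction layer inside a
# 3-qubit phase-correction layer, so one logical qubit costs 3 * 3 = 9
# physical qubits.
PHASE_BLOCK = 3      # qubits in the phase-correction layer
AMPLITUDE_BLOCK = 3  # qubits encoding each phase-layer qubit

physical_per_logical = PHASE_BLOCK * AMPLITUDE_BLOCK
print(physical_per_logical)  # 9, matching the [[9,1,3]] Shor code
```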
Quantum Error Correction [Figure: the encoding tree — one logical qubit (corrected for both phase and amplitude) at the root, three phase-corrected qubits beneath it, and the uncorrected physical qubit vector at the leaves.]
Quantum Error Correction Shor’s code is termed a [[9,1,3]] code: Nine physical qubits One logical qubit Three is the Hamming distance: a code with a Hamming distance of d is able to correct ⌊(d-1)/2⌋ errors. In this case, one error can be corrected.
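The distance formula above can be checked with a one-line helper (a sketch for illustration; the function name is my own):

```python
def correctable_errors(d: int) -> int:
    """Number of arbitrary single-qubit errors a distance-d code corrects:
    floor((d - 1) / 2)."""
    return (d - 1) // 2

# [[9,1,3]] Shor code: distance 3 corrects one error.
print(correctable_errors(3))   # 1
# The distance-15 codes that appear later correct seven errors.
print(correctable_errors(15))  # 7
```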
Other Encodings Stabilizer codes: [[5,1,3]] (densest known way to encode a single qubit) and [[8,3,3]] (densest known three-qubit code). Steane’s code: [[7,1,3]]. This code is nice: operators can be applied to the logical qubits by applying simple operators to the physical qubits. For example, to perform a NOT on a logical qubit, it is only necessary to perform a NOT on each of the physical qubits.
Error Calculations As long as the probability, p, of an error is below a certain threshold (about 10^-4 in the case of Steane’s code), any number of operations can be performed with probability of error cp^2.
Concatenation If a single logical qubit is encoded by seven physical qubits (Steane’s code), what happens to the error if we encode each of those seven? It drops to c(cp^2)^2 << cp^2.
Concatenation Example [[7,1,3]] concatenated once: This logical qubit… … is encoded by these seven qubits… … each of which is encoded by its own seven physical qubits.
Concatenation The circuit size and time complexity grow exponentially! Say we concatenate k times: Time: t^k Circuit size: d^k However, the error is also reduced dramatically, to (cp)^(2^k)/c.
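The trade-off can be sketched numerically (an illustration only; the sample values of p, c, t, and d below are hypothetical, not figures from the paper):

```python
def logical_error(p: float, c: float, k: int) -> float:
    """Logical error rate after k levels of concatenation.

    Level 0 is the bare physical error rate p; each encoding level
    maps an error rate e to c * e**2.
    """
    err = p
    for _ in range(k):
        err = c * err * err
    return err

def resources(k: int, t: int, d: int) -> tuple:
    """Time and circuit-size blowup after k levels: t**k and d**k."""
    return t ** k, d ** k

# With hypothetical c = 1e4 (threshold 1/c = 1e-4) and p = 1e-5:
#   k = 1 -> c * p**2          = 1e-6
#   k = 2 -> c * (c * p**2)**2 = 1e-8, far below cp^2
print(logical_error(1e-5, 1e4, 1))
print(logical_error(1e-5, 1e4, 2))
```

The error shrinks doubly exponentially in k while the resources grow only singly exponentially, which is why a small number of recursion levels suffices in practice.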
Concatenation Overheads for different recursion levels of [[7,1,3]]:
Teleportation Definition: “The re-creation of a quantum state at a destination using some classical bits that must be communicated along conventional wires or other mediums.” Teleportation is key in converting between different types of encodings, and in transferring memory.
Memory Hierarchies Idea: use different encodings at different levels of memory.
Large encodings are good for “CPU” memory:
Advantages: better error correction
Disadvantages: take a lot of space (many physical qubits)
Smaller encodings are good for storage:
Advantages: much more dense (fewer physical qubits)
Disadvantages: worse error correction
Memory Hierarchy: Encoding levels Overhead per logical qubit:
Encoding       Physical qubits per logical qubit
[[343,1,15]]   343
[[245,1,15]]   245
[[392,3,15]]   131 (392 physical qubits shared across three logical qubits)
Note also that teleportation is relatively slow. This implies that there is a time penalty when data is moved from one level of memory to another.
Memory Hierarchy We can take advantage of temporal and spatial locality. For instance, take the following nine-qubit Quantum Fourier Transform (QFT): Cost = 9 logical qubits * 343 physical qubits per logical qubit = 3,087 physical qubits
Memory Hierarchy Now let’s reorder the operations and use a cache: Cost = (6 logical * 343 physical) + (3 logical * 131 physical) = 2,451 physical
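The cost arithmetic on the two preceding slides can be sketched directly (constants taken from the encoding table earlier in the talk):

```python
# Physical qubits per logical qubit at each level of the hierarchy.
CPU_COST = 343    # [[343,1,15]] code, heavy error correction
CACHE_COST = 131  # [[392,3,15]] code, denser but weaker correction

no_cache = 9 * CPU_COST                     # all nine QFT qubits in the CPU
with_cache = 6 * CPU_COST + 3 * CACHE_COST  # three qubits parked in cache

print(no_cache)    # 3087
print(with_cache)  # 2451
```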
Memory Hierarchy 2,451 physical qubits may not seem like a huge advantage over 3,087, but another way to look at it is that the processor itself now holds one-third fewer physical qubits (2,058 instead of 3,087). Take into account also that the data in the cache will not be operated on nearly as much as the data in the CPU, implying much less decoherence (and so smaller error-correction requirements).
Future Work There also exist non-concatenated codes that offer improved density and possibly improved performance. Which codes to use at each level of the memory hierarchy depends on the physical properties of the quantum system: How much error is introduced by the environment? How fast can it operate?