
Fault-Tolerant Quantum Computation in Multi-Qubit Block Codes Todd A. Brun University of Southern California QEC 2014, Zurich, Switzerland With Ching-Yi Lai, Yi-Cong Zheng, Kung-Chuan Hsu

Why large block codes? Many schemes for fault-tolerant quantum error correction have been proposed. All require relatively low rates of error, though the ability to tolerate errors has slowly been improved by a long string of theoretical developments. Most also require a large amount of overhead, in some cases a very large amount: a logical qubit is encoded in hundreds or thousands of physical qubits, if not more. It was long ago observed (by Steane and others) that codes encoding multiple qubits can achieve significantly higher rates for the same level of protection from errors. But performing logical gates in such block codes is difficult. I will briefly sketch a scheme for computation in which multi-qubit block codes are used for storage (storage blocks). Clifford gates and logical teleportation are done by measuring logical operators, and a different code (the processor block) with a non-Clifford transversal gate is used for universality. Syndromes and logical operators are measured by Steane extraction, so this scheme uses only ancilla preparation, transversal circuits, and single-qubit measurements. Steane, Nature 399, (1999).

Outline of the scheme
1. When not being processed, logical qubits are stored in [[n,k,d]] block codes. These codes are corrected by repeatedly measuring the stabilizer generators via Steane extraction.
2. Logical Clifford gates can be done by measuring sequences of logical operators. This is also done by Steane extraction with a modified ancilla state.
3. A non-Clifford gate is done by teleporting the selected logical qubits into the processor block, and teleporting back after the gate. Teleportation is also used between storage blocks.
4. The processor blocks use codes that allow transversal circuits for the encoded gates. E.g., for the T gate we could use the concatenated [[15,1,3]] shortened Reed-Muller code.
5. Logical teleportation is also done by measuring a sequence of logical operators.

Outline of the scheme

Correcting Errors The storage blocks should use a code that stores multiple qubits with high distance, and a rate that is better than that achieved by encoding the qubits separately (as in a concatenated scheme or topological code). The logical error rate must be sufficiently low that we expect no logical errors in any block for the entire duration of the quantum computation. The processor block must also have a very low rate of logical errors, but the demands are not quite as stringent as for the storage blocks. The rate must be low enough that the probability of an uncorrectable error during any of the non-Clifford gates is low. How large a computation can be done in this scheme? Clearly, for a fixed size of code block, one cannot do computations of unlimited size. However, it may be possible to protect information for sufficiently long to do a nontrivially large quantum computation. If the error rate is low enough, it may be possible to carry out any computation we could conceivably do in practice.

Let p_S be the logical error rate of the storage block, and p_P the logical error rate of the processor block, per "round" of computation. One storage block encodes k logical qubits. Suppose we wish to do a computation involving N logical qubits, with circuit depth D (measured in rounds of computation), and using M non-Clifford gates. Then, roughly speaking, we expect to be able to complete this computation if

(N/k) D p_S << 1   and   M p_P << 1.

A "round" of computation is essentially one iteration of Steane syndrome extraction. We call 1/p_S and 1/p_P the code lifetimes.
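As a rough illustration of this bookkeeping, here is a small Python sketch. All the numbers below (N, D, M, k, p_S, p_P) are purely hypothetical, chosen only to show how the two conditions are evaluated; they are not values from the talk.

```python
import math

# Hypothetical numbers, for illustration only.
k   = 57        # logical qubits per storage block
N   = 1000      # logical qubits in the computation
D   = 10**8     # circuit depth, in rounds of Steane extraction
M   = 10**9     # number of non-Clifford (T) gates
p_S = 1e-15     # storage-block logical error rate per round
p_P = 1e-13     # processor-block logical error rate per round (~ per non-Clifford gate)

expected_storage_failures   = math.ceil(N / k) * D * p_S
expected_processor_failures = M * p_P

# The computation is expected to succeed if both numbers are << 1.
print(expected_storage_failures, expected_processor_failures)   # 1.8e-06, 1e-04
```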

Steane Syndrome Extraction Steane syndrome extraction can be done on any CSS code. All the X-type stabilizer generators are measured in one shot, and so are all the Z-type stabilizer generators. This requires two ancilla states, each of which is encoded in the same code as the data block. The first is prepared with all the logical qubits in the |+> state, the second with all of them in the |0> state. After the circuit of transversal CNOTs, all the ancilla qubits are measured in the Z or X basis, respectively. The syndrome bits are parities of these measurement outcomes.
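To make the last step concrete, here is a minimal numpy sketch of how syndrome bits are obtained as parities of the transversal measurement outcomes. The [[7,1,3]] Steane code's classical checks are used as a stand-in for whichever CSS code is actually chosen; the measurement pattern is made up.

```python
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code, which supplies the classical
# checks of the [[7,1,3]] Steane code.  For a CSS code, each syndrome bit is a
# parity of the single-qubit ancilla measurement outcomes, given by one row.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Measurement outcomes of the 7 ancilla qubits after the transversal CNOTs
# (a made-up example corresponding to a single error on qubit 5).
m = np.array([0, 0, 0, 0, 1, 0, 0])

syndrome = H.dot(m) % 2   # each syndrome bit is a parity of the outcomes
print(syndrome)           # -> [1 0 1]: binary index 5, first bit least significant
```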

Measuring logical operators A simple variant of the Steane extraction procedure will also let us measure logical operators. Which logical operators are measured is determined by the logical state of the ancillas. Suppose we want to measure the logical operator Z_j for the jth logical qubit. Then we prepare the jth logical qubit of the Z ancilla state in the state |0> instead of |+>. We determine the value of Z_j by taking a parity of qubit measurements, after first carrying out a classical error correction step. Similarly, suppose we want to measure the logical operator X_j for the jth logical qubit. Then we prepare the jth logical qubit of the X ancilla state in the state |+> instead of |0>. Two very nice features of this way of measuring logical operators:
1. Because the ancillas are themselves error-correcting codes, we can correct some errors in the measurement procedure. That makes this protocol robust against noise.
2. In the course of measuring the logical operators, we also extract all the error syndromes. So this can just be substituted for the usual round of syndrome extraction.
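A sketch of that classical step: decode the measured bit string first, then take the parity over the logical operator's support. The Steane-code checks and the single-error decoder below are illustrative stand-ins, not the codes or decoders used in the talk.

```python
import numpy as np

# Parity checks of the [7,4,3] Hamming code (the CSS checks of the Steane code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode_single_error(syndrome):
    """Toy decoder: the Hamming syndrome is the binary index of the flipped bit."""
    e = np.zeros(7, dtype=int)
    idx = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]
    if idx:
        e[idx - 1] = 1
    return e

def logical_value(outcomes, H, logical_support, decode):
    """Eigenvalue of a logical Z (or X) operator from transversal measurement outcomes."""
    corrected = (outcomes + decode(H.dot(outcomes) % 2)) % 2   # classical error correction first
    return (-1) ** (corrected[logical_support].sum() % 2)      # then a parity over the support

# Example: Z on all 7 qubits is a representative of logical Z for the Steane code.
# A possible outcome pattern for an encoded |1> is the all-ones codeword; here one
# bit has been flipped by a measurement error and is corrected before the parity.
outcomes = np.array([1, 1, 1, 0, 1, 1, 1])
print(logical_value(outcomes, H, np.arange(7), decode_single_error))   # -> -1
```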

Measuring products of logical operators We are not limited to measuring single X's or Z's. Suppose I wish to measure Z_i Z_j. Prepare logical qubits i and j of the Z ancilla in the entangled state (|00> + |11>)/√2. Then one can extract the product operator Z_i Z_j in an exactly analogous way. One can similarly measure products of logical X operators. What about measuring logical Y operators? Or, more generally, arbitrary products of X's and Z's? This is done by preparing both ancillas in an entangled state. As mentioned above, this procedure has some robustness against noise. But if that is not sufficient, one can repeat the measurement a few times. While measuring logical operators is a useful thing to be able to do, we will use it as a building block for two key operations: Clifford gates and logical teleportation.

Clifford gates If we have a buffer qubit in addition to the qubit(s) we wish to act on, we can do arbitrary Clifford gates by measuring a sequence of logical operators. Suppose we have two logical qubits in an arbitrary state |ψ> (qubits 1 and 2) and a buffer qubit (qubit 3) prepared in |0>. We can carry out a CNOT gate by measuring this sequence of logical operators: X_2 X_3, Z_1 Z_2, and X_1. This leaves the three qubits in the state P (|+>_1 ⊗ CNOT|ψ>_23), with the data now residing on qubits 2 and 3, where P is a (known) Pauli operator acting on the three logical qubits. This can either be corrected, or kept track of, along with the Pauli operators from the syndrome extraction. See, e.g., D. Gottesman, Caltech Ph.D. Thesis, 1997.
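Here is a small self-contained numpy check of this measurement sequence on three bare (unencoded) qubits. The starting convention (buffer in |0> on qubit 3) and the final placement of the data on qubits 2 and 3 are my reading of the construction, so treat this as a sketch rather than the talk's exact circuit.

```python
import numpy as np
from functools import reduce
from itertools import product

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PLUS = np.array([1, 1], dtype=complex) / np.sqrt(2)

def kron(*ops):
    return reduce(np.kron, ops)

def measure(state, pauli_product):
    """Projectively measure a Pauli product; return (outcome, collapsed state)."""
    proj = (np.eye(len(state)) + pauli_product) / 2
    if rng.random() < np.real(state.conj() @ proj @ state):
        outcome = +1
    else:
        proj = np.eye(len(state)) - proj
        outcome = -1
    new = proj @ state
    return outcome, new / np.linalg.norm(new)

# Random two-qubit data state |psi> on qubits 1,2; buffer qubit 3 in |0>.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
state = np.kron(psi, np.array([1, 0], dtype=complex))

# Gate-by-measurement CNOT: measure X2X3, then Z1Z2, then X1.
for M in [kron(I2, X, X), kron(Z, Z, I2), kron(X, I2, I2)]:
    _, state = measure(state, M)   # outcomes determine the Pauli frame P

# Expected result (up to a Pauli): |+> on qubit 1, CNOT|psi> on qubits 2,3.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
target = np.kron(PLUS, CNOT @ psi)

# Check equality up to one of the 64 three-qubit Pauli corrections and a phase.
fidelity = max(abs(target.conj() @ (kron(P1, P2, P3) @ state))
               for P1, P2, P3 in product([I2, X, Y, Z], repeat=3))
print(round(fidelity, 6))   # -> 1.0
```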

We can similarly do a Hadamard gate by measuring ZX and XI (acting on the data qubit and the buffer), and a Phase gate by measuring XY and ZI. One can also do SWAPs, and reset the buffer qubit in the Z or X basis. Notice that the CNOT can be done by measuring pure X and pure Z operators, but the Hadamard and Phase gates require us to measure products of both. One Clifford gate, by this method, takes 2 or 3 "rounds" of the computation (more, if we need to repeat measurements). Some overhead may also be necessary if we must return the buffer qubit to its original state and location.

Performing non-Clifford logical gates The above scheme allows us to perform an arbitrary Clifford operation on the logical qubits within a single storage block. But it does not allow us to transfer logical qubits between blocks. We also need the ability to do at least one non-Clifford gate to get universality. For both of these purposes we use logical teleportation. We can teleport a qubit to another storage block to allow CNOTs between blocks. For a non-Clifford gate, we teleport the qubit into a processor block, which allows a transversal gate outside the Clifford group. For example, the concatenated [[15,1,3]] shortened Reed-Muller code allows a transversal T gate.

Logical teleportation We can teleport logical qubits between code blocks by measuring an appropriate set of logical operators fault-tolerantly. Suppose our [[n,k,d]] code has k pairs of logical operators (X_1, Z_1), (X_2, Z_2), ..., (X_k, Z_k), and we want to teleport into another code block with a pair of logical operators (X_0, Z_0). We will use logical qubit 1 as a buffer to hold half of an entangled pair, and logical qubits 2, ..., k to hold the actual qubits to be stored. Suppose we wish to act on logical qubit 2.
1. Measure logical operators X_0 X_1 and Z_0 Z_1 to prepare a logical Bell state.
2. Measure logical operators X_1 X_2 and Z_1 Z_2 to teleport.
3. Do a transversal circuit on the second codeword to correct (if necessary) and apply the desired logical gate.
4. Measure logical operators X_0 X_1 and Z_0 Z_1 to teleport back.
5. Do a transversal circuit on the first codeword to correct (if necessary).
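Steps 1 and 2 can be checked on bare (unencoded) qubits standing in for the logical qubits. A minimal numpy sketch, assuming the destination and buffer qubits start in |0> (any fixed starting state works, since the first two measurements project them onto a Bell pair):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
zero = np.array([1, 0], dtype=complex)

def kron(*ops):
    return reduce(np.kron, ops)

def measure(state, M):
    """Projective measurement of a Pauli product M; return the collapsed state."""
    proj = (np.eye(len(state)) + M) / 2
    if rng.random() >= np.real(state.conj() @ proj @ state):
        proj = np.eye(len(state)) - proj
    new = proj @ state
    return new / np.linalg.norm(new)

# Qubit 0: destination, qubit 1: buffer, qubit 2: data qubit holding |psi>.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
state = kron(zero, zero, psi)

# Step 1: measure X0X1 and Z0Z1 to prepare a Bell pair between qubits 0 and 1.
state = measure(state, kron(X, X, I2))
state = measure(state, kron(Z, Z, I2))
# Step 2: measure X1X2 and Z1Z2 to teleport the data into qubit 0.
state = measure(state, kron(I2, X, X))
state = measure(state, kron(I2, Z, Z))

# Qubit 0 should now hold |psi> up to a known Pauli correction.
rho0 = state.reshape(2, 4) @ state.reshape(2, 4).conj().T
fidelity = max(np.real(psi.conj() @ P @ rho0 @ P @ psi) for P in (I2, X, Y, Z))
print(round(fidelity, 6))   # -> 1.0
```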

Logical teleportation, as we see, can also be done by measuring logical operators. But now we must measure products of logical operators on different code blocks. We do this in exactly the same way as measuring any other logical operator, but now we must prepare the ancillas of two different code blocks in an entangled state. (We can also use this trick to directly do CNOTs between logical qubits of two different storage blocks.)

Performance of the scheme How well does this scheme perform in practice? To see, we must choose particular codes for the storage and processor blocks, and do a combination of analysis and numerical simulation. As a first step, we are trying to estimate the code lifetimes for different choices of codes. Such an analysis should, in general, include all the different sources of noise in the system. These include:
Gate errors
Memory errors
Measurement errors
Ancilla preparation errors
To simplify the analysis, we first transform all these noise sources into a single effective error process, which we can treat as acting only between rounds of the computation.

Effective Errors The individual qubits in Steane extraction undergo transversal circuits like the one shown in the figure, where we have marked the locations at which errors can occur. Crucially, we assume that these errors are uncorrelated between qubits. The effective error process acts before and after the circuit, which we treat as ideal. The errors, however, are now generally correlated in time. These correlations are potentially useful in diagnosing errors.

Ancilla preparation errors can produce correlated errors.

Gate errors...

Measurement errors...

One-time effective error process If we ignore the time correlations, we can get an effective error process acting at a single time step. We could make use of the correlations to do soft-decision decoding. But if the probability of an uncorrectable error at a single time is sufficiently low, that is already enough to get a lower bound on the code lifetime. From the above, we see that the effective error rate is going to be a multiple (but not necessarily a large multiple) of the largest basic error rate (e.g., the CNOT error rate).
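A toy illustration of that bookkeeping. The per-location error rates and the location counts below are purely hypothetical (not values from the talk); they only show how p_eff ends up a small multiple of the dominant rate.

```python
# Hypothetical per-location error rates, for illustration only.
p_cnot = 5e-4     # two-qubit (CNOT) gate error
p_meas = 5e-4     # single-qubit measurement error
p_mem  = 1e-4     # memory error per qubit per time step
p_prep = 2e-4     # residual error per qubit from ancilla preparation

# Assumed number of times each error type touches one data qubit in one round
# of Steane extraction (two transversal CNOTs, two ancilla measurements, a few
# waiting steps).  These counts are illustrative only.
p_eff = 2 * p_cnot + 2 * p_meas + 3 * p_mem + 2 * p_prep
print(p_eff)      # ~2.7e-3: a few times the dominant (CNOT) error rate
```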

Storage blocks To build storage blocks that strike a compromise between high distance, high rate, and efficient decoding, we looked at concatenations of pairs of codes. Bottom level: the [[23,1,7]] quantum Golay code. Top level: an [[89,23,9]], [[127,57,11]], or [[255,143,15]] quantum BCH code. These combinations would give us storage blocks with parameters [[2047,23,63]], [[2921,57,77]], or [[5865,143,105]], respectively. At least one logical qubit in each block must be set aside to act as a buffer qubit. So these three combinations represent each logical qubit using approximately 93, 52, or 41 physical qubits. Very good rates! The Golay code can be decoded by the Kasami error-trapping algorithm, and the BCH codes by the Berlekamp-Massey algorithm.
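A quick arithmetic check of those parameters: concatenating an [[n2,k2,d2]] top-level code over the [[23,1,7]] Golay code gives a [[23*n2, k2, >=7*d2]] block, and one logical qubit per block is reserved as the buffer.

```python
golay_n, golay_d = 23, 7
bch_codes = [(89, 23, 9), (127, 57, 11), (255, 143, 15)]

for n2, k2, d2 in bch_codes:
    n, k, d = golay_n * n2, k2, golay_d * d2   # distance is at least the product
    qubits_per_logical = n / (k - 1)           # one logical qubit used as buffer
    print(f"[[{n},{k},{d}]]  ~{qubits_per_logical:.0f} physical qubits per logical qubit")
# -> [[2047,23,63]] ~93, [[2921,57,77]] ~52, [[5865,143,105]] ~41
```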

Estimated Performance [[2047,23,63]] (blue), [[2921,57,77]] (red), and [[5865,143,105]] (green). These curves are generated by Monte Carlo simulations and linear extrapolation. However, the extrapolation at low error rates can be backed up by an upper bound calculated purely from the distances of the codes. For p_eff = 0.007, we get bounds on the logical error rates of approximately ..., 2.5x10^-19, and 7x10^-..., respectively.

Processor blocks For the processor block, we looked at two and three levels of concatenation of the [[15,1,3]] code. With hard-decision decoding the performance of this code is very poor at high error rates, largely because it is very limited in its ability to correct phase errors. Using Poulin's soft-decision decoding algorithm, the performance improved markedly, even at relatively high effective error rates. The upper bound used for the storage blocks is of no use for soft-decision decoding, and extrapolating from Monte Carlo is unlikely to be reliable in this case. But we were able to find rough bounds at moderately low error rates. D. Poulin, Phys. Rev. A 74, (2006).

The [[15,1,3]] code at two (blue) and three (red) levels of concatenation. These estimates combine Monte Carlo simulations at higher error rates, a rough bound at lower error rates, and (not to be relied upon) extrapolation. At p_eff = ... (all contributing error processes below 5x10^-4), the block error rate is estimated to be roughly 2x10^-..., which is certainly small enough to carry out highly nontrivial quantum computations.

Ancilla preparation We have only begun to explore the problem of ancilla preparation for this approach. The first idea we are studying is a type of ancilla distillation: prepare M imperfect copies of the ancilla, then perform a transversal error correction step that yields a smaller number, FM, of high-quality ancillas. A very important requirement is that error correlations within each ancilla should be very small. (Correlations across ancillas are not a problem.) How does this affect the resource requirements for this scheme? If the code is [[n,k,d]], and the distillation takes T_d rounds with yield fraction F, then the physical qubits needed per logical qubit are increased by a factor that grows with T_d and with 1/F. If our available gates are local, as they almost certainly will be, then ancilla distribution will also increase the demand on resources. That is yet a further question to explore.

Threshold theorems? Does a scheme like this have a threshold? The short answer: I don't know. It certainly does not have a threshold with a fixed size of storage blocks. To prove a threshold theorem, we would have to find a family of sets of code blocks for which we could prove that the lifetimes scale up with problem size at sufficiently low error probabilities. This would be a very desirable result, but not, for practical purposes, a necessary one. A single storage block code could have a lifetime sufficiently long to do any conceivable quantum computation in practice. We are using fault-tolerant methods to avoid propagating errors, etc., without doing computations of arbitrary size.

Open questions and future work The number of open questions is large, including: does this really work? Here are some big ones:
Can we find better codes for block storage? Perhaps quantum LDPC codes? Can we do the decoding efficiently in real time during a computation? It is encouraging that, without trying very hard to find good codes, we got decent performance.
What are the resource requirements for the construction, verification, and distribution of the ancilla states? While these are all stabilizer states, and can in principle be prepared fault-tolerantly, the resources needed for them will probably exceed those for the codewords by a factor of "a lot." Currently we are investigating (hopefully) efficient distillation algorithms for these ancillas.
If we include locality and communication costs, how much does this performance degrade? These costs are mainly in preparing and distributing the ancillas.

Are there better (or additional) choices of codes for the processor blocks? We have looked a little at 3D color codes, but haven't (so far) found an example that outperforms the concatenated [[15,1,3]] code.
Can we find families of codes for which a threshold theorem can be proven for this scheme?
Thank you for your attention!