
1
The Learnability of Quantum States
Scott Aaronson, University of Waterloo

2
Quantum State Tomography

Suppose we have a physical process that produces a quantum mixed state ρ. By applying the process repeatedly, we can prepare as many copies of ρ as we want. To each copy, we then apply a binary measurement E, obtaining outcome 1 with probability Tr(Eρ) and outcome 0 otherwise. Our goal is to learn an approximate description of ρ.

EXPERIMENTALISTS ACTUALLY DO THIS: to learn about chemical reactions (Skovsen et al. 2003), test equipment (D'Ariano et al. 2002), study decoherence mechanisms (Resch et al. 2005), …
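The measurement model above can be sketched numerically: applying E to a fresh copy of ρ is a Bernoulli trial with success probability Tr(Eρ), so repeating the experiment estimates that trace. A minimal NumPy illustration; the particular state and measurement here are arbitrary examples of ours, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single-qubit mixed state rho (hypothetical example):
# rho = 0.75|0><0| + 0.25|1><1|
rho = np.array([[0.75, 0.0],
                [0.0, 0.25]])

# A two-outcome measurement E with 0 <= E <= I; here the projector |+><+|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
E = np.outer(plus, plus)

p = np.trace(E @ rho).real   # Pr[outcome 1] = Tr(E rho); equals 0.5 here

# Each copy of rho measured with E yields a Bernoulli(p) bit, so the
# empirical frequency of 1-outcomes converges to Tr(E rho)
samples = rng.random(100_000) < p
estimate = samples.mean()
```

This is the "repeat and count" procedure tomography builds on; full tomography estimates Tr(Eρ) for enough different E to pin down every entry of ρ.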

3
But there's a problem… To do tomography on an entangled state of n qubits, we need ~4^n measurements. The current record: 8 qubits (Häffner et al. 2005), requiring 656,100 experiments (!) Does this mean that a generic 10,000-particle state can never be learned within the lifetime of the universe? If so, it would call into question the operational status of quantum states themselves (and make quantum computing skeptics extremely happy)…

Fear not! Why would he be raising this problem if he wasn't gonna demolish it?

4
The Quantum Occam's Razor Theorem

Let ρ be an n-qubit mixed state, and let D be a distribution over two-outcome measurements. Suppose we draw m measurements E_1,…,E_m independently from D, and then output a hypothesis state σ such that |Tr(E_iσ) − Tr(E_iρ)| ≤ η for all i. Then provided η ≤ γε/10 and

m ≥ (K/γ²ε²) · (n/(γ²ε²) · log²(1/γε) + log(1/δ))

for a suitable constant K, we'll have

Pr_{E~D}[ |Tr(Eσ) − Tr(Eρ)| > γ ] ≤ ε

with probability at least 1−δ over E_1,…,E_m.
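The theorem's setup can be played out on a single qubit: draw training measurements, find any state that fits them, and check that it generalizes to fresh measurements from the same distribution. A toy sketch of ours; brute-force grid search over the Bloch ball stands in for the hypothesis-finding step (the theorem says nothing about how to find σ efficiently):

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
PAULIS = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], complex)]

def bloch_state(r):
    # Density matrix (I + r . sigma_vec)/2 for a Bloch vector r, |r| <= 1
    return 0.5 * (I2 + sum(c * P for c, P in zip(r, PAULIS)))

def rand_proj():
    # Projector onto a random pure qubit state: a two-outcome measurement E
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho = bloch_state([0.3, -0.5, 0.6])            # unknown "true" state
train = [rand_proj() for _ in range(25)]        # measurements drawn from D
targets = [np.trace(E @ rho).real for E in train]

# Hypothesis search: grid over Bloch vectors, keep the sigma that best
# matches the training statistics
best, best_err = None, np.inf
g = np.linspace(-1, 1, 21)
for rx in g:
    for ry in g:
        for rz in g:
            if rx * rx + ry * ry + rz * rz > 1:
                continue
            sigma = bloch_state([rx, ry, rz])
            err = max(abs(np.trace(E @ sigma).real - t)
                      for E, t in zip(train, targets))
            if err < best_err:
                best, best_err = sigma, err

# Generalization check on fresh measurements from the same distribution
test_errs = [abs(np.trace(E @ best).real - np.trace(E @ rho).real)
             for E in (rand_proj() for _ in range(200))]
```

With only a couple dozen training measurements the fitted σ already predicts Tr(Eρ) well for most fresh E, which is exactly the theorem's promise (here for n = 1).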

5
Remarks

Implies that we can do pretty good tomography using a number of measurements that grows only linearly (!) with the number of qubits n

Can make the dependence on γ and ε more reasonable, at the cost of a log² n factor

The above bound is nearly tight

Result says nothing about the computational complexity of preparing a hypothesis state that agrees with the measurement results

6
Fat-Shattering Dimension

To prove the theorem, we need a notion introduced by Kearns and Schapire called the fat-shattering dimension.

Let C be a class of functions from S to [0,1]. We say a set {x_1,…,x_k} ⊆ S is γ-shattered by C if there exist reals a_1,…,a_k such that, for all 2^k possible statements of the form

f(x_1) ≤ a_1 − γ, f(x_2) ≥ a_2 + γ, …, f(x_k) ≤ a_k − γ,

there's some f ∈ C that satisfies the statement. Then fat_C(γ), the γ-fat-shattering dimension of C, is the size of the largest set γ-shattered by C.
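For finite classes on finite domains, γ-shattering can be checked by brute force straight from the definition: try thresholds a_i and demand that every sign pattern be realized with margin γ. A small checker of ours (the function names and toy class are illustrative, not from the talk):

```python
from itertools import product

def fat_shattered(C, xs, gamma, levels):
    # Brute-force check: do thresholds a_i (drawn from `levels`) witness
    # that the class C gamma-shatters the points xs?  For every +/-
    # pattern, some f in C must clear each threshold by margin gamma.
    k = len(xs)
    for a in product(levels, repeat=k):
        if all(any(all(f(x) >= ai + gamma if s else f(x) <= ai - gamma
                       for x, ai, s in zip(xs, a, pattern))
                   for f in C)
               for pattern in product([0, 1], repeat=k)):
            return True
    return False

# Toy class on domain {0, 1}: all four functions taking values in {0.1, 0.9}
C = [lambda x, t=t: t[x] for t in product([0.1, 0.9], repeat=2)]

shattered_03 = fat_shattered(C, [0, 1], 0.3, levels=[0.5])  # margins fit
shattered_05 = fat_shattered(C, [0, 1], 0.5, levels=[0.5])  # margins too wide
```

With thresholds at 0.5, the pair {0, 1} is 0.3-shattered (0.9 ≥ 0.8 and 0.1 ≤ 0.2) but not 0.5-shattered, showing how the dimension shrinks as the required margin γ grows.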

7
Small Fat-Shattering Dimension Implies Small Sample Complexity

Let C be a class of functions from S to [0,1], and let f ∈ C. Suppose we draw m elements x_1,…,x_m independently from some distribution D, and then output a hypothesis h ∈ C such that |h(x_i) − f(x_i)| ≤ η for all i. Then provided η ≤ γε/7 and

m ≥ (K/γ²ε²) · (fat_C(γε/7) · log²(1/γε) + log(1/δ)),

we'll have

Pr_{x~D}[ |h(x) − f(x)| > γ ] ≤ ε

with probability at least 1−δ over x_1,…,x_m.

Proof uses a 1996 result of Bartlett and Long, building on Alon et al., building on Blumer et al., building on Valiant

8
Upper-Bounding the Fat-Shattering Dimension of Quantum States

Nayak 1999: If we want to encode k classical bits into n qubits, in such a way that any bit can be recovered with probability 1−p, then we need n ≥ (1−H(p))k.

Corollary (turning Nayak's result on its head): Let C_n be the set of functions that map an n-qubit measurement E to Tr(Eρ), for some n-qubit state ρ. Then fat_{C_n}(γ) = O(n/γ²).

The Quantum Occam's Razor Theorem follows easily… No need to thank me!
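Nayak's bound is easy to evaluate concretely, since H is just the binary entropy. A quick calculation, with helper names of our own choosing:

```python
import math

def binary_entropy(p):
    # H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def min_qubits(k, p):
    # Nayak's bound: encoding k classical bits so that each one is
    # recoverable with probability 1-p requires n >= (1 - H(p)) k qubits
    return (1 - binary_entropy(p)) * k

n_needed = min_qubits(1000, 0.1)  # qubits needed for 1000 bits at 90% recovery
```

So quantum states offer essentially no compression of classical bits: even tolerating 10% recovery error, 1000 bits need over 500 qubits. This is what the corollary exploits: a state that γ-shattered many measurements would be exactly such a forbidden dense encoding.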

9
Simple Application of the Quantum Occam's Razor Theorem to Communication Complexity

[Figure: Alice holds x, Bob holds y; Alice sends one message to Bob, who outputs f(x,y)]

f: Boolean function mapping Alice's N-bit string x and Bob's M-bit string y to a binary output

D¹(f), R¹(f), Q¹(f): deterministic, randomized, and quantum one-way communication complexities of f

How much can quantum communication save? In 2004 I showed that D¹(f) = O(M Q¹(f) log Q¹(f)) for all total f. Is D¹(f) = O(M Q¹(f)) for all total f?

10
Theorem: R¹(f) = O(M Q¹(f)) for all f, partial or total.

Proof: Fix Alice's input x. By Yao's minimax principle, Alice can consider a worst-case distribution D over Bob's input y. Alice's classical message will consist of samples y_1,…,y_T drawn from D, together with f(x,y_1),…,f(x,y_T), where T = O(Q¹(f)). Bob searches for a quantum message ρ that yields the right answers on y_1,…,y_T. By the Quantum Occam's Razor Theorem, with high probability such a ρ yields the right answers on most y drawn from D.
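The protocol's shape, "send labeled samples, let the receiver fit a consistent hypothesis", can be mimicked classically in miniature. In this toy sketch of ours, Bob's hypothesis class is the set of candidate strings x' rather than quantum messages, and f is the inner-product function; none of this specific instantiation is from the talk:

```python
import random
from itertools import product

random.seed(2)
N, M, T = 8, 8, 40   # Alice's bits, Bob's bits, number of training samples

def f(x, y):
    # Toy total function: inner product mod 2 of the two bit-strings
    return sum(a & b for a, b in zip(x, y)) % 2

x = tuple(random.randint(0, 1) for _ in range(N))        # Alice's input
draw_y = lambda: tuple(random.randint(0, 1) for _ in range(M))  # Bob's dist. D

# Alice's one-way classical message: T sampled inputs y_i with labels f(x, y_i)
train = [(y, f(x, y)) for y in (draw_y() for _ in range(T))]

# Bob searches his hypothesis class (all candidate strings x') for one
# consistent with the message -- the analogue of searching for a quantum
# message rho that answers the training inputs correctly
consistent = next(xp for xp in product((0, 1), repeat=N)
                  if all(f(xp, y) == b for y, b in train))

# A consistent hypothesis then agrees with f(x, .) on most fresh y from D
agree = sum(f(consistent, y) == f(x, y) for y in (draw_y() for _ in range(500)))
```

The point of the real theorem is quantitative: the Quantum Occam's Razor Theorem guarantees that T = O(Q¹(f)) samples suffice even when the hypotheses are quantum states, which is what makes the derandomization from Q¹ to R¹ cheap.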

11
What about computational complexity?

BQP/qpoly: class of problems solvable in quantum polynomial time, with help from a poly-size quantum advice state that depends only on the input length n

A. 2004: BQP/qpoly ⊆ PP/poly. Classical advice can always simulate quantum advice, provided we use exponentially more computation.

Can this result be improved to BQP/qpoly ⊆ QMA/poly? (QMA: Quantum Merlin-Arthur)

Theorem: HeurBQP/qpoly ⊆ HeurQMA/poly. Or in English: we can use trusted classical advice to verify that untrusted quantum advice will work on most inputs.

12
Proof Idea: The classical advice to the HeurQMA/poly verifier will consist of training inputs x_1,…,x_m, where m = poly(n), together with whether x_i ∈ L for each i. Given a purported quantum advice state |ψ⟩, the verifier first checks that |ψ⟩ yields the right answers on the training inputs, and only then uses it on its real input x. By the Quantum Occam's Razor Theorem, if |ψ⟩ passes the initial test, then w.h.p. it works on most inputs. The technical part is to do the verification without destroying |ψ⟩.

Stronger Result: HeurBQP/qpoly = HeurYQP/poly. Here YQP ("Yoda Quantum Polynomial-Time") is like QMA ∩ coQMA, except that a single witness must work for all inputs of length n.

13
Open Problems

Computationally efficient learning algorithms

Experimental implementation!

Tighter bounds on the number of measurements

Does BQP/qpoly = YQP/poly?

Is D¹(f) = O(M Q¹(f))?
