
1 Computational Complexity and Physics Scott Aaronson (MIT) New Insights Into Computational Intractability Oxford University, October 3, 2013

2 Title is too broad! So, I'll just give two tales from the trenches: -One about BosonSampling, one about black holes What you should take away: 1990s: Computational Complexity ← Quantum Physics (Shor & Grover). Today: Computational Complexity → Quantum Physics

3 BosonSampling (A.-Arkhipov 2011) A rudimentary type of quantum computing, involving only non-interacting photons. Classical counterpart: Galton's board. Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip

4 In general, we consider a network of beamsplitters, with n input modes (locations) and m >> n output modes. n identical photons enter, one per input mode. Assume for simplicity they all leave in different modes; then there are (m choose n) possibilities S. The beamsplitter network defines a column-orthonormal matrix A ∈ ℂ^{m×n} such that Pr[S] = |Per(A_S)|², where Per is the matrix permanent and A_S is the n×n submatrix of A corresponding to S. For simplicity, I'm ignoring outputs with two or more photons per mode
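As a concrete illustration of how a beamsplitter network yields such a matrix, here is a minimal Python sketch (mine, not from the talk; the network topology and angles are made up for the example):

    import numpy as np

    def beamsplitter(m, i, j, theta):
        # m x m unitary acting as a 2x2 rotation on modes i and j,
        # and as the identity on every other mode
        U = np.eye(m, dtype=complex)
        c, s = np.cos(theta), np.sin(theta)
        U[i, i], U[i, j] = c, s
        U[j, i], U[j, j] = s, -c
        return U

    m, n = 6, 2
    # an arbitrary small network: three beamsplitters in sequence
    U = (beamsplitter(m, 0, 1, np.pi / 4)
         @ beamsplitter(m, 1, 2, np.pi / 3)
         @ beamsplitter(m, 2, 3, np.pi / 5))
    A = U[:, :n]   # the n columns fed by the input photons
    print(np.allclose(A.conj().T @ A, np.eye(n)))   # True: column-orthonormal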

5 Example: for the Hong-Ou-Mandel experiment, A = (1/√2) [[1, 1], [1, -1]], and Per(A) = 0: the amplitude for the two photons to exit in different modes vanishes. In general, an n×n complex permanent is a sum of n! terms, almost all of which cancel. How hard is it to estimate the tiny residue left over? Answer: #P-complete, even for constant-factor approximation (Contrast with nonnegative permanents!)
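To make the cancellation concrete, here is a minimal sketch (not from the talk): it computes the permanent naively as a sum over all n! permutations and checks that the Hong-Ou-Mandel coincidence amplitude vanishes.

    from itertools import permutations
    from math import prod, sqrt

    def permanent(M):
        # Per(M) = sum over permutations p of prod_i M[i][p(i)] -- n! terms
        n = len(M)
        return sum(prod(M[i][p[i]] for i in range(n))
                   for p in permutations(range(n)))

    # 50/50 beamsplitter matrix from the Hong-Ou-Mandel experiment
    A = [[1 / sqrt(2), 1 / sqrt(2)],
         [1 / sqrt(2), -1 / sqrt(2)]]

    print(permanent(A))   # 0.0: the two terms cancel exactly -- the HOM dip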

6 So, Can We Use Quantum Optics to Solve a #P-Complete Problem? That sounds way too good to be true… Explanation: If X is sub-unitary, then |Per(X)|² will usually be exponentially small. So to get a reasonable estimate of |Per(X)|² for a given X, we'd generally need to repeat the optical experiment exponentially many times
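A quick numeric sanity check of that claim (my sketch, reusing permanent() from above): the probability of a typical collision-free outcome shrinks roughly like n!/m^n, i.e., exponentially fast.

    import numpy as np

    rng = np.random.default_rng(1)
    for n in range(2, 6):
        m = n * n
        # Haar-ish column-orthonormal A via QR of a complex Gaussian matrix
        G = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        A, _ = np.linalg.qr(G)
        S = np.sort(rng.choice(m, size=n, replace=False))
        p = abs(permanent(A[S, :])) ** 2
        print(n, p)   # decays roughly like n!/m^n -- exponentially small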

7 Better idea: Given A ∈ ℂ^{m×n} as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from: the one defined by Pr[S] = |Per(A_S)|². Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP. Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it's possible to estimate |Per(X)|², with high probability over a Gaussian random matrix X. Upshot: Compared to (say) Shor's factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard
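For tiny n, one can sample D_A exactly by brute force. A sketch (my code, reusing permanent() from above), restricted to collision-free outputs as in the slides:

    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)

    def haar_column_orthonormal(m, n):
        # QR of a complex Gaussian matrix gives Haar-random orthonormal
        # columns (up to phase conventions that don't matter here)
        G = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        Q, _ = np.linalg.qr(G)
        return Q

    n, m = 3, 12
    A = haar_column_orthonormal(m, n)
    outputs = list(combinations(range(m), n))       # collision-free outputs S
    probs = np.array([abs(permanent(A[list(S), :])) ** 2 for S in outputs])
    probs /= probs.sum()   # renormalize, since we drop multi-photon outputs
    S = outputs[rng.choice(len(outputs), p=probs)]
    print("sampled output modes:", S)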

8 Experiments (# of experiments > # of photons!) Last year, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents

9 Goal (in our view): Scale to 10-30 photons. Don't want to scale much beyond that, both because (1) you probably can't without fault-tolerance, and (2) a classical computer probably couldn't even verify the results! Obvious Challenges for Scaling Up: -Reliable single-photon sources (optical multiplexing?) -Minimizing losses -Getting high probability of n-photon coincidence Theoretical Challenge: Argue that, even with photon losses and messier initial states, you're still solving a classically-intractable sampling problem

10 Recent Criticisms by Gogolin et al. (arXiv:1306.3995): Suppose you ignore which actual photodetectors light up, and count only the number of times each output configuration occurs. In that case, the BosonSampling distribution D_A is exponentially close to the uniform distribution U. Response: Why would you ignore which detectors light up?? The output of almost any algorithm is also gobbledygook if you ignore the order of the output bits…

11 Recent Criticisms by Gogolin et al. (arXiv:1306.3995): OK, so maybe D_A isn't close to uniform. Still, the very same arguments we gave for why polynomial-time classical algorithms can't sample D_A suggest that they can't even distinguish D_A from U! Response: That's why we said to focus on 10-30 photons: a range where a classical computer can verify a BosonSampling device's output, but the BosonSampling device might be faster! (And 10-30 photons is probably the best you can do anyway, without quantum fault-tolerance)

12 More Decisive Responses (A.-Arkhipov, arXiv:1309.7460) Theorem: Let A ∈ ℂ^{m×n} be a Haar-random BosonSampling matrix, where m ≥ n^5.1/δ. Then with 1−O(δ) probability over A, the BosonSampling distribution D_A has Ω(1) variation distance from the uniform distribution U. [Figure: histogram of the (normalized) probabilities under D_A vs. under U.] This is necessary, though not sufficient, for approximately sampling D_A to be hard
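Even at toy scale, far below the theorem's m ≥ n^5.1/δ regime, the nonuniformity is visible; a sketch reusing permanent() and haar_column_orthonormal() from the earlier blocks:

    from itertools import combinations
    import numpy as np

    n, m = 3, 12
    A = haar_column_orthonormal(m, n)
    outputs = list(combinations(range(m), n))
    probs = np.array([abs(permanent(A[list(S), :])) ** 2 for S in outputs])
    probs /= probs.sum()
    tv = 0.5 * np.abs(probs - 1.0 / len(outputs)).sum()
    print("variation distance from uniform:", tv)   # visibly bounded away from 0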

13 Theorem (A. 2013): Let A ∈ ℂ^{m×n} be Haar-random, where m >> n. Then there is a classical polynomial-time algorithm C(A) that distinguishes D_A from U (with high probability over A and constant bias, and using only O(1) samples). Strategy: Let A_S be the n×n submatrix of A corresponding to output S. Let P be the product of squared 2-norms of A_S's rows. If P > E[P], then guess S was drawn from D_A; otherwise guess S was drawn from U. [Figure: histograms of P under the uniform distribution (a lognormal random variable) and under a BosonSampling distribution; diagram of A and its submatrix A_S.]
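A sketch of that strategy in code (mine, not from the paper); the threshold E[P] is taken as an input rather than computed exactly:

    import numpy as np

    def row_norm_product(A, S):
        # P = product of the squared 2-norms of the rows of A_S
        A_S = A[list(S), :]
        return float(np.prod(np.sum(np.abs(A_S) ** 2, axis=1)))

    def classify(A, S, mean_P_uniform):
        # outputs favored by D_A have larger |Per(A_S)|^2, which correlates
        # with larger row norms, so P skews above its uniform-case mean
        return "D_A" if row_norm_product(A, S) > mean_P_uniform else "U"

Here mean_P_uniform can be estimated by Monte Carlo over uniformly random S: each entry of a column-orthonormal A has squared magnitude averaging 1/m, so E[P] under U is roughly (n/m)^n.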

14 Using Quantum Optics to Prove that the Permanent is #P-Complete [A., Proc. Roy. Soc. 2011] Valiant showed that the permanent is #P-complete, but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts: (1) n-photon amplitudes correspond to n×n permanents (2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001] (3) Quantum computations can encode #P-complete quantities in their amplitudes

15 Black Holes and Computational Complexity?? YES!! Amazing connection made this year by Harlow & Hayden. But first, we need to review 40 years of black hole history. [Diagram: inclusions among the complexity classes BPP, BQP, SZK, QSZK, AM, QAM.]

16 Bekenstein 1972, Hawking 1975: Black holes have entropy and temperature! They emit radiation. The Information Loss Problem: Calculations suggest that Hawking radiation is thermal, uncorrelated with whatever fell in. So, is infalling information lost forever? Would violate the unitarity / reversibility of QM. OK then, assume the information somehow gets out! The Xeroxing Problem: How could the same qubit |ψ⟩ fall inexorably toward the singularity, and emerge in Hawking radiation? Would violate the No-Cloning Theorem. Black Hole Complementarity (Susskind, 't Hooft): An external observer can describe everything unitarily without including the interior at all! Interior should be seen as just a scrambled re-encoding of the exterior degrees of freedom

17 The Firewall Paradox (AMPS 2012) B = Interior of Old Black Hole, R = Faraway Hawking Radiation, H = Just-Emitted Hawking Radiation. [Figure: H is near-maximally entangled with B, and also near-maximally entangled with R.] Violates monogamy of entanglement! The same qubit can't be maximally entangled with 2 things
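For a quantitative feel for monogamy, a small numeric check (my sketch, not from the talk): in a three-qubit pure state where H is maximally entangled with B, its mutual information with R is forced to zero.

    import numpy as np

    def vn_entropy(rho):
        # von Neumann entropy in bits
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-(evals * np.log2(evals)).sum())

    def reduced(rho, keep, nqubits=3):
        # partial trace: keep the qubits in `keep`, trace out the rest;
        # rho has one axis per ket qubit followed by one per bra qubit
        out = rho
        for q in reversed(range(nqubits)):
            if q not in keep:
                out = np.trace(out, axis1=q, axis2=q + out.ndim // 2)
        d = 2 ** len(keep)
        return out.reshape(d, d)

    # qubit order (B, H, R): EPR pair on (B, H), R in |0>
    epr = np.array([1, 0, 0, 1]) / np.sqrt(2)
    psi = np.kron(epr, [1, 0])
    rho = np.outer(psi, psi.conj()).reshape((2,) * 6)

    def mutual_info(q1, q2):
        return (vn_entropy(reduced(rho, {q1})) + vn_entropy(reduced(rho, {q2}))
                - vn_entropy(reduced(rho, {q1, q2})))

    print(mutual_info(1, 0))   # I(H:B) = 2 bits -- maximal entanglement
    print(mutual_info(1, 2))   # I(H:R) = 0 bits -- nothing left over for R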

18 Harlow-Hayden 2013 (arXiv:1301.4504): Striking argument that Alice's decoding task would require exponential time. Complexity theory to the rescue of quantum field theory?? The HH decoding problem: Given an n-qubit pure state |ψ⟩_BHR produced by a known, poly-size quantum circuit. Promised that, by acting only on R (the Hawking radiation), it's possible to distill an EPR pair (|00⟩+|11⟩)/√2 between R and B (the black hole interior). Distill such a pair, by applying a unitary transformation U_R to the qubits in R. Theorem (HH): This decoding problem is QSZK-hard. So presumably intractable, unless QSZK = BQP (which we have oracle evidence against)!

19 Improvement (A. 2013): Suppose there are poly-size quantum circuits for the HH decoding problem. Then any one-way function f: {0,1}^n → {0,1}^p(n) can be inverted in quantum polynomial time. Proof sketch: Assume for simplicity that f is injective, and consider a state |ψ⟩_BHR that encodes the pairs (x, f(x)) in superposition. Suppose applying U_R to R decodes an EPR pair between R and B. Then there are families of states {|φ_x⟩}_x and {|ψ_x⟩}_x describing the two halves of the decoded state. Furthermore, to get perfect entanglement, we need |φ_x⟩ = |ψ_x⟩ for all x! So from U_R, we can extract unitaries V and W whose composition inverts f. Generalizing to arbitrary OWFs requires techniques similar to those used in the HILL (Håstad-Impagliazzo-Levin-Luby) theorem!

20 Other Exciting Recent Complexity/Physics Connections Quantum PCP Conjecture: Is approximate quantum MAX-k-SAT hard for quantum NP? If so, proving it will require the construction of exotic condensed-matter systems, exhibiting room-temperature entanglement! Beautiful recent survey by Aharonov et al. (arXiv:1309.7495). Using Entanglement to Steer Quantum Systems: Can use Bell inequality violation for guaranteed randomness expansion (Vazirani-Vidick), blind and authenticated QC with a classical polynomial-time verifier, QMIP = MIP* (Reichardt-Unger-Vazirani)… I hope for, and expect, many more such striking connections! Indeed, making these connections possible might be the most important result of quantum computing research, even if useful QCs eventually get built!

