Efficient Simulation of Quantum Mechanics Collapses the Polynomial Hierarchy Scott Aaronson Alex Arkhipov MIT (yes, really)

1 Efficient Simulation of Quantum Mechanics Collapses the Polynomial Hierarchy Scott Aaronson Alex Arkhipov MIT (yes, really)

2 In 1994, something big happened in our field, whose meaning is still debated today… Why exactly was Shor's algorithm important? Boosters: Because it means we'll build QCs! Skeptics: Because it means we won't build QCs! Me: For reasons having nothing to do with building QCs!

3 Shor's algorithm was a hardness result for one of the central computational problems of modern science: Quantum Simulation. Shor's Theorem: Quantum Simulation is not in BPP, unless Factoring is also. [Figure: Use of DoE supercomputers by area (from a talk by Alán Aspuru-Guzik)]

4 Today: A completely different kind of hardness result for simulating quantum mechanics. Advantages of our result: Based on P^#P = BPP^NP rather than Factoring ∈ BPP. Applies to an extremely weak subset of QC (non-interacting bosons, or linear optics with a single nonadaptive measurement at the end). Even gives evidence that QCs have capabilities outside PH. Disadvantages: Applies to distributional and relation problems, not to decision problems. Harder to convince a skeptic that your QC is really solving the relevant hard problem.

5 Let C be a quantum circuit, which acts on n qubits initialized to the all-0 state. Then C defines a distribution D_C over n-bit output strings. QSampling_ε: Given C as input, sample a string x from any probability distribution D such that ‖D − D_C‖ ≤ ε (in variation distance). Certainly this problem is BQP-hard.

6 Our Result: Suppose QSampling_0.01 is in probabilistic polytime. Then P^#P = BPP^NP (so in particular, PH collapses to the third level). More generally: Suppose QSampling_0.01 is in probabilistic polytime with an A oracle. Then P^#P ⊆ BPP^(NP^A). So QSampling can't even be in BPP^PH without collapsing PH! Extension to relational problems: Suppose FBQP = FBPP. Then P^#P = BPP^NP. In other words, QSampling is #P-hard under BPP^NP-reductions (provided the BPP^NP machine gets to pick the random bits used by the QSampling oracle).

7 Warmup: Why Exact QSampling Is Hard. Let f: {0,1}^n → {−1,1} be any efficiently computable function. Suppose we apply the following quantum circuit: Hadamard all n qubits of |0…0⟩, apply f as a phase, then Hadamard all n qubits again. Then the probability of observing the all-0 string is p = ((1/2^n) Σ_x f(x))^2.
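A brute-force sanity check of the formula above, simulating the H–f–H circuit as a statevector for a small n. The specific f here (−1 only on the all-ones string) is a hypothetical example; any efficiently computable f works.

```python
import itertools
import numpy as np

def f(x):
    # Hypothetical example function f: {0,1}^n -> {-1,1}:
    # -1 on the all-ones string, +1 elsewhere.
    return -1 if all(x) else 1

n = 3
N = 2 ** n

# Statevector simulation of the circuit: H^n, then f as a phase, then H^n.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

state = np.zeros(N)
state[0] = 1.0                                # |0...0>
state = Hn @ state                            # uniform superposition
phases = np.array([f(x) for x in itertools.product([0, 1], repeat=n)])
state = Hn @ (phases * state)                 # phase by f(x), then H^n again

p_circuit = abs(state[0]) ** 2                # Pr[observe the all-0 string]
p_formula = (phases.sum() / N) ** 2           # ((1/2^n) sum_x f(x))^2
```

For this f, Σ_x f(x) = 2^n − 2, so p = ((2^n − 2)/2^n)^2.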

8 Claim 1: p is #P-hard to estimate (up to a constant factor). (Related to my result that PostBQP = PP.) Proof: If we can estimate p, then we can also compute Σ_x f(x) using binary search and padding. Claim 2: Suppose QSampling was classically easy. Then we could estimate p in BPP^NP. Proof: Let M be a classical algorithm for QSampling, and let r be its randomness. Use approximate counting to estimate p = Pr_r[M(r) outputs the all-0 string]. Conclusion: Suppose QSampling_0 is easy. Then P^#P = BPP^NP.
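A toy illustration of the quantity Claim 2 estimates. A real BPP^NP machine would use Stockmeyer approximate counting over the sampler's random bits; here we just enumerate all of them. The sampler M below is a made-up stand-in, not the algorithm from the proof.

```python
import itertools

n = 3

def M(r):
    # Hypothetical classical sampling algorithm: maps its random bits r
    # to an n-bit output string (here, XORs of adjacent bits).
    return tuple(r[i] ^ r[(i + 1) % n] for i in range(n))

# Claim 2's target: p = Pr_r[M(r) outputs the all-0 string].
# Brute-force enumeration stands in for NP-oracle approximate counting.
count = sum(1 for r in itertools.product([0, 1], repeat=n)
            if M(r) == (0,) * n)
p = count / 2 ** n
```

This M outputs all zeros exactly when every bit of r is equal, so p = 2/2^n.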

9 So Why Aren't We Done? Ultimately, our goal is to show that Nature can actually perform computations that are hard to simulate classically, thereby overthrowing the Extended Church-Turing Thesis. But any real quantum system is subject to noise, meaning we can't actually sample from D_C, but only from some distribution D such that ‖D − D_C‖ ≤ ε. Could that be easy, even if sampling from D_C itself was hard? To rule that out, we need to show that even a fast classical algorithm for approximate QSampling would imply P^#P = BPP^NP.

10 The Problem: Suppose M knew that all we cared about was the final amplitude of |0…0⟩ (i.e., that's where we shoehorned a hard #P-complete instance). Then it could adversarially choose to be wrong about that one, exponentially-small amplitude, and still be a good sampler. So we need a quantum computation that more robustly encodes a #P-complete problem. Hmm… robust #P-complete problem… you mean like the Permanent? Indeed. But to bring the permanent into quantum computing, we need a brief detour into particle physics (!) We'll have to work harder… but as a bonus, we'll not only rule out approximate samplers, but approximate samplers for an extremely weak kind of QC.

11 Particle Physics in One Slide: There are two types of particles in Nature… BOSONS (force-carriers: photons, gluons…): swap two identical bosons and the quantum state |ψ⟩ is unchanged; bosons can pile on top of each other (and do: lasers, Bose-Einstein condensates…). FERMIONS (matter: quarks, electrons…): swap two identical fermions and the quantum state picks up a −1 phase; Pauli exclusion principle: no two fermions can occupy the same state.

12 Consider a system of n identical, non-interacting particles, evolving from time t_initial to t_final. Let a_ij ∈ C be the amplitude for a particle transitioning from initial state i to final state j, and let A = (a_ij). Then what's the total amplitude for the above process? Per(A) if the particles are bosons; Det(A) if they're fermions. All I can say is, the bosons got the harder job…
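A tiny numerical illustration of the permanent/determinant dichotomy, using a 50/50 beamsplitter as the single-particle transition matrix for n = 2 particles (my choice of example):

```python
import itertools
import numpy as np

def permanent(A):
    # Definition: Per(A) = sum over permutations sigma of prod_i a_{i, sigma(i)}.
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

# Single-particle amplitudes a_ij for a 50/50 beamsplitter.
A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

amp_bosons = permanent(A)        # total amplitude if the particles are bosons
amp_fermions = np.linalg.det(A)  # total amplitude if they are fermions
```

The bosonic amplitude vanishes (the two permutation terms cancel), which is the Hong-Ou-Mandel effect: two identical photons entering a 50/50 beamsplitter never exit in different modes.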

13 The BosonSampling Problem. Input: An m×n complex matrix A, whose n columns are orthonormal vectors in C^m (here m ≥ n^2). Let a configuration be a list S = (s_1,…,s_m) of nonnegative integers with s_1+…+s_m = n. Task: Sample each configuration S with probability p_S = |Per(A_S)|^2 / (s_1! ⋯ s_m!), where A_S is an n×n matrix containing s_i copies of the i-th row of A. Neat Fact: The p_S's sum to 1.
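A brute-force check of the Neat Fact for a tiny instance: enumerate every configuration S and verify the p_S's sum to 1 for a random column-orthonormal A (variable names are mine).

```python
import itertools
import math
import numpy as np

def permanent(A):
    # Per(A) = sum over permutations sigma of prod_i a_{i, sigma(i)}.
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

n, m = 2, 4
# Random m x n matrix with orthonormal columns: first n columns of the
# Q factor of a random complex Gaussian matrix.
rng = np.random.default_rng(0)
G = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, _ = np.linalg.qr(G)
A = Q[:, :n]

total = 0.0
# Enumerate all configurations S = (s_1,...,s_m) with s_1+...+s_m = n.
for S in itertools.product(range(n + 1), repeat=m):
    if sum(S) != n:
        continue
    rows = [i for i, s in enumerate(S) for _ in range(s)]  # s_i copies of row i
    A_S = A[rows, :]
    p_S = abs(permanent(A_S)) ** 2 / math.prod(math.factorial(s) for s in S)
    total += p_S
```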

14 Physical Interpretation: We're simulating a unitary evolution of n identical bosons, each of which can be in m = poly(n) modes. Initially, modes 1 to n have one boson each and modes n+1 to m are unoccupied. After applying the unitary, we measure the number of bosons in each mode.

15 Theorem (implicit in Lloyd 1996): BosonSampling reduces to QSampling. Proof Sketch: We need to simulate a system of n bosons on a conventional quantum computer. The basis states |s_1,…,s_m⟩ (s_1+…+s_m = n) just record the occupation number of each mode. Given any scattering matrix U ∈ C^{m×m} on the m modes, we can decompose U as a product U_1 ⋯ U_T, where T = O(m^2) and each U_t acts only on a pair of modes (i, j).
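A numerical sketch of the two-mode factorization U = U_1 ⋯ U_T, using Givens rotations (one standard way to get T = O(m^2); the function name and details are mine, not the talk's):

```python
import numpy as np

def two_mode_decomposition(U):
    # Factor an m x m unitary U as G_1 ... G_T D, where each Givens
    # rotation G_t mixes only two adjacent modes and D is a diagonal
    # matrix of phases; T <= m(m-1)/2 = O(m^2).
    m = U.shape[0]
    V = U.astype(complex).copy()
    gates = []
    for j in range(m):                       # zero column j below the diagonal
        for i in range(m - 1, j, -1):
            a, b = V[i - 1, j], V[i, j]
            r = np.hypot(abs(a), abs(b))
            if r < 1e-12:
                continue
            c, s = a.conjugate() / r, b.conjugate() / r
            G = np.eye(m, dtype=complex)
            G[i - 1, i - 1], G[i - 1, i] = c, s
            G[i, i - 1], G[i, i] = -s.conjugate(), c.conjugate()
            V = G @ V                        # now V[i, j] == 0
            gates.append(G)
    return gates, V   # V is upper triangular and unitary, hence diagonal

# Usage: decompose a random 4 x 4 unitary and rebuild it.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
gates, D = two_mode_decomposition(Q)
U_rebuilt = D
for G in reversed(gates):
    U_rebuilt = G.conj().T @ U_rebuilt       # U = G_1^dag ... G_T^dag D
```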

16 Theorem (Valiant 2001, Terhal-DiVincenzo 2002): FermionSampling ∈ BPP. In stark contrast, we prove the following: Suppose BosonSampling ∈ BPP. Then given an arbitrary matrix X ∈ C^{n×n}, one can approximate |Per(X)|^2 in BPP^NP. But I thought we could approximate the permanent in BPP anyway, by Jerrum-Sinclair-Vigoda! Yes, for nonnegative matrices. For general matrices, approximating |Per(X)|^2 is #P-complete.

17 Outline of Proof. Given a matrix X ∈ C^{n×n} with every entry satisfying |x_ij| ≤ 1, we want to approximate |Per(X)|^2 to within n!. This is already #P-complete (proof: standard padding tricks). Notice that |Per(X)|^2 is a degree-2n polynomial in the entries of X (as well as their complex conjugates). As in Lipton/LFKN, we can let V be some random curve in C^{n×n} that passes through X, and let Y_1,…,Y_k ∈ C^{n×n} be other matrices on V (where k ≈ n^2). If we can estimate |Per(Y_i)|^2 for most i, then we can estimate |Per(X)|^2 using noisy polynomial interpolation.
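A noiseless sketch of the interpolation idea, over a straight line Y(t) = X + tZ (the actual proof needs a fancier random curve and must tolerate noisy values, as the next slide explains). For real t, |Per(Y(t))|^2 is a real polynomial in t of degree ≤ 2n, so exact values at 2n+1 points determine its value at t = 0, i.e. at X:

```python
import itertools
import numpy as np

def permanent(A):
    # Per(A) = sum over permutations sigma of prod_i a_{i, sigma(i)}.
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

n = 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Evaluate |Per(X + t Z)|^2 at 2n+1 real points away from t = 0...
ts = np.linspace(0.5, 1.5, 2 * n + 1)
vals = [abs(permanent(X + t * Z)) ** 2 for t in ts]

# ...fit the degree-2n polynomial and extrapolate back to t = 0.
coeffs = np.polyfit(ts, vals, 2 * n)
estimate = np.polyval(coeffs, 0.0)
direct = abs(permanent(X)) ** 2
```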

18 But Linear Interpolation Doesn't Work! A random line through X ∈ C^{n×n} retains too much information about X. We need to redo Lipton/LFKN to work over the complex numbers rather than finite fields. Solution: Choose a matrix Y(t) of random trigonometric polynomials, such that Y(0) = X.

19 Questions: How do we sample Y(t) and Y_1,…,Y_k efficiently? How do we do the noisy polynomial interpolation? Lazy answer: Since we're a BPP^NP machine, just use rejection sampling! For sufficiently large L and t ≫ 0, each y_ij(t) will look like an independent Gaussian, uncorrelated with x_ij. Furthermore, Per(Y(t)) is a univariate polynomial in e^{2πit} of degree at most Ln.
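A numerical check of the degree claim, assuming entries of the concrete form y_ij(t) = Σ_{l=0}^{L} c_ijl e^{2πilt} (my illustrative choice; the paper's construction differs in details). Then Per(Y(t)) is a polynomial in z = e^{2πit} of degree ≤ Ln, so its coefficients are recoverable from Ln+1 samples at roots of unity via an inverse DFT:

```python
import itertools
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

n, L = 2, 2
rng = np.random.default_rng(0)
# Coefficient tensor: y_ij(t) = sum_l C[i, j, l] * z^l with z = e^{2 pi i t}.
C = rng.normal(size=(n, n, L + 1)) + 1j * rng.normal(size=(n, n, L + 1))

def Y(z):
    return sum(C[:, :, l] * z ** l for l in range(L + 1))

deg = L * n                                  # claimed degree bound
N = deg + 1
# Sample Per(Y) at N roots of unity (with the sign matching np.fft's
# convention, so ifft returns the polynomial coefficients exactly).
zs = np.exp(-2j * np.pi * np.arange(N) / N)
vals = np.array([permanent(Y(z)) for z in zs])
coeffs = np.fft.ifft(vals)

# The recovered degree-<=Ln polynomial reproduces Per(Y) at a fresh point.
z = np.exp(2j * np.pi * 0.3)
poly_val = sum(coeffs[k] * z ** k for k in range(N))
```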

20 The problem reduces to estimating |Per(Y)|^2, for a matrix Y ∈ C^{n×n} of (essentially) independent N(0,1) Gaussians. To do this, generate a random m×n column-orthonormal matrix A that contains Y/m as an n×n submatrix (i.e., such that A_S = Y/m for some random configuration S). Let M be our BPP algorithm for approximate BosonSampling, and let r be M's randomness. Use approximate counting (in BPP^NP) to estimate p_S = Pr_r[M outputs S]. Intuition: M has no way to determine which configuration S we care about. So if it's right about most configurations, then w.h.p. it's also right about ours.

21 Problem: Bosons like to pile on top of each other! Call a configuration S = (s_1,…,s_m) good if every s_i is 0 or 1 (i.e., there are no collisions between bosons), and bad otherwise. We assumed for simplicity that all configurations were good. But suppose bad configurations dominated. Then M could be wrong on all good configurations, yet still work. Furthermore, the bosonic birthday paradox is even worse than the classical one: the collision probability exceeds the ≈½ one gets with classical particles. Fortunately, we show that with n bosons and m ≥ kn^2 boxes, the probability of a collision is still at most (say) ½.
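A Monte Carlo sanity check of the m ∝ n^2 scaling, for classical particles only (the bosonic case is exactly what's hard to sample; the slide's bound for bosons is what the paper proves). With n balls thrown into m = kn^2 boxes, the classical birthday estimate gives collision probability ≈ 1 − e^{−n(n−1)/2m} ≈ 1/2k:

```python
import math
import random

def classical_collision_prob(n, m, trials=20000, seed=0):
    # Throw n distinguishable balls into m boxes uniformly at random,
    # and estimate the probability that some box gets 2 or more balls.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        boxes = [rng.randrange(m) for _ in range(n)]
        if len(set(boxes)) < n:
            hits += 1
    return hits / trials

n, k = 10, 2
m = k * n * n
p_sim = classical_collision_prob(n, m)
p_est = 1 - math.exp(-n * (n - 1) / (2 * m))   # birthday estimate, ~1/(2k)
```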

22 Experimental Prospects. What would it take to implement BosonSampling with photonics? Reliable phase-shifters, reliable beamsplitters, reliable single-photon sources, and reliable photodetectors; but crucially, no nonlinear optics or postselected measurements! Problem: The output will be a collection of n×n matrices B_1,…,B_k with unusually large permanents, but how would a classical skeptic verify that |Per(B_i)|^2 was large? Our Proposal: Concentrate on (say) n = 30 photons, so that classical simulation is difficult but not impossible.

23 Open Problems. Does our result relativize? (Conjecture: No.) Can we use BosonSampling to do universal QC? Can we use it to solve any decision problem outside BPP? Can you convince a skeptic (who isn't a BPP^NP machine) that your QC is indeed doing BosonSampling? Can we get unlikely complexity collapses from P = BQP or PromiseP = PromiseBQP? Would a nonuniform sampling algorithm (one that was different for each scattering matrix A) have unlikely complexity consequences? Is Permanent #P-complete for +1/−1 matrices (with no 0s)?

24 Conclusion. I like to say that we have three choices: either (1) the Extended Church-Turing Thesis is false, (2) textbook quantum mechanics is false, or (3) QCs can be efficiently simulated classically. For all intents and purposes…
