
**Slide 1: Efficient Simulation of Quantum Mechanics Collapses the Polynomial Hierarchy (yes, really)**

Scott Aaronson, Alex Arkhipov (MIT)

**Slide 2: In 1994, something big happened in our field, whose meaning is still debated today…**

Why exactly was Shor's algorithm important?
- Boosters: Because it means we'll build QCs!
- Skeptics: Because it means we won't build QCs!
- Me: For reasons having nothing to do with building QCs!

**Slide 3**

Shor's algorithm was a hardness result for one of the central computational problems of modern science: Quantum Simulation.

(Figure: use of DoE supercomputers by area, from a talk by Alán Aspuru-Guzik)

Shor's Theorem: Quantum Simulation is not in BPP, unless Factoring is also.

**Slide 4: Today: A completely different kind of hardness result for simulating quantum mechanics**

Advantages of our result:
- Based on P^{#P} ⊄ BPP^{NP} rather than Factoring ∉ BPP
- Applies to an extremely weak subset of QC ("non-interacting bosons," or linear optics with a single nonadaptive measurement at the end)
- Even gives evidence that QCs have capabilities outside PH

Disadvantages:
- Applies to distributional and relation problems, not to decision problems
- Harder to convince a skeptic that your QC is really solving the relevant hard problem

**Slide 5: Let C be a quantum circuit, which acts on n qubits initialized to the all-0 state |0…0⟩**

C defines a distribution D_C over n-bit output strings.

QSampling_ε: Given C as input, sample a string x from any probability distribution D such that ‖D − D_C‖ ≤ ε (in total variation distance).

Certainly this problem is BQP-hard.
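To make the definition of D_C concrete, here is a minimal sketch of a brute-force exact sampler. The "circuit" is just a single Hadamard gate supplied as an explicit unitary matrix (an illustrative assumption; a real instance would be a product of many local gates, and the whole point of the talk is that this brute-force approach does not scale):

```python
import random

# Toy "circuit" C: a single Hadamard gate, given as its unitary matrix.
H = [[1 / 2**0.5, 1 / 2**0.5],
     [1 / 2**0.5, -1 / 2**0.5]]

def distribution_DC(U):
    """D_C assigns output string x the probability |<x|C|0...0>|^2."""
    return [abs(U[x][0]) ** 2 for x in range(len(U))]

def qsample(U, rng=random.random):
    """Draw one output-string index from D_C by inverse-CDF sampling."""
    probs = distribution_DC(U)
    r, acc = rng(), 0.0
    for x, p in enumerate(probs):
        acc += p
        if r < acc:
            return x
    return len(probs) - 1

probs = distribution_DC(H)
# For a Hadamard acting on |0>, D_C is uniform over {0, 1}.
```

The sampler is exact (ε = 0) because it enumerates the whole state space, which takes 2^n time; QSampling asks whether this can be done in polynomial time.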

**Slide 6: Our Result**

Suppose QSampling_{0.01} is in probabilistic polynomial time. Then P^{#P} = BPP^{NP} (so in particular, PH collapses to the third level).

More generally: Suppose QSampling_{0.01} is in probabilistic polynomial time with an A oracle. Then P^{#P} ⊆ BPP^{NP^A}. So QSampling can't even be in BPP^{PH} without collapsing PH!

Extension to relational problems: Suppose FBQP = FBPP. Then P^{#P} = BPP^{NP}.

"QSampling is #P-hard under BPP^{NP}-reductions" (provided the BPP^{NP} machine gets to pick the random bits used by the QSampling oracle).

**Slide 7: Warmup: Why Exact QSampling Is Hard**

Let f : {0,1}^n → {−1, 1} be any efficiently computable function. Suppose we apply the following quantum circuit: starting from |0…0⟩, apply H^{⊗n}, then f as a phase, then H^{⊗n} again.

Then the probability of observing the all-0 string is

p = ( 2^{−n} Σ_{x∈{0,1}^n} f(x) )².
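The warmup circuit is small enough to check directly. The sketch below simulates H^{⊗n} · (f-phase) · H^{⊗n} on |0…0⟩ for n = 3 by explicit state-vector arithmetic, with an arbitrarily chosen ±1 function f (my own illustrative choice, not from the talk), and confirms that the probability of the all-0 outcome matches the closed-form expression:

```python
n = 3
N = 2 ** n

def f(x):
    # An arbitrary efficiently computable {-1, +1} function (illustrative).
    return -1 if x in (0, 3, 5) else 1

def hadamard_all(state):
    """Apply H^{tensor n}: new[y] = (1/sqrt(N)) * sum_x (-1)^{x.y} old[x]."""
    out = []
    for y in range(N):
        amp = 0.0
        for x in range(N):
            amp += (-1) ** bin(x & y).count("1") * state[x]
        out.append(amp / N ** 0.5)
    return out

state = [0.0] * N
state[0] = 1.0                                   # |0...0>
state = hadamard_all(state)                       # H^{tensor n}
state = [f(x) * a for x, a in enumerate(state)]   # diagonal f-phase
state = hadamard_all(state)                       # H^{tensor n} again

p_circuit = abs(state[0]) ** 2                    # Pr[observe 0...0]
p_formula = (sum(f(x) for x in range(N)) / N) ** 2
# Here sum_x f(x) = 2, so both probabilities equal (2/8)^2 = 1/16.
```

Estimating p therefore amounts to estimating Σ_x f(x), which is the #P-hard quantity used in the next slide.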

**Slide 8**

Claim 1: p is #P-hard to estimate (even up to a constant factor).
- Related to my result that PostBQP = PP.
- Proof: If we can estimate p, then we can also compute Σ_x f(x), using binary search and padding.

Claim 2: Suppose QSampling were classically easy. Then we could estimate p in BPP^{NP}.
- Proof: Let M be a classical algorithm for QSampling, and let r be its randomness. Use approximate counting to estimate Pr_r[M outputs the all-0 string].

Conclusion: Suppose QSampling_0 is easy. Then P^{#P} = BPP^{NP}.

**Slide 9: So Why Aren't We Done?**

Ultimately, our goal is to show that Nature can actually perform computations that are hard to simulate classically, thereby overthrowing the Extended Church-Turing Thesis. But any real quantum system is subject to noise, meaning we can't actually sample from D_C, but only from some distribution D such that ‖D − D_C‖ ≤ ε. Could that be easy, even if sampling from D_C itself was hard? To rule that out, we need to show that even a fast classical algorithm for *approximate* QSampling would imply P^{#P} = BPP^{NP}.

**Slide 10: The Problem**

Suppose M "knew" that all we cared about was the final amplitude of |0…0⟩ (i.e., that's where we shoehorned a hard #P-complete instance). Then it could adversarially choose to be wrong about that one, exponentially small amplitude, and still be a good sampler. So we need a quantum computation that more "robustly" encodes a #P-complete problem.

"Hmm … robust #P-complete problem … you mean like the Permanent?"

Indeed. But to bring the permanent into quantum computing, we need a brief detour into particle physics (!). We'll have to work harder, but as a bonus, we'll not only rule out approximate samplers, but approximate samplers for an extremely weak kind of QC.

**Slide 11: Particle Physics in One Slide**

There are two types of particles in Nature…

BOSONS (force-carriers: photons, gluons…)
- Swap two identical bosons → the quantum state |ψ⟩ is unchanged
- Bosons can "pile on top of each other" (and do: lasers, Bose-Einstein condensates…)

FERMIONS (matter: quarks, electrons…)
- Swap two identical fermions → the quantum state picks up a −1 phase
- Pauli exclusion principle: no two fermions can occupy the same state

**Slide 12: Consider a system of n identical, non-interacting particles…**

Let a_{ij} ∈ ℂ be the amplitude for transitioning from initial state i to final state j, and let A = (a_{ij}) be the resulting n×n matrix.

(Figure: n particles evolving from t_initial to t_final. Caption: "All I can say is, the bosons got the harder job…")

Then what's the total amplitude for the above process?
- Per(A), if the particles are bosons
- Det(A), if they're fermions
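The two amplitude formulas differ only in the sign attached to each permutation. A minimal sketch (pure Python, via brute-force enumeration of permutations, so exponential by design) makes the contrast explicit on a 2×2 example matrix of my own choosing:

```python
from itertools import permutations

def per(A):
    """Permanent: sum over permutations with no sign (bosonic amplitude)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

def det(A):
    """Determinant: the same sum, weighted by the permutation's sign
    (fermionic amplitude)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        sign = 1
        for i in range(n):           # sign via counting inversions
            for j in range(i + 1, n):
                if sigma[i] > sigma[j]:
                    sign = -sign
        prod = sign
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

A = [[1, 2], [3, 4]]
# per(A) = 1*4 + 2*3 = 10 ; det(A) = 1*4 - 2*3 = -2
```

That one sign is the whole story: determinants admit Gaussian elimination and hence polynomial-time algorithms, while the permanent does not, which is why "the bosons got the harder job."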

**Slide 13: The BosonSampling Problem**

Input: An m×n complex matrix A, whose n columns are orthonormal vectors in ℂ^m (here m ≥ n²).

Let a configuration be a list S = (s_1, …, s_m) of nonnegative integers with s_1 + … + s_m = n.

Task: Sample each configuration S with probability

p_S = |Per(A_S)|² / (s_1! ⋯ s_m!),

where A_S is an n×n matrix containing s_i copies of the i-th row of A.

Neat Fact: The p_S's sum to 1.

**Slide 14**

Physical interpretation: We're simulating a unitary evolution of n identical bosons, each of which can be in m = poly(n) "modes." Initially, modes 1 to n have one boson each, and modes n+1 to m are unoccupied. After applying the unitary, we measure the number of bosons in each mode.

**Slide 15: Theorem (implicit in Lloyd 1996): BosonSampling reduces to QSampling**

Proof sketch: We need to simulate a system of n bosons on a conventional quantum computer. The basis states |s_1, …, s_m⟩ (with s_1 + … + s_m = n) just record the occupation number of each mode. Given any "scattering matrix" U ∈ ℂ^{m×m} on the m modes, we can decompose U as a product U_1 ⋯ U_T, where T = O(m²) and each U_t acts only on 2-dimensional subspaces of the form span{|i⟩, |j⟩} for some pair of modes (i, j).
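The decomposition into O(m²) two-mode operations can be illustrated with Givens rotations (one standard way to realize it; the slide does not commit to a specific construction). Each 2×2 rotation mixes only a pair of adjacent modes, and m(m−1)/2 of them reduce a unitary to a diagonal of phases. A sketch on the 3×3 discrete Fourier transform matrix:

```python
import cmath

def givens_diagonalize(U):
    """Zero the below-diagonal entries of a unitary using 2x2 rotations,
    each acting only on a pair of adjacent rows (modes).  Returns the
    list of (row pair, 2x2 block) operations and the final matrix."""
    m = len(U)
    U = [row[:] for row in U]
    ops = []
    for c in range(m):
        for r in range(m - 1, c, -1):
            x, y = U[r - 1][c], U[r][c]
            nrm = (abs(x) ** 2 + abs(y) ** 2) ** 0.5
            if nrm < 1e-12:
                continue
            g = [[x.conjugate() / nrm, y.conjugate() / nrm],
                 [-y / nrm, x / nrm]]          # unitary; sends (x,y) -> (nrm,0)
            for col in range(m):
                a, b = U[r - 1][col], U[r][col]
                U[r - 1][col] = g[0][0] * a + g[0][1] * b
                U[r][col] = g[1][0] * a + g[1][1] * b
            ops.append(((r - 1, r), g))
    return ops, U

# 3x3 discrete Fourier transform matrix (unitary).
w = cmath.exp(2j * cmath.pi / 3)
F = [[w ** (i * j) / 3 ** 0.5 for j in range(3)] for i in range(3)]
ops, D = givens_diagonalize(F)
# D is diagonal with unit-modulus entries; at most m(m-1)/2 = 3 rotations used.
```

Since a triangular unitary is automatically diagonal, the residual D is just phases, so F equals a product of two-mode unitaries and single-mode phases, matching the T = O(m²) count in the proof sketch.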

**Slide 16: Theorem (Valiant 2001, Terhal-DiVincenzo 2002): FermionSampling ∈ BPP**

In stark contrast, we prove the following: Suppose BosonSampling ∈ BPP. Then given an arbitrary matrix X ∈ ℂ^{n×n}, one can approximate |Per(X)|² in BPP^{NP}.

"But I thought we could approximate the permanent in BPP anyway, by Jerrum-Sinclair-Vigoda!"

Yes, for nonnegative matrices. For general matrices, approximating |Per(X)|² is #P-complete.
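For scale: the best known *exact* algorithm for the permanent, Ryser's formula, runs in O(2^n · n²) time, so the hardness at issue here is about approximation, not exact computation. A minimal sketch (the example matrix is my own):

```python
def per_ryser(A):
    """Ryser's formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^{|S|} * prod_i (sum_{j in S} a_ij).
    Exact, in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):         # each mask encodes a column subset
        prod = 1
        for i in range(n):
            row_sum = 0
            for j in range(n):
                if mask >> j & 1:
                    row_sum += A[i][j]
            prod *= row_sum
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total

X = [[1, 2], [3, 4]]
# per_ryser(X) = 1*4 + 2*3 = 10
```

Exponential exact computation is why the experimental proposal later in the talk targets n around 30 photons: large enough that sampling is painful classically, small enough that permanents can still be checked.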

**Slide 17: Outline of Proof**

Given a matrix X ∈ ℂ^{n×n} with every entry satisfying |x_ij| ≤ 1, we want to approximate |Per(X)|² to within an additive error of order n!. This is already #P-complete (proof: standard padding tricks).

Notice that |Per(X)|² is a degree-2n polynomial in the entries of X (as well as their complex conjugates).

As in Lipton/LFKN, we can let V be some random curve in ℂ^{n×n} that passes through X, and let Y_1, …, Y_k ∈ ℂ^{n×n} be other matrices on V (where k ≈ n²). If we can estimate |Per(Y_i)|² for most i, then we can estimate |Per(X)|² using noisy polynomial interpolation.
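The low-degree structure driving the interpolation step can be seen directly. The sketch below (my own illustration) uses the simplest curve, a line Y(t) = X + tR through a random R; the next slide explains why a line is *not* good enough for the actual security argument, but it suffices to show that n+1 permanent values along the curve determine Per(X) exactly:

```python
import random
from fractions import Fraction
from itertools import permutations

def per(M):
    """Permanent by brute force."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= M[i][sigma[i]]
        total += p
    return total

rng = random.Random(1)
n = 3
X = [[Fraction(rng.randint(-3, 3)) for _ in range(n)] for _ in range(n)]
R = [[Fraction(rng.randint(-3, 3)) for _ in range(n)] for _ in range(n)]

def Y(t):
    """A line through X: Y(0) = X."""
    return [[X[i][j] + t * R[i][j] for j in range(n)] for i in range(n)]

# Per(Y(t)) is a degree-n polynomial in t, so n+1 evaluations pin it down.
ts = [Fraction(k) for k in range(1, n + 2)]
vals = [per(Y(t)) for t in ts]

def lagrange_at_zero(ts, vals):
    """Lagrange-interpolate the sampled polynomial and evaluate at t = 0."""
    total = Fraction(0)
    for i, ti in enumerate(ts):
        w = Fraction(1)
        for j, tj in enumerate(ts):
            if j != i:
                w *= (0 - tj) / (ti - tj)
        total += vals[i] * w
    return total

recovered = lagrange_at_zero(ts, vals)
# recovered == per(X): the value at X is reconstructed from points on the curve.
```

In the real proof the interpolation must additionally tolerate noise in most of the sampled values, which is where the BPP^{NP} machinery and the trigonometric curves of the next slide come in.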

**Slide 18: But Linear Interpolation Doesn't Work!**

A random line through X ∈ ℂ^{n×n} "retains too much information" about X. We need to redo Lipton/LFKN to work over the complex numbers rather than finite fields.

Solution: Choose a matrix Y(t) of random trigonometric polynomials, such that Y(0) = X.

**Slide 19**

For sufficiently large L and t ≫ 0, each y_ij(t) will look like an independent Gaussian, uncorrelated with x_ij. Furthermore, Per(Y(t)) is a univariate polynomial in e^{2πit} of degree at most Ln.

Questions: How do we sample Y(t) and Y_1, …, Y_k efficiently? How do we do the noisy polynomial interpolation?

Lazy answer: Since we're a BPP^{NP} machine, just use rejection sampling!

**Slide 20**

The problem reduces to estimating |Per(Y)|², for a matrix Y ∈ ℂ^{n×n} of (essentially) independent N(0,1) Gaussians. To do this, generate a random m×n column-orthonormal matrix A that contains Y/√m as an n×n submatrix (i.e., such that A_S = Y/√m for some random configuration S).

Let M be our BPP algorithm for approximate BosonSampling, and let r be M's randomness. Use approximate counting (in BPP^{NP}) to estimate Pr_r[M outputs S].

Intuition: M has no way to determine which configuration S we care about. So if it's right about most configurations, then w.h.p. we must have Pr_r[M outputs S] ≈ p_S.

**Slide 21: Problem: Bosons like to pile on top of each other!**

Call a configuration S = (s_1, …, s_m) *good* if every s_i is 0 or 1 (i.e., there are no collisions between bosons), and *bad* otherwise. We assumed for simplicity that all configurations were good. But suppose bad configurations dominated. Then M could be wrong on all good configurations, yet still "work."

Furthermore, the "bosonic birthday paradox" is even worse than the classical one: identical bosons collide more often than the classical collision probability of ½.

Fortunately, we show that with n bosons and m ≥ kn² boxes, the probability of a collision is still at most (say) ½.
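For intuition on why m ≈ n² boxes keeps collisions rare, here is the *classical* side of the comparison (the bosonic case needs the quantum analysis from the slide; this sketch, with parameters of my own choosing, only shows the classical birthday calculation):

```python
def classical_collision_prob(n, m):
    """Exact probability that n uniformly random balls thrown into
    m boxes produce at least one collision:
    1 - prod_{i=0}^{n-1} (1 - i/m)."""
    p_no_collision = 1.0
    for i in range(n):
        p_no_collision *= (m - i) / m
    return 1.0 - p_no_collision

n = 10
k = 1
m = k * n * n          # m = n^2 boxes
p = classical_collision_prob(n, m)
# Union bound: p <= n*(n-1)/(2m) = 0.45 here, so well below 1 but not tiny.
```

Increasing k (i.e., taking m = kn² for larger k) pushes the union bound n(n−1)/(2m) ≤ 1/(2k) down, which is the classical analogue of the ½ collision bound the slide claims for bosons.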

**Slide 22: Experimental Prospects**

What would it take to implement BosonSampling with photonics?
- Reliable phase-shifters
- Reliable beamsplitters
- Reliable single-photon sources
- Reliable photodetectors

But crucially, no nonlinear optics or postselected measurements!

Problem: The output will be a collection of n×n matrices B_1, …, B_k with "unusually large permanents." But how would a classical skeptic verify that |Per(B_i)|² was large?

Our proposal: Concentrate on (say) n = 30 photons, so that classical simulation is difficult but not impossible.

**Slide 23: Open Problems**

- Does our result relativize? (Conjecture: No)
- Can we use BosonSampling to do universal QC?
- Can we use it to solve any decision problem outside BPP?
- Can you convince a skeptic (who isn't a BPP^{NP} machine) that your QC is indeed doing BosonSampling?
- Can we get unlikely complexity collapses from P = BQP or PromiseP = PromiseBQP?
- Would a nonuniform sampling algorithm (one that was different for each scattering matrix A) have unlikely complexity consequences?
- Is the Permanent #P-complete for +1/−1 matrices (with no 0's)?

**Slide 24: Conclusion**

For all intents and purposes, I like to say that we have three choices: either
1. The Extended Church-Turing Thesis is false,
2. Textbook quantum mechanics is false, or
3. QCs can be efficiently simulated classically.
