
1
BosonSampling. Scott Aaronson (MIT). Talk at BBN, October 30, 2013.

2
The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine. Shor's Theorem: QUANTUM SIMULATION has no efficient classical algorithm, unless FACTORING does also.

3
So the ECT is false … what more evidence could anyone want? Building a QC able to factor large numbers is damn hard! After 16 years, no fundamental obstacle has been found, but who knows? FACTORING might have a fast classical algorithm! At any rate, it's an extremely special problem. Wouldn't it be great to show that if quantum computers can be simulated classically, then (say) P=NP? Can't we meet the physicists halfway, and show computational hardness for quantum systems closer to what they actually work with now?

4
BosonSampling (A.-Arkhipov 2011): A rudimentary type of quantum computing, involving only non-interacting photons. Classical counterpart: Galton's Board. Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip.

5
In general, we consider a network of beamsplitters, with n input modes (locations) and m >> n output modes. n identical photons enter, one per input mode. Assume for simplicity they all leave in different modes; there are (m choose n) possibilities S. The beamsplitter network defines a column-orthonormal matrix A ∈ C^(m×n), such that Pr[S] = |Per(A_S)|², where Per is the matrix permanent and A_S is the n×n submatrix of A corresponding to S.

6
Example: For the Hong-Ou-Mandel experiment, A is the 50/50 beamsplitter matrix (1/√2)[[1, 1], [1, -1]], whose permanent is (1/√2)(-1/√2) + (1/√2)(1/√2) = 0: the two photons never exit in different modes. In general, an n×n complex permanent is a sum of n! terms, almost all of which cancel. How hard is it to estimate the tiny residue left over? Answer: #P-complete, even for constant-factor approximation. (Contrast with nonnegative permanents!)
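For concreteness, the permanent can be computed exactly via Ryser's formula in O(2^n · n²) time rather than summing all n! terms. A minimal sketch (the `permanent` helper and the HOM check below are our illustration, not part of the talk):

```python
from itertools import combinations
import math

def permanent(A):
    """Permanent via Ryser's inclusion-exclusion formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} A[i][j]).
    O(2^n * n^2) time -- still exponential, as the #P-hardness suggests."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Hong-Ou-Mandel: 50/50 beamsplitter; the permanent vanishes,
# so the probability of the photons exiting in different modes is 0.
s = 1 / math.sqrt(2)
print(abs(permanent([[s, s], [s, -s]])) ** 2)
```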

7
So, Can We Use Quantum Optics to Solve a #P-Complete Problem? That sounds way too good to be true… Explanation: If X is sub-unitary, then |Per(X)|² will usually be exponentially small. So to get a reasonable estimate of |Per(X)|² for a given X, we'd generally need to repeat the optical experiment exponentially many times.

8
Better idea: Given A ∈ C^(m×n) as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from: the one defined by Pr[S] = |Per(A_S)|². Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP. Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it's possible to estimate Per(X), with high probability over a Gaussian random matrix X. We conjecture that the latter estimation problem is already #P-complete. If it is, then a fast classical algorithm for approximate BosonSampling would already have the consequence that P^#P = BPP^NP. Upshot: Compared to (say) Shor's factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard.
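To make the sampling problem concrete, here is a minimal brute-force classical sampler for D_A, restricted for simplicity to collision-free outcomes (function names are ours; the exponential running time is exactly what the hardness theorem says is unavoidable for any classical approach):

```python
import itertools
import random

def permanent(A):
    """Permanent by direct expansion over all n! permutations (fine for tiny n)."""
    n = len(A)
    total = 0
    for p in itertools.permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += prod
    return total

def boson_sample(A, n, m):
    """Sample an output set S from D_A: enumerate all (m choose n)
    collision-free outcomes and pick S with probability proportional
    to |Per(A_S)|^2, where A_S is the n x n submatrix of A whose rows
    are indexed by S."""
    outcomes, weights = [], []
    for S in itertools.combinations(range(m), n):
        A_S = [[A[i][j] for j in range(n)] for i in S]
        outcomes.append(S)
        weights.append(abs(permanent(A_S)) ** 2)
    return random.choices(outcomes, weights=weights)[0]
```

With n photons this loops over (m choose n) outcomes and spends n!·n work on each, so it is hopeless beyond small n, which is the point.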

9
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, folklore: A QC built of noninteracting fermions can be efficiently simulated by a classical computer.
Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC.
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(A) for nonnegative A.
Gurvits 2002: O(n²/ε²) classical randomized algorithm to approximate an n-photon amplitude to ±ε additive error (also, to compute k-mode marginal distributions in n^O(k) time).
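Gurvits's additive-error algorithm can be sketched via the Glynn/Gurvits identity Per(A) = E_x[∏_j x_j · ∏_i (Ax)_i] over uniform x ∈ {-1, 1}^n; for sub-unitary A each sample is bounded, so O(1/ε²) samples give ±ε error (the function name is ours):

```python
import random

def gurvits_estimate(A, num_samples):
    """Monte-Carlo estimate of Per(A) using the Glynn/Gurvits identity:
    each sample draws x uniformly from {-1,1}^n and evaluates
    prod_j x_j * prod_i (Ax)_i, whose expectation is exactly Per(A).
    For sub-unitary A the samples lie in [-1, 1], so averaging
    O(1/eps^2) of them gives +-eps additive error."""
    n = len(A)
    total = 0
    for _ in range(num_samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        est = 1
        for j in range(n):
            est *= x[j]
        for i in range(n):
            est *= sum(A[i][j] * x[j] for j in range(n))
        total += est
    return total / num_samples
```

Each sample costs O(n²) arithmetic, matching the O(n²/ε²) total on the slide.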

10
BosonSampling Experiments (# of experiments > # of photons!): Last year, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents.

11
Goal (in our view): Scale to 10-30 photons. We don't want to scale much beyond that, both because (1) you probably can't without fault-tolerance, and (2) a classical computer probably couldn't even verify the results! Obvious Challenges for Scaling Up:
-Reliable single-photon sources (optical multiplexing?)
-Minimizing losses
-Getting high probability of n-photon coincidence

12
Scattershot BosonSampling: Exciting new idea, proposed by Steve Kolthammer, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs. The idea: Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance! The complexity analysis goes through essentially without change. Issues: Increases the depth of the optical network needed. Also, if some sources generate 2 photons, we need a new hardness assumption.
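The source-selection step is simple enough to sketch in a few lines (the 100 sources and 10% firing probability are just the slide's illustrative numbers, and the function name is ours):

```python
import random

def scattershot_inputs(num_sources=100, p_fire=0.1):
    """Each heralded source fires independently with probability p_fire.
    The set of sources that actually fired (which we know, thanks to
    heralding) defines the input modes of this run's BosonSampling
    instance; about num_sources * p_fire photons are expected per run."""
    return [i for i in range(num_sources) if random.random() < p_fire]
```

Because every run yields a different (random) instance, the hardness argument for Haar-random A carries over essentially unchanged.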

13
Recent Criticisms of Gogolin et al. (arXiv:1306.3995): Suppose you ignore which actual photodetectors light up, and count only the number of times each output configuration occurs. In that case, the BosonSampling distribution D_A is exponentially close to the uniform distribution U. Response: Why would you ignore which detectors light up?? The output of almost any algorithm is also gobbledygook if you ignore the order of the output bits…

14
Recent Criticisms of Gogolin et al. (arXiv:1306.3995), continued: OK, so maybe D_A isn't close to uniform. Still, the very same arguments we gave for why polynomial-time classical algorithms can't sample D_A suggest that they can't even distinguish D_A from U! Response: That's why we said to focus on 10-30 photons: a range where a classical computer can verify a BosonSampling device's output, but the BosonSampling device might be faster! (And 10-30 photons is probably the best you can do anyway, without quantum fault-tolerance.)

15
More Decisive Responses (A.-Arkhipov, arXiv:1309.7460). Theorem: Let A ∈ C^(m×n) be a Haar-random BosonSampling matrix, where m ≥ n^5.1/δ. Then with 1-O(δ) probability over A, the BosonSampling distribution D_A has Ω(1) variation distance from the uniform distribution U. [Figure: histograms of the (normalized) probabilities under D_A and under U.] This is necessary, though not sufficient, for approximately sampling D_A to be hard.

16
Theorem (A. 2013): Let A ∈ C^(m×n) be Haar-random, where m >> n. Then there is a classical polynomial-time algorithm C(A) that distinguishes D_A from U (with high probability over A and constant bias, and using only O(1) samples). Strategy: Let A_S be the n×n submatrix of A corresponding to output S. Let P be the product of squared 2-norms of A_S's rows. If P > E[P], then guess S was drawn from D_A; otherwise guess S was drawn from U. [Figure: distribution of P under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution.]
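The statistic P from the strategy above takes only linear time in the size of A_S to compute, which is why the distinguisher is classically efficient. A minimal sketch (the function name is ours):

```python
def row_norm_statistic(A_S):
    """P = product of the squared 2-norms of the rows of the n x n
    submatrix A_S. Under D_A, outputs S with larger row norms are
    favored (|Per(A_S)|^2 correlates with them), so P tends to exceed
    its mean; under U it does not. Comparing P to E[P] gives the
    constant-bias, O(1)-sample distinguisher of the theorem."""
    P = 1.0
    for row in A_S:
        P *= sum(abs(x) ** 2 for x in row)
    return P
```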

17
Using Quantum Optics to Prove that the Permanent is #P-Complete [A., Proc. Roy. Soc. 2011]: Valiant showed that the permanent is #P-complete, but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts:
(1) n-photon amplitudes correspond to n×n permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes

18
Open Problems
-Similar hardness results for other natural quantum systems (besides linear optics)? Bremner, Jozsa, Shepherd 2010: Another system for which exact classical simulation would collapse PH.
-Can the BosonSampling model solve classically-hard decision problems? With verifiable answers?
-Can one efficiently sample a distribution that can't be efficiently distinguished from BosonSampling?
-Prove that Gaussian permanent approximation is #P-hard (first step: understand the distribution of Gaussian permanents).
-Are we still sampling a hard distribution with unheralded photon losses, or Gaussian initial states?
