New Computational Insights from Quantum Optics Scott Aaronson Based on joint work with Alex Arkhipov
The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine. But building a QC able to factor n >> 143 is damn hard! Also, factoring is a special problem. Can't CS meet physics halfway on this one? I.e., show computational hardness in more easily-accessible quantum systems?
Our Starting Point: the amplitudes of n identical BOSONS are given by matrix permanents (#P-complete [Valiant]); the amplitudes of n identical FERMIONS are given by determinants (in P). All I can say is, the bosons got the harder job
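The permanent/determinant contrast above can be made concrete in code. A minimal pure-Python sketch (function names are mine): Ryser's formula computes the permanent in O(2^n · n²) time, and no polynomial-time algorithm is known, consistent with #P-completeness, while the determinant falls to O(n³) Gaussian elimination.

```python
from itertools import combinations

def permanent(A):
    """Ryser's formula: Per(A) in O(2^n * n^2) time -- exponential,
    as expected for a #P-complete quantity."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

def determinant(A):
    """Gaussian elimination with partial pivoting: Det(A) in O(n^3) time."""
    A = [row[:] for row in A]
    n, det = len(A), 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            A[i], A[pivot] = A[pivot], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return det

J = [[1.0] * 3 for _ in range(3)]   # all-ones 3x3 matrix
print(permanent(J))    # 3! = 6: every permutation contributes 1
print(determinant(J))  # 0: the matrix has rank 1
```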
Can We Use Bosons to Calculate the Permanent? So if n-boson amplitudes correspond to permanents… that sounds way too good to be true: it would let us solve NP-complete problems and more using QC! Explanation: Amplitudes aren't directly observable. To get a reasonable estimate of Per(A), you might need to repeat the experiment exponentially many times
Basic Result: Suppose there were a polynomial-time classical randomized algorithm that took as input a description of a noninteracting-boson experiment, and that output a sample from the correct final distribution over n-boson states. Then P^#P = BPP^NP and the polynomial hierarchy collapses. Motivation: Compared to (say) Shor's algorithm, we get stronger evidence that a weaker system can do interesting quantum computations
Crucial step we take: switching attention to sampling problems
[Diagram: complexity classes BPP, BQP, BPP^NP, PH, P^#P, with the problems FACTORING, PERMANENT, and 3SAT placed among them; alongside, the sampling classes SampP and SampBQP]
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, folklore: A QC built of noninteracting fermions can be efficiently simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(A) for nonnegative A
Gurvits 2005: O(n²/ε²) classical randomized algorithm to approximate Per(A) to additive error ±ε·||A||^n
A. 2011: Generalization of Gurvits's algorithm that gives a better approximation if A has repeated rows or columns
Bremner, Jozsa, Shepherd 2011: QC with commuting Hamiltonians can sample hard distributions
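Gurvits's additive-error algorithm rests on a simple unbiased estimator (the Glynn/Gurvits estimator): for x drawn uniformly from {−1,+1}^n, the quantity (∏_j x_j)·∏_i (Ax)_i has expectation exactly Per(A). A sketch with illustrative names of my own; for n = 3 the expectation can be verified exactly by enumerating all 2³ sign vectors.

```python
from itertools import product
import random

def gurvits_sample(A, x):
    """One sample of the Glynn/Gurvits estimator: its expectation
    over uniform x in {-1,+1}^n equals Per(A) exactly."""
    n = len(A)
    est = 1.0
    for j in range(n):
        est *= x[j]
    for i in range(n):
        est *= sum(A[i][j] * x[j] for j in range(n))
    return est

def estimate_permanent(A, samples, rng):
    """Average of i.i.d. estimates; the additive error shrinks like
    ||A||^n / sqrt(samples), in the spirit of Gurvits 2005."""
    n = len(A)
    return sum(gurvits_sample(A, [rng.choice((-1, 1)) for _ in range(n)])
               for _ in range(samples)) / samples

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 1.0]]
# Exact expectation: enumerate all 2^3 sign vectors.
exact = sum(gurvits_sample(A, x) for x in product((-1, 1), repeat=3)) / 8
est = estimate_permanent(A, 5000, random.Random(0))
print(exact)  # 4.0, which is exactly Per(A)
print(est)    # a Monte Carlo estimate near 4
```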
The Quantum Optics Model A rudimentary subset of quantum computing, involving only non-interacting bosons, and not based on qubits Classical counterpart: Galton's Board, on display at the Boston Museum of Science Using only pegs and non-interacting balls, you probably can't build a universal computer, but you can do some interesting computations, like generating the binomial distribution!
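The Galton-board computation can be sketched in a few lines: each ball falls past a column of pegs, bouncing left or right with probability 1/2 at each, and the landing slots follow the binomial distribution. A minimal simulation (names and parameters are illustrative choices of mine):

```python
import random
from collections import Counter

def galton_board(levels, balls, rng):
    """Drop `balls` balls past `levels` rows of pegs; at each peg a ball
    bounces right with probability 1/2. Returns landing-slot counts."""
    counts = Counter()
    for _ in range(balls):
        slot = sum(rng.random() < 0.5 for _ in range(levels))
        counts[slot] += 1
    return counts

rng = random.Random(0)
counts = galton_board(levels=10, balls=10000, rng=rng)
for slot in range(11):
    # Each count approximates 10000 * C(10, slot) / 2**10: the binomial distribution.
    print(slot, "#" * (counts[slot] // 50))
```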
The Quantum Version Let's replace the balls by identical single photons, and the pegs by beamsplitters Then we see strange things like the Hong-Ou-Mandel dip The two photons are now correlated, even though they never interacted! Explanation involves destructive interference of amplitudes: Final amplitude of non-collision is 1/√2 · 1/√2 − 1/√2 · 1/√2 = 0
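The Hong-Ou-Mandel cancellation is a two-line permanent computation: for a 50/50 beamsplitter matrix U, the amplitude that the two photons exit in different modes (the non-collision outcome) is the 2×2 permanent of U, and its two terms cancel exactly. A sketch:

```python
import math

# 50/50 beamsplitter acting on two modes.
s = 1 / math.sqrt(2)
U = [[s,  s],
     [s, -s]]

# Amplitude that photons entering modes 1 and 2 also exit in modes 1 and 2
# (one photon per mode) is the permanent of U:
#   u11*u22 + u12*u21 = -1/2 + 1/2
amp = U[0][0] * U[1][1] + U[0][1] * U[1][0]
print(amp)  # 0.0: destructive interference, the HOM dip
```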
Getting Formal The basis states have the form |S⟩ = |s1,…,sm⟩, where si is the number of photons in the ith mode. We'll never create or destroy photons, so s1+…+sm = n is constant. Initial state: |I⟩ = |1,…,1,0,…,0⟩. For us, m = poly(n)
You get to apply any m×m unitary matrix U. If n = 1 (i.e., there's only one photon, in a superposition over the m modes), U acts on that photon in the obvious way. In general, there are M = (m+n−1 choose n) ways to distribute n identical photons into m modes. U induces an M×M unitary φ(U) on the n-photon states as follows: ⟨S|φ(U)|T⟩ = Per(U_{S,T}) / √(s1!…sm! · t1!…tm!). Here U_{S,T} is an n×n submatrix of U (possibly with repeated rows and columns), obtained by taking si copies of the ith row of U and tj copies of the jth column, for all i,j
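The induced action φ(U) can be checked by brute force for a tiny case. With m = 2 modes and n = 2 photons there are M = 3 basis states, the amplitude ⟨S|φ(U)|T⟩ is Per(U_{S,T}) divided by √(∏ si! · ∏ tj!), and the output probabilities sum to 1, confirming that φ(U) is unitary. A sketch (helper names are mine; the permanent is computed naively over permutations):

```python
import math
from itertools import permutations

def permanent(M):
    """Naive O(n * n!) permanent, fine for tiny matrices."""
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def amplitude(U, S, T):
    """<S|phi(U)|T> = Per(U_{S,T}) / sqrt(prod s_i! * prod t_j!),
    where U_{S,T} takes s_i copies of row i and t_j copies of column j."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]
    cols = [j for j, t in enumerate(T) for _ in range(t)]
    sub = [[U[i][j] for j in cols] for i in rows]
    norm = math.sqrt(math.prod(math.factorial(s) for s in S) *
                     math.prod(math.factorial(t) for t in T))
    return permanent(sub) / norm

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
U = [[c, -s],
     [s,  c]]         # a 2x2 beamsplitter unitary

S = (1, 1)            # one photon in each of the two modes
outputs = [(2, 0), (1, 1), (0, 2)]
probs = {T: abs(amplitude(U, S, T)) ** 2 for T in outputs}
print(probs)
print(sum(probs.values()))  # ~1.0: phi(U) preserves probability
```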
OK, so why is it hard to sample the distribution over photon numbers classically? Given any matrix A ∈ C^{n×n}, we can construct an m×m unitary U (where m = 2n) containing A/||A|| as its top-left n×n submatrix. Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply U, and measure. Then the probability of observing |I⟩ again is p := |Per(A)|² / ||A||^{2n}
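The probability in question, p = |Per(A)|² / ||A||^{2n} with ||A|| the largest singular value, can be computed directly for a small example. A sketch for a matrix where everything checks by hand: the all-ones 2×2 matrix has Per = 2 and ||A|| = 2, so p = 4/16 = 0.25. The power-iteration routine for ||A|| is my own illustrative choice (real matrices only, for brevity).

```python
import math
from itertools import permutations

def permanent(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def spectral_norm(A, iters=200):
    """Largest singular value of a real matrix via power iteration on A^T A."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # A v
        v = [sum(A[i][j] * w[i] for i in range(n)) for j in range(n)]  # A^T (A v)
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return math.sqrt(sum(x * x for x in w))

A = [[1.0, 1.0],
     [1.0, 1.0]]
n = len(A)
p = abs(permanent(A)) ** 2 / spectral_norm(A) ** (2 * n)
print(p)  # ~0.25: |Per(A)|^2 / ||A||^(2n) = 4 / 16
```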
Claim 1: p is #P-complete to estimate (up to a constant factor). Idea: Valiant proved that the PERMANENT is #P-complete. Can use a classical reduction to go from a multiplicative approximation of |Per(A)|² to Per(A) itself. Claim 2: Suppose we had a fast classical algorithm for boson sampling. Then we could estimate p in BPP^NP. Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate Pr_r[M(r) outputs |I⟩]. Conclusion: Suppose we had a fast classical algorithm for boson sampling. Then P^#P = BPP^NP.
The previous result hinged on the difficulty of estimating a single, exponentially-small probability p, but what about noise and error? The Elephant in the Room The right question: can a classical computer efficiently sample a distribution with 1/poly(n) variation distance from the boson distribution? Our Main Result: Suppose it can. Then there's a BPP^NP algorithm to estimate |Per(A)|², with high probability over a Gaussian matrix A
Our Main Conjecture: Estimating |Per(A)|², for most Gaussian matrices A, is a #P-hard problem. We can prove it if you replace "estimating" by "calculating", or "most" by "all". If the conjecture holds, then even a noisy n-photon experiment could falsify the Extended Church-Turing Thesis, assuming P^#P ≠ BPP^NP! Most of our paper is devoted to giving evidence for this conjecture
First step: understand the distribution of |Per(A)|² for random A. Conjecture: There exist constants C, D and β > 0 such that for all n and ε > 0, Pr over Gaussian A of [ |Per(A)| < ε·√(n!) ] < C·n^D·ε^β. Empirically true! Also, we can prove it with the determinant in place of the permanent
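The determinant analogue can be probed empirically in a few lines: draw A with i.i.d. complex Gaussian entries (real and imaginary parts of variance 1/2 each, a normalization choice of mine so that E|Det(A)|² = n!), and measure how often |Det(A)|²/n! falls below a small threshold. A sketch; all names are illustrative.

```python
import math, random

def det(A):
    """Determinant of a complex matrix via Gaussian elimination, O(n^3)."""
    A = [row[:] for row in A]
    n, d = len(A), complex(1)
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[piv][i]) < 1e-12:
            return complex(0)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

def gaussian_matrix(n, rng):
    s = math.sqrt(0.5)  # so each entry has E|a_ij|^2 = 1
    return [[complex(rng.gauss(0, s), rng.gauss(0, s)) for _ in range(n)]
            for _ in range(n)]

rng = random.Random(1)
n, trials, eps = 4, 2000, 0.01
small = sum(abs(det(gaussian_matrix(n, rng))) ** 2 / math.factorial(n) < eps
            for _ in range(trials))
# Anti-concentration: only a small fraction of Gaussian matrices have
# |Det(A)|^2 far below its mean of n!.
print(small / trials)
```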
The Equivalence of Sampling and Searching [A., CSR 2011] [A.-Arkhipov] gave a sampling problem solvable using quantum optics that seems hard classically, but does that imply anything about more traditional problems? Recently, I found a way to convert any sampling problem into a search problem of equivalent difficulty Basic Idea: Given a distribution D, the search problem is to find a string x in the support of D with large Kolmogorov complexity
Using Quantum Optics to Prove that the Permanent is #P-Hard [A., Proc. Roy. Soc. 2011] Valiant famously showed that the permanent is #P-hard, but his proof required strange, custom-made gadgets We gave a new, more transparent proof by combining three facts: (1) n-photon amplitudes correspond to n×n permanents (2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001] (3) Quantum computations can encode #P-hard quantities in their amplitudes
Experimental Issues Basically, we're asking for the n-photon generalization of the Hong-Ou-Mandel dip, with n as big as possible Our results suggest that such a generalized HOM dip would be evidence against the Extended Church-Turing Thesis Physicists: Sounds hard! But not as hard as full QC Remark: If n is too large, a classical computer couldn't even verify the answers! Not a problem yet, though… Several experimental groups (Bristol, U. of Queensland) are currently working to do our experiment with 3 photons. Largest challenge they face: reliable single-photon sources
Summary Thinking about quantum optics led to: - A new experimental quantum computing proposal - New evidence that QCs are hard to simulate classically - A new classical randomized algorithm for estimating permanents - A new proof of Valiants result that the permanent is #P-hard - (Indirectly) A new connection between sampling and searching
Open Problems Find more ways for quantum complexity theory to meet the experimentalists halfway Bremner, Jozsa, Shepherd 2011: QC with commuting Hamiltonians can sample hard distributions Can our model solve classically-intractable decision problems? Prove our main conjecture: that Per(A) is #P-hard to approximate for a matrix A of i.i.d. Gaussians ($1,000)! Fault-tolerance in the quantum optics model?