Shengyu Zhang The Chinese University of Hong Kong.


1 Shengyu Zhang The Chinese University of Hong Kong

2 Quantum Computing / Communication Complexity
Question: What is the largest gap between classical and quantum communication complexities?
Quantum computing: algorithms, information theory, cryptography, games, …
Communication complexity: circuit lower bounds, streaming algorithms, VLSI, data structures, …

3 Communication complexity [Yao79]
Two parties, Alice and Bob, jointly compute a function f(x,y), with x known only to Alice and y known only to Bob.
Communication complexity: how many bits need to be exchanged?
[Figure: Alice holds x, Bob holds y; they exchange messages to compute f(x,y).]
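As a toy illustration of the model (my own sketch, not part of the talk): a protocol is an exchange of bits, and the trivial protocol in which Alice sends her whole input already shows that n bits always suffice for Bob to learn f(x,y), i.e. D(f) ≤ n when only Bob must output the answer.

```python
# Toy model of the trivial protocol (illustrative sketch, not from the talk):
# Alice sends all of x; Bob, who knows y, evaluates f himself.
# This shows D(f) <= n for every f when only Bob must learn the answer.

def trivial_protocol(f, x, y):
    """x, y: tuples of bits. Returns (Bob's output, number of bits sent)."""
    message = x                      # Alice transmits all n bits of x
    return f(message, y), len(message)

# Example: Equality on n = 4 bits.
EQ = lambda x, y: int(x == y)
val, cost = trivial_protocol(EQ, (1, 0, 1, 1), (1, 0, 1, 1))
```

The interesting question, of course, is when one can do much better than this trivial upper bound.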

4 Various protocols
Deterministic: D(f).
Randomized: R(f).
– A bounded error probability is allowed.
– Private or public coins? They differ by at most O(log n).
Quantum: Q(f).
– A bounded error probability is allowed.
– Assumption: no shared entanglement. (Does it help? Open.)
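To make the randomized model concrete, here is a hedged sketch (my own illustration, not from the slides) of the classic public-coin protocol for Equality: using a shared random string r ∈ {0,1}ⁿ, Alice sends the single bit ⟨x,r⟩ mod 2 and Bob accepts iff it matches ⟨y,r⟩ mod 2. When x ≠ y, exactly half of all coins r expose the difference, so k repetitions drive the error below 2⁻ᵏ using only k bits.

```python
# Public-coin randomized protocol for Equality (illustrative sketch).
# Alice sends <x,r> mod 2 for shared random r; if x != y, exactly half of
# all coins r make the parities differ, so each round errs with prob 1/2.
from itertools import product

def parity(u, r):
    return sum(a * b for a, b in zip(u, r)) % 2

def detect_fraction(x, y, n):
    """Fraction of shared coins r whose parity bit exposes x != y."""
    coins = list(product([0, 1], repeat=n))
    caught = sum(parity(x, r) != parity(y, r) for r in coins)
    return caught / len(coins)
```

Enumerating all coins (rather than sampling) verifies the exact 1/2 detection probability; k independent rounds then give error 2⁻ᵏ.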

5 Communication complexity: one-way model
One-way: Alice sends a single message to Bob. — D¹(f), R¹(f), Q¹(f).
[Figure: Alice holds x and sends one message to Bob, who holds y and outputs f(x,y).]

6 About the one-way model
Power:
– Efficient protocols for specific functions such as Equality and Hamming Distance, and in general all symmetric XOR functions — sometimes as efficient as the best two-way protocol.
Applications:
– Lower bounds for the space complexity of streaming algorithms.
Lower bounds? Can be quite hard to prove, especially in the quantum case.
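The streaming application can be seen as a generic reduction (my own illustrative code; the toy algorithm here is made up): any streaming algorithm using s bits of state yields a one-way protocol with an s-bit message — Alice runs the algorithm on her half of the stream and ships its state to Bob, who finishes. One-way lower bounds (e.g. for Index) therefore translate into streaming space lower bounds.

```python
# Reduction sketch: streaming algorithm with s-bit state => one-way protocol
# with an s-bit message. The toy streaming algorithm below just remembers the
# set of items seen, which is exactly why its state (the message) is large --
# consistent with the Omega(n) one-way lower bound for Index.

def run_stream(state, items):
    for it in items:
        state = state | {it}          # toy algorithm: remember every item
    return state

def one_way_via_streaming(x_items, query):
    state = run_stream(set(), x_items)   # Alice processes her half of the stream
    message = state                      # Alice sends the state to Bob
    return int(query in message)         # Bob finishes: Index(x, i)

# Encode x as the set of positions where x_j = 1.
x = [0, 5, 7]
```

Since the protocol computes Index(x,i), any such streaming algorithm must carry Ω(n) bits of state.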

7 Question
Question: What is the largest gap between classical and quantum communication complexities?
Partial functions, relations: exponential gaps are known.
Total functions, two-way:
– Largest known gap: Q(Disj) = Θ(√n) vs. R(Disj) = Θ(n).
– Best known bound: R(f) = exp(Q(f)).
– Conjecture: R(f) = poly(Q(f)).
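As background on how such two-way lower bounds are certified (a standard fact, not a step in the talk): D(f) ≥ log₂ rank(M_f), where M_f is the communication matrix. A quick numeric check for Equality and Disjointness on a few bits:

```python
# Log-rank lower bound D(f) >= log2(rank(M_f)) -- standard fact, checked
# numerically on small instances (illustrative, not from the talk).
import numpy as np

n = 3
N = 2 ** n

# Equality: M_EQ is the identity matrix, so rank 2^n and D(EQ) >= n.
M_eq = np.eye(N)

# Disjointness: M[x][y] = 1 iff bit-strings x and y share no 1-position.
# This matrix is the n-fold tensor power of [[1,1],[1,0]], hence full rank.
M_disj = np.array([[int(x & y == 0) for y in range(N)] for x in range(N)])

rank_eq = int(np.linalg.matrix_rank(M_eq))
rank_disj = int(np.linalg.matrix_rank(M_disj))
```

Both matrices have full rank 2ⁿ, giving the (weak, but simple) bound D ≥ n; the Θ(n) bound for R(Disj) needs much heavier machinery.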

8 Question
Question: What is the largest gap between classical and quantum communication complexities?
Partial functions, relations: exponential gaps are known.
Total functions, one-way:
– Largest known gap: R¹(EQ) = 2·Q¹(EQ).
– Best known bound: R¹(f) = exp(Q¹(f)).
– Conjecture: R¹(f) = poly(Q¹(f)), or even R¹(f) = O(Q¹(f)).

9 Approaches
Approach 1: directly simulate a quantum protocol by a classical one.
– [Aaronson] R¹(f) = O(m·Q¹(f)).
Approach 2: find a lower bound L(f) with L(f) ≤ Q¹(f) ≤ R¹(f) ≤ poly(L(f)).
– [Nayak'99; Jain, Z.'09] R¹(f) = O(I_μ·VC(f)), where I_μ is the mutual information of any hard distribution μ.
Note: for approach 2 to possibly succeed, the quantum lower bound L(f) has to be polynomially tight for Q¹(f).

10 Main result
Three lower bound techniques are known for Q¹(f):
– Nayak'99: partition tree.
– Aaronson'05: trace distance.
– The two-way complexity Q(f).
[Thm] All of these lower bounds can be arbitrarily weak.
In fact, random functions have Q(f) = Ω(n), but the first two lower bounds only give O(1).

11 Next
A closer look at the partition tree bound.
Then compare Q with the partition tree (PT) and trace distance (TD) bounds.

12 Nayak's information-theoretic argument
[Nayak'99] Q¹(Index) = Ω(n).
– ρ_x contains Ω(1) information about x₁, since i may be 1.
– Regardless of x₁, ρ_x contains Ω(1) information about x₂.
– And so on.
[Figure: Alice holds x ∈ {0,1}ⁿ and sends the state ρ_x; Bob holds i ∈ [n] and outputs Index(x,i) = x_i.]

13 Nayak's information-theoretic argument
ρ = ∑_x p_x·ρ_x.
S(ρ) = S(½ρ₀ + ½ρ₁)                  // ρ_b = 2·∑_{x: x₁=b} p_x·ρ_x
     ≥ I(X₁;M₁) + ½S(ρ₀) + ½S(ρ₁)    // Holevo bound; M₁ is Bob's conclusion about X₁
     ≥ 1 − H(ε) + ½S(ρ₀) + ½S(ρ₁)    // Fano's inequality
     ≥ … ≥ n(1 − H(ε)).
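The chain above unrolls by induction; reconstructed in LaTeX (assuming, as on the slide, the uniform distribution on x, so that conditioned states at depth k carry uniform weights):

```latex
% One step of Nayak's argument:
S(\rho) \;\ge\; I(X_1;M_1) + \tfrac12 S(\rho_0) + \tfrac12 S(\rho_1)
        \;\ge\; 1 - H(\varepsilon) + \tfrac12 S(\rho_0) + \tfrac12 S(\rho_1).
% Applying the same bound to S(\rho_0), S(\rho_1) with respect to X_2, etc.:
S(\rho) \;\ge\; k\bigl(1 - H(\varepsilon)\bigr)
        + \frac{1}{2^k}\sum_{b \in \{0,1\}^k} S(\rho_b),
% so taking k = n (and S(\rho_b) \ge 0):
S(\rho) \;\ge\; n\bigl(1 - H(\varepsilon)\bigr).
```

Since a message of q qubits has entropy at most q, this gives Q¹(Index) ≥ n(1 − H(ε)) = Ω(n).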

14 Partition tree
ρ = ∑_x p_x·ρ_x, ρ_b = 2·∑_{x: x₁=b} p_x·ρ_x, ρ_{b₁b₂} = 4·∑_{x: x₁=b₁, x₂=b₂} p_x·ρ_x, …
[Figure: the binary tree of conditioned states, ρ → (ρ₀, ρ₁) → (ρ₀₀, ρ₀₁, ρ₁₀, ρ₁₁) → … down to the leaves 000, …, 111.]

15 Partition tree
ρ = ∑_x p_x·ρ_x. In general:
– a distribution p on {0,1}ⁿ;
– a partition tree for {0,1}ⁿ;
– a gain of H(δ) − H(ε) at each vertex v, where v is partitioned by (δ, 1−δ).
[Figure: a partition tree over {0,1}³.]

16 Issue
[Fano's inequality] I(X;Y) ≥ H(X) − H(ε), where X, Y are over {0,1} and ε = Pr[X ≠ Y].
So the gain at v is H(δ) − H(ε) — what if H(δ) < H(ε)?
Idea 1: use success amplification to decrease ε to ε*.
Idea 2: give up those vertices v where H(X_v) is small.
Bound: max_{T,p,ε*} log(1/ε*)·∑_v p(v)·[H(X_v) − H(ε*)]⁺.
Question: how to compute this?
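As a sanity check on Fano's inequality in the binary case (my own numeric illustration, not from the talk): for X uniform and Y obtained by flipping X with probability ε, one gets I(X;Y) = 1 − H(ε) exactly, so Fano is tight in this setting.

```python
# Numeric check that Fano's bound I(X;Y) >= H(X) - H(eps) is tight for a
# binary symmetric channel with X uniform (illustrative, not from the talk).
from math import log2

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_info_bsc(eps):
    """I(X;Y) for X uniform and Y = X flipped with probability eps."""
    pj = {(x, y): 0.5 * (eps if x != y else 1 - eps)
          for x in (0, 1) for y in (0, 1)}
    px = {0: 0.5, 1: 0.5}
    py = {y: sum(pj[(x, y)] for x in (0, 1)) for y in (0, 1)}
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in pj.items() if p > 0)
```

Here H(X) = 1 and Pr[X ≠ Y] = ε, so Fano reads I(X;Y) ≥ 1 − H(ε), with equality.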

17 Picture clear
max_{T,p,ε*} log(1/ε*)·∑_v p(v)·[H(X_v) − H(ε*)]⁺ — very complicated.
Compare to Index, where the tree is the complete binary tree and each H(δ_v) = 1 (i.e. δ_v = 1/2).
[Thm] The maximization is achieved by a complete binary tree with δ_v = 1/2 everywhere.

18 Two interesting comparisons
Comparison to decision trees:
– Decision tree complexity: make the longest path short.
– Here: make the shortest path long.
Comparison to the VC-dimension lower bound:
[Thm] The value is exactly the extensive equivalence query complexity.
– A measure from learning theory.
– This strengthens the VC-dimension lower bound by Nayak.

19 Trace distance bound
[Aaronson'05] Let:
– μ be a distribution on 1-inputs;
– D₁: (x, y) ← μ;
– D₂: y ← μ, then x₁, x₂ ← μ_y.
Then Q¹(f) = Ω(log(1/‖D₂ − D₁²‖₁)).

20 Separation
[Thm] Take a random graph G(N,p) with ω(log⁴N/N) ≤ p ≤ 1 − Ω(1). Its adjacency matrix, viewed as a bivariate function f, satisfies Q(f) = Ω(log(pN)) with probability 1 − o(1).
Proof idea: Q(f) ≥ Q*(f) = Ω(log(1/disc(f))), and disc(f) is governed by σ₂(D^{−1/2}AD^{−1/2}), which is O(1/√(pN)) for a random graph.
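The spectral claim can be checked numerically (my own sketch; the test threshold 5/√(pN) is an assumed, deliberately generous constant): for G(N,p) with N = 400 and p = 0.2, the normalized adjacency matrix D^{−1/2}AD^{−1/2} has top singular value exactly 1, while its second singular value is of order 1/√(pN).

```python
# Numerically check sigma_2(D^{-1/2} A D^{-1/2}) = O(1/sqrt(pN)) for G(N, p).
# Illustrative sketch; the constant 5 in the check below is an assumption.
import numpy as np

rng = np.random.default_rng(0)
N, p = 400, 0.2

upper = rng.random((N, N)) < p
A = np.triu(upper, 1).astype(float)
A = A + A.T                               # symmetric adjacency, no self-loops

deg = A.sum(axis=1)                       # degrees concentrate around pN = 80
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
M = D_inv_sqrt @ A @ D_inv_sqrt           # normalized adjacency matrix

s = np.linalg.svd(M, compute_uv=False)    # singular values, descending
sigma1, sigma2 = s[0], s[1]
```

The top singular value is 1 (eigenvector D^{1/2}·1 of the random-walk matrix), and σ₂ comes out near 2/√(pN), comfortably inside the assumed bound.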

21 
[Thm] For p = N^{−Ω(1)}, PT(f) = O(1) w.h.p.
– By our characterization, it suffices to consider the complete binary tree.
– For p = N^{−Ω(1)}, each level of the tree shrinks the number of 1's by a factor of p: pN → p²N → p³N → … → 0 in only O(1) steps.
[Thm] For p = o(N^{−6/7}), TD(f) = O(1) w.h.p.
– Quite technical; omitted here.
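The "only O(1) steps" claim can be seen with exact arithmetic (my own toy check, not from the talk): with p = N^{−c}, repeatedly multiplying by p drops the count pN → p²N → … below 1 after roughly 1/c steps, independently of N.

```python
# Count how many multiplications by p = N^{-c} take N below 1: about 1/c
# steps, independent of N -- the "only O(1) steps" claim (illustrative).
from fractions import Fraction

def shrink_steps(N, p):
    """Exact count of tree levels until the number of 1's drops below 1."""
    m, steps = Fraction(N), 0
    while m >= 1:
        m *= p                 # each tree level shrinks the #1's by factor p
        steps += 1
    return steps

# p = N^{-1/2} in both cases (c = 1/2), so both take the same number of steps.
s1 = shrink_steps(10**6, Fraction(1, 10**3))
s2 = shrink_steps(10**10, Fraction(1, 10**5))
```

Both runs take exactly 3 levels even though N grows by four orders of magnitude, matching the constant-depth behavior used in the proof.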

22 Putting it together
[Thm] For a random graph G(N,p) with ω(log⁴N/N) ≤ p ≤ 1 − Ω(1), the adjacency matrix, viewed as a bivariate function f, satisfies Q(f) = Ω(log(pN)) w.p. 1 − o(1).
[Thm] For p = N^{−Ω(1)}, PT(f) = O(1) w.h.p.
[Thm] For p = o(N^{−6/7}), TD(f) = O(1) w.h.p.
Taking p between ω(log⁴N/N) and o(N^{−6/7}) gives the separation.

23 Discussions
Negative results on the tightness of known quantum lower bound methods — this calls for new methods.
Can one somehow combine the advantages of these methods?
– We hope the paper sheds some light on this by identifying their weaknesses.

