1 Hartmut Klauck Centre for Quantum Technologies Nanyang Technological University Singapore

2 Two players, Alice and Bob, want to cooperatively compute a function f(x,y). Alice knows x, Bob knows y. How much communication is needed?

3 Equality: EQ(x,y)=1 iff x=y. Disjointness: DISJ(x,y)=1 iff x and y are disjoint sets. Inner Product mod 2: IP(x,y)=1 iff Σ_i x_i ∧ y_i is odd. Representation as matrices: M(x,y) contains the entries f(x,y).
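The three example functions can be turned into explicit communication matrices for small n. A minimal sketch (the encoding of sets as bitmasks and the helper names are my own, not from the slides):

```python
import numpy as np

def comm_matrix(f, n):
    """Build the 2^n x 2^n communication matrix M[x, y] = f(x, y),
    indexing inputs by their integer (bitmask) encodings."""
    size = 2 ** n
    return np.array([[f(x, y, n) for y in range(size)] for x in range(size)])

# Inputs are n-bit integers; bit i of x is x_i, so "sets" are bitmasks.
def EQ(x, y, n):
    return int(x == y)

def DISJ(x, y, n):
    return int(x & y == 0)            # disjoint as sets of bit positions

def IP(x, y, n):
    return bin(x & y).count("1") % 2  # parity of sum_i x_i * y_i

n = 3
M_eq, M_disj, M_ip = (comm_matrix(f, n) for f in (EQ, DISJ, IP))
print(M_eq.shape)  # (8, 8)
```

These small matrices are reused in the later examples to check rank and rectangle claims by brute force.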

4 The sets of inputs that share the same message sequence form combinatorial rectangles in the communication matrix. Proof sketch: Alice's first message depends only on her input and partitions the rows; then Bob's message partitions the columns, and so on. The messages partition the communication matrix into combinatorial rectangles.

5 For any integer matrix M, the partition number part(M) is the minimum number of monochromatic rectangles needed to partition M. Clearly D(M) ≥ log part(M). The covering number is the minimum number of monochromatic rectangles needed to cover the entries of M. This corresponds to nondeterministic protocols.

6 D(EQ)=n+1. Consider the inputs (x,x). No two inputs (x,x) and (y,y) can lie in the same combinatorial rectangle: otherwise (x,y) would also lie in that rectangle! Hence we need at least 2^n 1-rectangles to cover the 1-inputs of EQ. On the other hand, 2n rectangles are enough to cover the 0-inputs of EQ.
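The fooling-set argument on this slide can be checked mechanically: two diagonal 1-inputs share a rectangle only if both off-diagonal corners are also 1. A small sketch for n = 3:

```python
import numpy as np

n = 3
size = 2 ** n
M = np.eye(size, dtype=int)   # the communication matrix of EQ

# If (x,x) and (y,y) with x != y sat in one 1-monochromatic rectangle
# A x B, then x,y are in A and in B, so (x,y) would also lie in it.
# A violation would need M[x,y] = M[y,x] = 1; for EQ there is none.
violations = [(x, y) for x in range(size) for y in range(size)
              if x != y and M[x, y] == 1 and M[y, x] == 1]
assert not violations
# Hence each diagonal 1-input needs its own rectangle: at least 2^n of them.
print(f"need at least {size} 1-rectangles")
```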

7 It is known that log part(f) cannot be too far from D(f) for Boolean f. But part(f) is not always easy to determine. We will consider different relaxations of part(f) that are easier to calculate.

8 We can relax the partition requirement for Boolean functions. If M can be partitioned into k 1-rectangles, then M can be written as the sum of k rank-1 matrices, i.e., rank(M) ≤ k. Examples: rank(EQ)=2^n, rank(DISJ)=2^n, rank(IP)=2^n−1.
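The stated ranks can be checked numerically for n = 3 (a sketch; note that the disjointness matrix is a tensor power of the invertible 2×2 matrix [[1,1],[1,0]], so it has full rank 2^n, while the IP matrix has an all-zero row for x = ∅ and rank 2^n − 1):

```python
import numpy as np

n = 3
size = 2 ** n
xs = range(size)

M_eq   = np.eye(size)
M_disj = np.array([[float(x & y == 0) for y in xs] for x in xs])
M_ip   = np.array([[float(bin(x & y).count("1") % 2) for y in xs] for x in xs])

ranks = {name: int(np.linalg.matrix_rank(m))
         for name, m in (("EQ", M_eq), ("DISJ", M_disj), ("IP", M_ip))}
print(ranks)  # {'EQ': 8, 'DISJ': 8, 'IP': 7}
```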

9 Let M be a Boolean matrix. We know that D(M) ≥ log rank(M). Conjecture: D(M) ≤ poly(log rank(M)). The best known upper bound is rank(M). There is a polynomial gap: D(f)=Ω(n) while log rank(f)=n^0.61. Conjectured: a quadratic gap.

10 Observation: 1-rectangles are rank-1 matrices with only nonnegative entries. prank(M), the nonnegative rank, is the minimum k such that M can be written as the sum of k rank-1 matrices with nonnegative entries. Clearly D(M) ≥ log prank(M) for all Boolean M. Now we get a bound that is polynomially tight: D(M) ≤ O(log prank(M) · log prank(J−M)) for all Boolean M, where J is the all-ones matrix.
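The observation is easy to see in code: a partition of the 1-entries into monochromatic rectangles is exactly a nonnegative rank-1 decomposition. A tiny sketch using the diagonal partition of EQ for n = 2:

```python
import numpy as np

size = 4                                  # EQ on n = 2 bits
M = np.eye(size)

# Partition of the 1-entries of EQ: the singletons {x} x {x}.
# Each is a nonnegative rank-1 matrix e_x e_x^T, and they sum to M,
# witnessing prank(EQ) <= 2^n.
e = np.eye(size)
terms = [np.outer(e[x], e[x]) for x in range(size)]
assert np.array_equal(sum(terms), M)
print(len(terms))  # 4
```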

11 log prank(M) ≤ poly(log rank(M)) for all Boolean M. Every Boolean m×n matrix M of rank r has a monochromatic submatrix of size mn/2^polylog(r). Every Boolean m×n matrix M of rank r has a submatrix of size mn/2^polylog(r) that has rank < 0.99r. The rank conjecture has also been related to some open problems in additive combinatorics.

12 Computing prank is NP-hard. It always gives polynomially tight bounds, but these are hard to show.

13 In general it is hard to estimate the partition number (the number of rectangles in a partition of the communication matrix). Idea: write it as an integer program, relax it into a linear program, and estimate via the dual. The dual is a maximization problem!

14 Consider the set R of all 1-monochromatic rectangles in M. Every R ∈ R gets a weight w_R ∈ {0,1}. Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1 (implicitly, for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R = 0). The optimum is the partition number.

15 R: the set of all 1-monochromatic rectangles in M. Every R ∈ R gets a nonnegative real weight w_R. Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1 (implicitly, for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R = 0). The optimum of this LP lower-bounds the partition number.
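An LP of this shape can be solved directly for tiny functions. The sketch below (the covering variant with ≥ 1 constraints, solved with scipy; all helper names are my own) computes the bound for EQ on 2 bits, where the only 1-monochromatic rectangles are the diagonal singletons, so the optimum is 2^n = 4:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

n = 2
size = 2 ** n
M = np.eye(size, dtype=int)                       # EQ on 2 bits

# Enumerate all 1-monochromatic rectangles A x B (nonempty A, B).
subsets = list(itertools.chain.from_iterable(
    itertools.combinations(range(size), k) for k in range(1, size + 1)))
rects = [(set(A), set(B)) for A in subsets for B in subsets
         if all(M[x, y] == 1 for x in A for y in B)]

ones = [(x, y) for x in range(size) for y in range(size) if M[x, y] == 1]
# Covering LP: min sum_R w_R  s.t.  sum_{R contains (x,y)} w_R >= 1, w_R >= 0.
A_ub = np.array([[-(x in A and y in B) for (A, B) in rects]
                 for (x, y) in ones], dtype=float)
res = linprog(c=np.ones(len(rects)), A_ub=A_ub, b_ub=-np.ones(len(ones)),
              bounds=(0, None))
print(res.fun)  # 4.0
```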

16 A variant of the LP lower-bounds the (one-sided) nondeterministic communication complexity: Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R ≥ 1. Denote the optimal value B(M). Use the maximum of B(M) and B(J−M) to show lower bounds on deterministic communication: D(M) ≤ O(log B(M) · log B(J−M)) + log²n. Bounds on D(M) obtained this way are never more than quadratically smaller than D(M).

17 In the dual there is one real variable for every input x,y. Maximize Σ_{x,y} φ_{x,y} such that for all 1-chromatic R: Σ_{(x,y)∈R} φ_{x,y} ≤ 1. In other words, put weights on the inputs so as to “balance” the weights on each 1-chromatic rectangle. For the dual of the nondeterministic LP bound the variables must take nonnegative values.

18 Maximize Σ_{x,y} φ_{x,y} such that for all 1-chromatic R: Σ_{(x,y)∈R} φ_{x,y} ≤ 1. Suppose that all φ_{x,y} ≥ 0. After rescaling, the φ_{x,y} can be regarded as a probability distribution. The scaling factor is the maximum measure of a rectangle that is 1-monochromatic under the distribution.

19 Equality: weights φ_{x,x} = 1. The only 1-monochromatic rectangles contain exactly one input (x,x). Thus B(EQ) ≥ 2^n. Inner Product modulo 2: consider f(x,y)=1−IP(x,y). 1-chromatic rectangles A×B satisfy A ⊥ B. Hence dim(A)+dim(B) ≤ n ⇒ |A|·|B| ≤ 2^n. All 1-inputs get weight 1/2^n. There are ≥ 2^{2n−1} 1-inputs. Hence B(f) ≥ 2^n/2.
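The key step for inner product, |A|·|B| ≤ 2^n for every 1-chromatic rectangle of 1−IP, can be brute-forced for small n. A sketch for n = 3 (for each A it suffices to take the maximal compatible B, namely all y orthogonal to every x ∈ A):

```python
import itertools

n = 3
size = 2 ** n

def ip(x, y):
    return bin(x & y).count("1") % 2    # inner product mod 2 of bitmasks

max_area = 0
for k in range(1, size + 1):
    for A in itertools.combinations(range(size), k):
        # Largest B making A x B 1-chromatic for f = 1 - IP:
        # all y with ip(x, y) == 0 for every x in A.
        B = [y for y in range(size) if all(ip(x, y) == 0 for x in A)]
        max_area = max(max_area, len(A) * len(B))

print(max_area)  # 8, i.e. |A| * |B| <= 2^n
```

Equality holds when A is a GF(2)-subspace and B its orthogonal complement, matching the dimension argument on the slide.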

20 Disjointness: 1-inputs satisfy Σ_i x_i ∧ y_i = 0. There are 3^n 1-inputs. 1-chromatic rectangles A×B have A ⊥ B. Hence still |A|·|B| ≤ 2^n. Give weights 1/2^n. We get the bound B(DISJ) ≥ 3^n/2^n.

21 Recall the primal program: Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1. We don’t allow x,y to be “covered too much”. In the dual this means we can use negative weights φ_{x,y}, which makes it easier to satisfy the constraints. We have not used that in our examples.

22 Promise-Nondisjointness: f(x,y)=0 if Σ_i x_i ∧ y_i = 0; f(x,y)=1 if Σ_i x_i ∧ y_i = 1; otherwise f is undefined. The LP for f with the constraint Σ_{R: (x,y)∈R} w_R ≥ 1 has optimum ≤ n. Use the following LP instead: Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R = 1; for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R = 0; for all other x,y: Σ_{R: (x,y)∈R} w_R ≤ 1.

23 We have to exhibit a solution to the dual. We put positive weights on inputs with intersection size 1, negative weights on inputs with intersection size 2, and weight 0 elsewhere.

24 The following fact is useful [Razborov 92]: Let μ_k denote the uniform distribution on pairs x,y with |x ∩ y| = k and |x| = |y| = n/4. Then for all large enough rectangles R = A×B: μ_1(R) ≥ (1−ε)·μ_0(R). This means all large rectangles are corrupted. Similarly μ_2(R) ≥ (1−ε)/2 · μ_1(R) for all 1-chromatic R. Hence putting (1−ε) times the weight (negatively) on inputs with intersection size 2 as on inputs with intersection size 1 is enough to satisfy the constraints for all large 1-chromatic rectangles. The constraints also hold for small rectangles if the weights are not too large. The total weight is 2^Ω(n).

25 This readily generalizes to bounded-error protocols, and can be generalized to deal with quantum protocols. The proof consists of exhibiting a dual solution. For randomized protocols: use the constraints ≥ 1−ε and ≤ 1 in the primal program.

26 Primal: Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R ≥ 1−ε; for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R ≤ ε. This is the rectangle/corruption bound.

27 Razborov showed that there is a distribution on inputs such that for all large R the fraction of 0-inputs is at least ε times the fraction of 1-inputs. This corresponds to a solution of the dual.

28 Previously we bounded the size of monochromatic rectangles. In the bounded-error scenario we want to bound the size of almost monochromatic rectangles. Often this is much harder! What about the bias? It is often easier to bound, but usually not good enough.

29 Fix a distribution μ on the inputs. The discrepancy of a rectangle R is then |μ(R ∩ f⁻¹(1)) − μ(R ∩ f⁻¹(0))|. Disc(f) is the log of the inverse of the maximum of this quantity over all rectangles, maximized over μ. It is quite easy to show that Disc(f) is a lower bound on D(f), even for randomized, quantum, and (weakly) unbounded-error protocols.
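For tiny matrices the rectangle discrepancy under the uniform distribution can be computed exhaustively. A sketch for n = 2, comparing EQ (which has a very unbalanced rectangle, the whole matrix) with IP (whose rectangles are all fairly balanced):

```python
import itertools
import numpy as np

n = 2
size = 2 ** n
M_eq = np.eye(size, dtype=int)
M_ip = np.array([[bin(x & y).count("1") % 2 for y in range(size)]
                 for x in range(size)])

def disc_uniform(M):
    """max over rectangles A x B of |mu(R cap 1) - mu(R cap 0)|, mu uniform."""
    S = 1 - 2 * M                      # +1 on 0-inputs, -1 on 1-inputs
    subsets = list(itertools.chain.from_iterable(
        itertools.combinations(range(size), k) for k in range(1, size + 1)))
    best = 0.0
    for A in subsets:
        row = S[list(A), :].sum(axis=0)
        for B in subsets:
            best = max(best, abs(row[list(B)].sum()) / S.size)
    return best

print(disc_uniform(M_eq), disc_uniform(M_ip))  # 0.5 0.3125
```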

30 Disc(IP)=Ω(n): all rectangles are almost balanced. Disc(DISJ)=O(log n): the nondeterministic complexity is small, hence large monochromatic rectangles exist. The method thus fails to capture either randomized or quantum communication complexity.

31 Re-visit the result about inner product. Indeed it is hard to compute IP even with very small bias: error 1/2 − 2^{−εn}. BUT many functions are close to IP even if their own discrepancy bound is small. This leads to “generalized” or “smooth” discrepancy.

32 Take the function Maj(x,y)=1 iff Σ_i x_i ∧ y_i ≥ n/2. It is easy to see that Disc(Maj)=O(log n). Nevertheless Maj is close enough to IP to inherit the Ω(n) lower bound for quantum protocols.

33 Every rectangle R gets a real weight w_R. Minimize Σ_R w_R such that for all x,y with f(x,y)=1: Σ_{R: (x,y)∈R} w_R ∈ [1−ε, 1]; for all x,y with f(x,y)=0: Σ_{R: (x,y)∈R} w_R ∈ [0, ε]. Dual: put weights on inputs such that all large rectangles are almost balanced. Difference to the rectangle bound: a two-sided balance condition.

34 Relaxing the rank bound further. Why? Motivated by quantum communication: norm-based methods allow us to deal with entanglement in quantum protocols. We will arrive at (almost) the same quantity as above (the LP bound), via Grothendieck’s inequality.

35 D(f) ≥ log rank(f), and for sign matrices rank(A) ≥ ‖A‖_tr²/(mn). There is another relaxation of the rank: γ₂(M) = max over unit vectors u,v of ‖M ∘ uvᵀ‖_tr, and rank(M) ≥ γ₂(M)². Then define γ₂^α as the minimum γ₂ of any matrix that is α-close to M. This method subsumes all previous methods for lower-bounding quantum communication complexity.
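The max-characterization of γ₂ gives a computable lower bound: any particular choice of unit vectors u, v yields ‖M ∘ uvᵀ‖_tr ≤ γ₂(M). A sketch (my own choice of u = v = the normalized all-ones vector) for the 4×4 sign matrix of IP, which is Hadamard-type with all singular values equal to 2:

```python
import numpy as np

n = 2
size = 2 ** n
# Sign matrix of IP on n = 2 bits: S[x, y] = (-1)^{<x, y> mod 2}.
S = np.array([[(-1.0) ** bin(x & y).count("1") for y in range(size)]
              for x in range(size)])

def gamma2_lower(M, u, v):
    """||M o (u v^T)||_tr for unit u, v: a lower bound on gamma_2(M)
    via its characterization as a maximum over such u, v."""
    H = M * np.outer(u, v)
    return np.linalg.svd(H, compute_uv=False).sum()

u = v = np.ones(size) / np.sqrt(size)   # one particular pair of unit vectors
lb = gamma2_lower(S, u, v)
print(round(lb, 6))  # 2.0, so rank(S) >= gamma_2(S)^2 >= 4
```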

