
1 On Gallager’s problem: New Bounds for Noisy Communication. Navin Goyal & Mike Saks; joint work with Guy Kindler (Microsoft Research)

2 Ambrose Bierce, 1842 – 1914(?): “Noise is the chief product and the authenticating sign of civilization.”
• In CS: noise appears in the study of information theory, network design, learning theory, cryptography, quantum computation, hardness of approximation, theory of social choice, embeddings of metric spaces, privacy in databases…

3 In this talk
• [El Gamal ’84]: the noisy broadcast network model.
• [Gallager ’88]: an n·loglog(n) algorithm for identity.
• Main result: Gallager’s algorithm is tight.
• Proof by reduction: generalized noisy decision trees (gnd-trees); a lower bound for gnd-trees.

4 First, a Fourier-analytic result
• Definition (Fourier): let f: {-1,1}ⁿ → {-1,1} be a Boolean function. The i’th (level-1) Fourier coefficient of f: f̂ᵢ = E_{x~U}[f(x)·xᵢ].
• [Talagrand ’96]: let p = Pr_{x~U}[f(x)=1], with p < 1/2. Then Σᵢ (f̂ᵢ)² ≤ O(p²·log(1/p)).
• Crucial for our result! (as hinted in slide #26..)
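
For concreteness, here is a minimal brute-force check of the inequality (the choice of AND as the test function, the parameter n, and the helper names are illustrative, not from the talk); the bound holds up to a universal constant, which the snippet omits:

```python
import itertools, math

def level1_fourier(f, n):
    """Exact level-1 Fourier coefficients of f: {-1,1}^n -> {-1,1},
    f_i = E_x[f(x) * x_i], by enumerating the hypercube (small n only)."""
    pts = list(itertools.product([-1, 1], repeat=n))
    coeffs = [sum(f(x) * x[i] for x in pts) / len(pts) for i in range(n)]
    p = sum(1 for x in pts if f(x) == 1) / len(pts)
    return coeffs, p

# Example: AND of n bits -- p = 2^-n, the small-p regime Talagrand addresses.
n = 8
f = lambda x: 1 if all(b == 1 for b in x) else -1
coeffs, p = level1_fourier(f, n)
weight = sum(c * c for c in coeffs)          # sum_i (f_i)^2
bound = p * p * math.log(1 / p)              # p^2 log(1/p), constant omitted
print(f"sum_i f_i^2 = {weight:.3g}, p^2 log(1/p) = {bound:.3g}")
```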

5 What next:
• Communication under noise – examples
• The noisy broadcast model
• Gallager: the algorithm and the problem
• Gnd-trees: generalized noisy decision trees
• Our results
• About the proof

6 Noisy computation, case 1: a sender transmits its n-bit input (e.g., x = 01100) over a noisy channel.
1. Noiseless channel: n transmissions.
2. Naïve repetition: n·log(n) transmissions (error polynomially small in n).
3. [Shannon ’48]: c·n transmissions (error exponentially small in n).
Aggregation of bits: big advantage.
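
A quick simulation over a binary symmetric channel makes the log factor of the naïve scheme concrete (a minimal sketch; the noise rate, repetition count, and function names are illustrative): with Θ(log n) repetitions per bit, a union bound over the n majority votes pushes the overall error below a constant, while Shannon’s block coding removes the log factor by encoding all n bits jointly.

```python
import math, random

def bsc(bit, eps):
    """Binary symmetric channel: flip the bit with probability eps."""
    return bit ^ (random.random() < eps)

def send_repetition(bits, reps, eps):
    """Naive scheme: transmit each bit `reps` times, decode by majority."""
    return [1 if sum(bsc(b, eps) for _ in range(reps)) * 2 > reps else 0
            for b in bits]

random.seed(1)
n, eps = 1000, 0.1
reps = 2 * math.ceil(math.log2(n)) + 1        # Theta(log n) repeats, odd
x = [random.randint(0, 1) for _ in range(n)]
y = send_repetition(x, reps, eps)
print("decoded correctly:", x == y, "| transmissions:", n * reps)
```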

7 Noisy computation, case 2: two parties hold x = 01100 and y = 10101. Goal: compute f(x,y) with a k-bit conversation.
1. Noiseless channel: k transmissions.
2. Naïve repetition: k·log(k).
3. [Schulman ’96]: c·k (error exponentially small in k).

8 The Noisy Broadcast Model [El Gamal ’84]
• Input: x₁,…,xₙ, one bit per player.
• One bit is transmitted (broadcast) at a time; every other player hears it through an independent noisy channel.
• Error rate: ε (a small constant).
• The order of transmissions is predefined.
• Goal: compute g(x₁,…,xₙ). In this talk: we want to compute x₁,…,xₙ itself (the identity function).
[Figure: n players, player i holding bit xᵢ; a broadcast bit arrives flipped at some of the listeners.]
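
As a reference point for the protocols below, here is a minimal sketch of the model in code (the class and method names are my own, not from the talk): one broadcast, n independent ε-noisy receptions.

```python
import random

class NoisyBroadcastNetwork:
    """Sketch of the El Gamal model: when a player broadcasts a bit, every
    player receives it through an independent eps-noisy channel."""
    def __init__(self, inputs, eps):
        self.x = list(inputs)                 # x_i = player i's input bit
        self.eps = eps
        self.log = [[] for _ in inputs]       # log[j] = player j's receptions

    def broadcast(self, bit):
        """One time step: transmit `bit`; each listener gets a noisy copy."""
        for j in range(len(self.x)):
            self.log[j].append(bit ^ (random.random() < self.eps))
```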

9 Some history
• Computing identity — naïve solution: n·log n (repetition); [Gallager ’88]: n·loglog n.
• [Yao ’97]: try thresholds first.
• [KM ’98]: any threshold in O(n).
• These fail for “adversarial noise”. In the adversarial model: [FK ’00]: OR in O(n·log* n); [N ’04]: OR in O(n).
• Gallager’s problem: can identity be computed in linear time?

10 What’s next:
• Communication under noise – examples
• The noisy broadcast model
• Gallager: an algorithm and a problem
• Gnd-trees: generalized noisy decision trees
• Statement of results
• About the proof

11 Generalized Noisy Decision (gnd) Trees
• Input: x, but access is only to noisy copies x¹, x², x³, …, where xⁱ = x ⊕ Nⁱ (Nⁱ flips each coordinate xⱼ independently w.p. ε).
• Any Boolean queries are allowed: each internal node v queries a Boolean function f_v of one noisy copy and branches on the answer (e.g., the node reached by answers 0,1 is v = “01”).
• Goal: compute g(x), minimizing depth(T).
[Figure: a gnd-tree; internal nodes query f_v(xⁱ), leaves are labeled with outputs g(x) = y.]
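
A literal rendering of the model helps fix the semantics (a sketch; the Node/run names and the up-front sampling of the noisy copies are my choices): at each step the tree may apply an arbitrary Boolean function to any one noisy copy.

```python
import random
from dataclasses import dataclass
from typing import Callable, Optional

EPS = 0.1  # per-coordinate flip probability of each noisy copy

@dataclass
class Node:
    """A gnd-tree node: an internal node queries a Boolean function of one
    noisy copy; a leaf holds an output value."""
    query: Optional[Callable[[tuple], int]] = None  # f_v, applied to copy x^i
    copy_index: int = 0                             # which noisy copy to query
    children: Optional[tuple] = None                # (child on 0, child on 1)
    output: Optional[object] = None                 # leaf label, e.g. g(x)

def run(tree: Node, x: tuple, num_copies: int):
    """Walk the tree on input x, sampling the noisy copies x^1..x^m up front."""
    copies = [tuple(b ^ (random.random() < EPS) for b in x)
              for _ in range(num_copies)]
    v = tree
    while v.output is None:
        a = v.query(copies[v.copy_index])   # any Boolean query is allowed
        v = v.children[a]
    return v.output

# Tiny example: one query -- "is the majority of noisy copy x^1 equal to 1?"
leaf0, leaf1 = Node(output="guess 0"), Node(output="guess 1")
root = Node(query=lambda y: int(sum(y) * 2 > len(y)), copy_index=0,
            children=(leaf0, leaf1))
print(run(root, (1, 1, 0, 1), num_copies=1))
```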

12 Generalized Noisy Decision (gnd) Trees
• Noisy decision trees [FPRU ’94]: may only query noisy coordinates of x.
• There, identity is computable in O(n·log(n)) (repeat each coordinate query O(log n) times and take majorities).

13 Some bounds for noisy trees

Function  | Noisy trees          | Gnd-trees
OR        | Θ(n) [FPRU]          | Θ(n)
PARITY    | Θ(n log n) [FPRU]    | ?
MAJORITY  | Θ(n log n) [FPRU]    | Θ(n) [KM*]
IDENTITY  | Θ(n log n) [FPRU]    | Θ(n log n) [GKS]

14 Our results
• Main theorem: an Ω(n·loglog(n)) lower bound for identity in the noisy broadcast network.
• Lower bound for gnd-trees: an Ω(ε³·n·log(n)) lower bound for computing identity in generalized noisy decision trees.
• Reduction theorem: a kn-step protocol in an ε-noise n.b. network ⇒ a gnd-tree of depth 2kn for noise ε^{ck}.
• Proof of main theorem: 2kn ≥ ε^{ck}·n·log n  ⇒  2k·(1/ε)^{ck} ≥ log n  ⇒  k = Ω(loglog(n)).
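
Spelling out the final arithmetic (a reconstruction; divide by n, then take logarithms, with constants like c absorbed between steps):

$$2kn \;\ge\; \varepsilon^{ck}\, n\log n
\;\Longrightarrow\; 2k\left(\tfrac{1}{\varepsilon}\right)^{ck} \;\ge\; \log n
\;\Longrightarrow\; \log(2k) + ck\log\tfrac{1}{\varepsilon} \;\ge\; \log\log n
\;\Longrightarrow\; k \;=\; \Omega(\log\log n).$$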

15 What’s next:
• About communication under noise
• The noisy broadcast model
• Gallager: the algorithm and the problem
• Generalized noisy decision trees (gnd-trees)
• Our results
• About the proof

16 About the proof — the reduction:
• A series of transformations from a broadcast protocol into a gnd-tree protocol.

17 About the proof — the reduction:
• A series of transformations from a broadcast protocol into a gnd-tree protocol.
The gnd-tree lower bound:
• Defining a knowledge measure.
• Bounding the knowledge measure by the depth of the tree.

18 Lower bound for gnd-trees
• Our claim: a gnd-tree which computes identity on x = x₁,…,xₙ requires Ω(ε³·n·log n) depth.
• We actually prove: if depth(T) ≤ δ·ε³·n·log n, then Pr_{x~U}[T returns x] < β(δ), where lim_{δ→0} β(δ) = 0.

19 The big picture
• We prove: if depth(T) ≤ δ·ε³·n·log n, then Pr_{x~U}[T returns x] < β(δ), where lim_{δ→0} β(δ) = 0.
• Structure of such proofs:
1. Define a knowledge measure M_x(v).
2. Show: T is correct only if w.h.p. M_x(ℓ) > t, where ℓ is the leaf reached by T.
3. Show: if depth(T) << n·log n, then w.h.p. M_x(ℓ) < t.
• In our case: t = log(n), and typically M_x(v,a) − M_x(v) ≤ 1/(ε³·n) — so reaching t takes roughly ε³·n·log n queries.
• Disclaimer: we consider the case where each noisy copy is queried once (more work is needed in the general case).

20 Perceived probability
• The perceived probability (“likelihood”) of x at node v: L_x(v) = Pr[x | visit(v)].
• Pr[x | visit(v)] is “multiplicative”: each query along the path multiplies it by a factor depending on the query, its answer, and x, followed by renormalization.
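
In code, the multiplicative update looks like this (a brute-force sketch for tiny n; the function names and the uniform prior are my choices): each observed answer multiplies every candidate’s likelihood by the probability that an ε-noisy copy of that candidate would produce the answer.

```python
import itertools

def answer_prob(f, x, eps, a):
    """Pr[f(x') = a] where x' is an eps-noisy copy of x: sum over all
    noise patterns N of Pr[N] * [f(x xor N) = a].  Exponential in n."""
    total = 0.0
    for noise in itertools.product([0, 1], repeat=len(x)):
        p = 1.0
        for b in noise:
            p *= eps if b else 1.0 - eps
        if f(tuple(xi ^ b for xi, b in zip(x, noise))) == a:
            total += p
    return total

def bayes_update(posterior, f, a, eps):
    """One gnd-tree query: multiply each L_x by Pr[answer = a | x], then
    renormalize -- the 'multiplicative' update of perceived probabilities."""
    new = {x: L * answer_prob(f, x, eps, a) for x, L in posterior.items()}
    z = sum(new.values())
    return {x: L / z for x, L in new.items()}

# Tiny demo: uniform prior over {0,1}^3, then observe OR(noisy copy) = 1.
n, eps = 3, 0.1
prior = {x: 1 / 2**n for x in itertools.product([0, 1], repeat=n)}
post = bayes_update(prior, lambda y: int(any(y)), 1, eps)
print(max(post.items(), key=lambda kv: kv[1]))
```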

21 Knowledge measure: 1st attempt
• Log-likelihood of x: LL_x(v) = n + log(L_x(v)).
• LL_x(root) = 0 (the prior is uniform, so L_x(root) = 2⁻ⁿ); at a correct leaf ℓ, LL_x(ℓ) ≥ n − const.
• We’d like to show: typically, LL_x(v,a) − LL_x(v) < 1/log(n).
• But: after n coordinate queries, LL_x ≈ Θ(n).
• Reason: x is quickly separated from far-away points. Separating x from its neighbors is the hardest part.

22 Knowledge measure: seriously
• Log-likelihood “gradient” at x: M^i_x(v) = log(L_x(v)) − log(L_{x⊕i}(v)).
• M_x(v) = AVGᵢ(M^i_x(v)) = log(L_x(v)) − AVGᵢ(log(L_{x⊕i}(v))).
• M_x(root) = 0; at a correct leaf ℓ, M_x(ℓ) ≥ log(n) − c.
• All that is left: the typical gain in M_x per query is at most 1/n.
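
One way to see the endpoint claim (a reconstruction, assuming the leaf ℓ outputs x and L_x(ℓ) ≥ c₁ for a constant c₁): the n strings x⊕i are distinct, so their likelihoods at ℓ sum to at most 1, and concavity of log (Jensen) does the rest:

$$\frac{1}{n}\sum_{i=1}^{n} \log L_{x\oplus i}(\ell)
\;\le\; \log\!\Big(\frac{1}{n}\sum_{i=1}^{n} L_{x\oplus i}(\ell)\Big)
\;\le\; \log\frac{1}{n},
\qquad\text{hence}\qquad
M_x(\ell) \;\ge\; \log c_1 + \log n \;=\; \log n - c.$$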

23 Gain in knowledge measure
[Figure: node v queries f(x⁵); the answer a ∈ {0,1} leads to child (v,0) or (v,1); here a = 1.]

24 Gain in knowledge measure
M^i_x(v,a) − M^i_x(v) = log(L_x(v,a)) − log(L_{x⊕i}(v,a)) − ( log(L_x(v)) − log(L_{x⊕i}(v)) )
[Figure: as before, node v queries f(x⁵) and the answer a = 1 is observed.]

25 Gain in knowledge measure
• The coup de grâce: for every query f_v and every x,
E[M_x(v,a) − M_x(v)] ≤ 1/(ε³·n) and E[(M_x(v,a) − M_x(v))²] ≤ 1/(ε³·n).
• Proof: an adaptation of [Talagrand ’96].
• Note: the gain M_x(v,a) − M_x(v) depends only on f and x!

26 Main open problems
• Show a lower bound for computing a Boolean function — not known even for a random function!
• Generalize to other network designs.

27 Thank You !

28 Gallager’s solution, simplified
1. Partition the players into groups of size log(n).
2. Each player broadcasts its bit loglog(n) times.
[Figure: a group’s players broadcasting their bits; a listener records noisy receptions such as 1,1,1 and 0,0,0.]

29 Gallager’s solution, simplified
1. Partition the players into groups of size log(n).
2. Each player broadcasts its bit loglog(n) times.
[Figure: the same build, each listener accumulating noisy triples of its group’s bits.]

30 Gallager’s solution, simplified
1. Partition the players into groups of size log(n).
2. Each player broadcasts its bit loglog(n) times.
• W.h.p., in all groups, almost all players know all of their group’s bits.
[Figure: each group member now holds an estimate, e.g. 1001, of its group’s bits.]

31 Gallager’s solution, simplified
3. Each group transmits an error-correcting code of its bits: each player transmits a constant number of code bits.
4. W.h.p., all players now know the bits of all groups.
[Figure: e.g., code(1001) = 100 111 100 011, split among the group’s members, each transmitting a constant number of bits.]
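
The two phases fit in a short simulation (a sketch under stated assumptions: the noise rate is my choice, the function names are illustrative, and the 5x repetition in phase 2 is a runnable stand-in for the constant-rate error-correcting code the real analysis needs). Total transmissions come out to O(n·loglog n):

```python
import math, random

EPS = 0.02  # each reception is flipped independently with probability EPS

def hear(bit):
    """One noisy reception of a broadcast bit."""
    return bit ^ (random.random() < EPS)

def majority(votes):
    return 1 if 2 * sum(votes) > len(votes) else 0

def gallager_sketch(x):
    """Simulation of the simplified two-phase protocol.  Phase 2 uses 5x
    repetition as a stand-in for the constant-rate error-correcting code
    that the real O(n loglog n) analysis requires."""
    n = len(x)
    g = max(2, round(math.log2(n)))              # group size ~ log n
    reps = 2 * max(1, round(math.log2(g))) + 1   # ~ loglog n repeats, odd
    decoded, transmissions = [], 0
    for start in range(0, n, g):
        group = x[start:start + g]
        # Phase 1: each member broadcasts its bit `reps` times; member j
        # majority-votes its own noisy receptions of every group bit.
        views = [[majority([hear(b) for _ in range(reps)]) for b in group]
                 for _ in group]
        transmissions += len(group) * reps
        # Phase 2: member j transmits "its" piece of the group codeword a
        # constant number of times; listeners majority-decode each bit.
        for j in range(len(group)):
            decoded.append(majority([hear(views[j][j]) for _ in range(5)]))
            transmissions += 5
    return decoded, transmissions

random.seed(0)
x = [random.randint(0, 1) for _ in range(1 << 12)]
y, t = gallager_sketch(x)
print("bit errors:", sum(a != b for a, b in zip(x, y)), "/", len(x),
      "| transmissions:", t)
```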

32 The reduction
• The program: start with a noisy broadcast protocol with kn steps, and gradually simulate the protocol in ever more “tree-like” models.
• W.l.o.g., assume each player performs 10k transmissions.
• First step: each transmission is replaced by three, only one of which is noisy.

33 The reduction
• First step: each transmission is replaced by three, only one of which is noisy.
• The transmitted bit b is a function of the player’s own bit (say x₃) and of past receptions. Instead, the player transmits x₃ over the noisy channel, together with b(0) and b(1) — what b would be for each value of its own bit — which depend only on past receptions and are transmitted noise-free.

34 The reduction
• Second step: the noisy transmissions are moved to the beginning of the protocol.
• The noisy copies x₃, x₃, x₃, … are all sent first; the noise-free bits b(0), b(1) are sent afterwards.

35 The reduction
• Second step: the noisy transmissions are moved to the beginning of the protocol.
• After the noisy phase: each player has 10k noisy copies of each bit.
• This is equivalent to having an ε^{ck}-noisy copy of x (intuitively, 10k independent ε-noisy copies of a bit leave only ε^{Θ(k)} uncertainty about it).

36 The reduction
• Third step: each player begins with an ε^{ck}-noisy copy of x (i.e., with x ⊕ N for a noise vector N).
• Each transmission depends on the transmitter’s noisy copy and on past transmissions (and perhaps a random decision).
• This is equivalent to a gnd-tree!

37 Gain in progress measure
M^i_x(v) = log(Pr[x | visit(v)]) − log(Pr[x⊕i | visit(v)])
[Figure: node v queries f(x⁵); children (v,0) and (v,1).]

38 Gain in progress measure
M^i_x(v,a) − M^i_x(v), where a = f(x⁵) is a random variable. Only depends on f!
[Figure: node v queries f(x⁵); children (v,0) and (v,1).]

