LT Codes Paper by Michael Luby FOCS ‘02 Presented by Ashish Sabharwal Feb 26, 2003 CSE 590vg.

1 LT Codes Paper by Michael Luby FOCS ‘02 Presented by Ashish Sabharwal Feb 26, 2003 CSE 590vg

2 Binary Erasure Channel ("packet loss")
Code distance d ⇒ can decode d−1 erasures.
Probabilistic model: bits get erased with prob p. (Shannon) capacity of the BEC = 1 − p. In particular, p > 1/2 is decodable!
Example: input 00101 → encode → codeword 10100101 → BEC → received 10?001?? → decode → input 00101

3 LT Codes: Encoding
1. Choose a degree d from a distribution.
2. Pick d input-bit neighbors uniformly at random.
3. Compute the XOR.
Example: with input bits 1 1 1 0 1, a code bit of degree d = 2 whose neighbors are the input bits 1 and 0 gets value 1 XOR 0 = 1.
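The three encoding steps above can be sketched in Python; the particular degree distribution passed in here is a placeholder of mine (the paper's Soliton distributions appear later in the talk):

```python
import random

def lt_encode_bit(input_bits, degree_dist, rng):
    """Generate one LT code bit: (value, list of neighbor indices).

    degree_dist is a list of (degree, probability) pairs -- an
    illustrative stand-in, not the paper's distribution.
    """
    degrees, probs = zip(*degree_dist)
    # 1. Choose degree d from the distribution.
    d = rng.choices(degrees, weights=probs, k=1)[0]
    # 2. Pick d distinct input-bit neighbors uniformly at random.
    neighbors = rng.sample(range(len(input_bits)), d)
    # 3. Compute the XOR of the chosen input bits.
    value = 0
    for i in neighbors:
        value ^= input_bits[i]
    return value, neighbors

bits = [1, 1, 1, 0, 1]              # the input from the slides
rng = random.Random(0)              # fixed seed for repeatability
code_bit, nbrs = lt_encode_bit(bits, [(1, 0.2), (2, 0.5), (3, 0.3)], rng)
```

Because there is no preset rate, the same call can simply be repeated to emit as many code bits as the channel requires.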

4 LT Codes: Encoding
(figure: the bipartite graph between the five input bits 1 1 1 0 1 and an unbounded stream of code bits)

5 LT Codes: Decoding
1. Identify a code bit of remaining degree 1.
2. Recover the corresponding input bit.
(figure: input bits 1 1 1 0 1 partially recovered; ? marks still-unknown bits)

6 LT Codes: Decoding
3. Update the neighbors of this input bit by XORing its value into them (e.g. 1 = 0 XOR 1).
4. Delete the edges.
5. Repeat.

7 LT Codes: Decoding
(figure: the peeling process continues on the same example)

8 LT Codes: Decoding
(figure: a different run, e.g. 0 = 1 XOR 1; after a few steps no code bit of remaining degree 1 is left) Decoding unsuccessful!
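The peeling loop of the last few slides, including the unsuccessful outcome, can be sketched as follows (code bits are represented here as (value, neighbor-set) pairs; a sketch, not the paper's implementation):

```python
def lt_decode(k, code_bits):
    """Peel: repeatedly consume a code bit of remaining degree 1.

    code_bits: iterable of (value, neighbor-index set) pairs.
    Returns the k recovered input bits, or None if the process
    stalls with no degree-1 code bit left (decoding unsuccessful).
    """
    recovered = [None] * k
    code = [[v, set(nbrs)] for v, nbrs in code_bits]
    while any(b is None for b in recovered):
        # 1. Identify a code bit of remaining degree 1.
        ripple = [c for c in code if len(c[1]) == 1]
        if not ripple:
            return None  # decoding unsuccessful
        value, nbrs = ripple[0]
        i = next(iter(nbrs))
        # 2. Recover the corresponding input bit.
        recovered[i] = value
        # 3.-4. Update the neighbors of this input bit; delete edges.
        for c in code:
            if i in c[1]:
                c[0] ^= value
                c[1].discard(i)
        # 5. Repeat.
    return recovered

# Hand-built decodable example for the input 1 1 1 0 1:
example = [(0, {0, 1}), (1, {0}), (0, {1, 2, 3}),
           (1, {2}), (1, {3, 4}), (1, {4})]
result = lt_decode(5, example)   # recovers [1, 1, 1, 0, 1]
```

A stalled run, e.g. `lt_decode(2, [(0, {0, 1})])`, returns None: the unsuccessful case of slide 8.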

9 LT Codes: Features
Binary, efficient.
Code bits can arrive in any order.
Probabilistic model.
No preset rate: generate as many or as few code bits as the channel requires.
Almost optimal. (By contrast, Reed-Solomon codes are inefficient here; Tornado codes are optimal and linear time, but have a fixed rate.)

10 Larger Encoding Alphabet
Why? Less overhead. Partition the input into m-bit chunks; an encoding symbol is the bit-wise XOR of its neighbor chunks. We'll still think of these as binary codes.

11 Caveat: Transmitting the Graph
Option 1: send the degree + list of neighbors with each code bit.
Option 2: associate a key with each code bit; encoder and decoder apply the same function to the key to compute the neighbors (sharing a random seed for a pseudo-random generator).
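The key idea can be sketched as follows: both sides derive each code bit's neighbors from its key through a shared seeded PRNG, so the graph itself is never sent. The degree choice (uniform over 1..3) and the seed-mixing formula are illustrative assumptions of mine:

```python
import random

def neighbors_from_key(key, seed, k):
    """Derive a code bit's degree and neighbor set from its key.

    Encoder and decoder share `seed`; applying the same function to
    the same key yields the same pseudo-random choices, so the graph
    need not be transmitted. Degree uniform over 1..3 is a placeholder.
    """
    rng = random.Random(seed * 1_000_003 + key)  # deterministic mix (illustrative)
    d = rng.randint(1, 3)
    return sorted(rng.sample(range(k), d))

# Encoder and decoder agree on the neighbors without exchanging them:
enc_side = neighbors_from_key(key=7, seed=42, k=10)
dec_side = neighbors_from_key(key=7, seed=42, k=10)
```

In practice the key can simply be a sequence number or a field carried in the packet header.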

12 Outline
The Goal
All 1's distribution: the balls-and-bins case
The LT process; probabilistic machinery
Ideal Soliton distribution
Robust Soliton distribution

13 The Goal
Construct a degree distribution such that:
1. Few encoding bits are required for recovery (small t).
2. Few bit operations are needed, i.e. a small sum of degrees (small s).

14 All 1's distribution: Balls and Bins
All encoding degrees are 1, so the t unerased code bits are t balls thrown uniformly into k bins (the k input bits).
Pr[can't recover input] = Pr[some input bit is uncovered] ≤ k·(1 − 1/k)^t ≈ k·e^(−t/k)
Pr[failure] ≤ δ is guaranteed if t ≥ k ln(k/δ).

15 All 1's distribution: Balls and Bins
t = k ln(k/δ), s = k ln(k/δ).
GOOD: s = k ln(k/δ) is optimal!
BAD: too much overhead in t; t = k + O(√k · ln²(k/δ)) suffices.
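The balls-and-bins bound above is easy to check empirically; this sketch (my own, with illustrative parameters) throws t = k ln(k/δ) degree-1 code bits and counts how often some input bit stays uncovered:

```python
import math
import random

def coverage_failure_rate(k, t, trials, seed=0):
    """Fraction of trials in which t uniform balls miss some of k bins.

    The union bound predicts Pr[failure] <= k * (1 - 1/k)**t,
    which is about k * exp(-t/k).
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        covered = {rng.randrange(k) for _ in range(t)}
        if len(covered) < k:   # some input bit uncovered -> can't recover
            failures += 1
    return failures / trials

k, delta = 50, 0.1
t = math.ceil(k * math.log(k / delta))           # t = k ln(k/delta)
rate = coverage_failure_rate(k, t, trials=200)   # should be at most about delta
```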

16 Why is s = k ln(k/δ) optimal?
s edges = s balls thrown into k bins (the k input bits).
Can't recover input ⇒ some input bit is uncovered.
Pr[some input bit is uncovered] ≤ k·(1 − 1/k)^s ≈ k·e^(−s/k), so Pr[failure] ≤ δ if s ≥ k ln(k/δ).
NOTE: this line of reasoning gives an upper bound, so it is not quite right for the lower bound! Use a coupon-collector-type argument instead.
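The coupon-collector argument the slide points to can be sketched like this (a standard derivation, filled in here rather than taken from the slides):

```latex
% With degree-1 code bits, each code bit covers a uniformly random
% input bit -- a fresh "coupon". Having covered i-1 bins, the chance
% the next ball hits a new bin is (k-i+1)/k, so
\mathbb{E}[\text{balls to cover all } k \text{ bins}]
   = \sum_{i=1}^{k} \frac{k}{k-i+1} = k\,H_k \approx k \ln k .
% Moreover the coupon-collector time is sharply concentrated: with
% s \le (1-\epsilon)\, k \ln k balls, some bin remains empty with
% probability tending to 1. Hence s = \Theta(k \ln k) really is
% necessary, matching the k \ln(k/\delta) upper bound.
```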

17 The LT Process
STATE: released = { }, covered = { }, processed = { }, ripple = { }
ACTION: Init: release c2, c4, c6
(figure: input bits a1…a5, code bits c1…c6)

18 The LT Process
STATE: released = {c2,c4,c6}, covered = {a1,a3,a5}, processed = { }, ripple = {a1,a3,a5}
ACTION: Process a1

19 The LT Process
STATE: released = {c2,c4,c6,c1}, covered = {a1,a3,a5}, processed = {a1}, ripple = {a3,a5}
ACTION: Process a3

20 The LT Process
STATE: released = {c2,c4,c6,c1}, covered = {a1,a3,a5}, processed = {a1,a3}, ripple = {a5}
ACTION: Process a5

21 The LT Process
STATE: released = {c2,c4,c6,c1,c5}, covered = {a1,a3,a5,a4}, processed = {a1,a3,a5}, ripple = {a4}
ACTION: Process a4

22 The LT Process
STATE: released = {c2,c4,c6,c1,c5,c3}, covered = {a1,a3,a5,a4,a2}, processed = {a1,a3,a5,a4}, ripple = {a2}
ACTION: Process a2

23 The LT Process
STATE: released = {c2,c4,c6,c1,c5,c3}, covered = {a1,a3,a5,a4,a2}, processed = {a1,a3,a5,a4,a2}, ripple = { }
ACTION: Success!

24 The LT Process: Properties
The process corresponds to decoding.
When a code bit c_p is released:
the step at which this happens is independent of the other c_q's;
the input bit that c_p covers is independent of the other c_q's.

25 Ripple size
Desired property of the ripple:
Not too large: redundant covering.
Not too small: it might die prematurely.
GOAL: a "good" degree distribution under which the ripple neither grows nor shrinks; 1 input bit added per step. Why?

26 Degree Distributions
Degrees of code bits are chosen independently, each degree d with probability Pr[degree = d].
All 1's distribution: Pr[degree = 1] = 1, all other degrees have probability 0; the initial ripple is all the input bits ("All-At-Once distribution").

27 Machinery: q(d,L), r(d,L), r(L)
L = |unprocessed input bits|, running k, k−1, …, 1.
q(d,L) = Pr[c_p is released at L | deg(c_p) = d]
r(d,L) = Pr[c_p is released at L, deg(c_p) = d] = Pr[degree = d] · q(d,L)
r(L) = Pr[c_p is released at L] = Σ_d r(d,L)
r(L) controls the ripple size.

28 q(d,L)
(the closed-form expression for the release probability q(d,L) was shown as a formula on the slide; it is not captured in this transcript)

29 Ideal Soliton Distribution, ρ(·)
"Soliton wave": dispersion balances refraction.
Expected degree ≈ ln k.
r(L) = 1/k for all L = k, …, 1.
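The slide lists the Ideal Soliton distribution's properties; its definition from the paper is ρ(1) = 1/k and ρ(d) = 1/(d(d−1)) for d = 2…k. The sketch below encodes it and sanity-checks the two claims: the probabilities telescope to exactly 1, and the expected degree is 1/k + H_{k−1} ≈ ln k:

```python
def ideal_soliton(k):
    """Ideal Soliton distribution rho(.): rho(1) = 1/k,
    rho(d) = 1/(d*(d-1)) for d = 2..k (as defined in the paper)."""
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

rho = ideal_soliton(100)
total = sum(rho.values())                         # telescopes to exactly 1
mean_degree = sum(d * p for d, p in rho.items())  # about ln k
```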

30 Expected Behavior
Choose t = k.
Exp(s) = t · Exp(deg) ≈ k ln k.
Exp(initial ripple size) = t·ρ(1) = 1.
Exp(# code bits released per step) = t·r(L) = 1 ⇒ Exp(ripple size) = 1: optimal.

31 We expect too much…
What if the ripple vanishes too soon? In fact, that is very likely!
FIX: the Robust Soliton Distribution.
Higher initial ripple size, ≈ √k · ln(k/δ).
Expected change per step still 0.

32 Robust Soliton Distribution, μ(·)
R = c · √k · ln(k/δ)
μ(d) = (ρ(d) + τ(d)) / β, where t = kβ.
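The transcript drops the definition of τ; in the paper, τ(d) = R/(dk) for d = 1…(k/R)−1, τ(k/R) = R·ln(R/δ)/k, and τ(d) = 0 above the spike. The sketch below builds μ accordingly (rounding k/R to an integer is an implementation choice of mine):

```python
import math

def robust_soliton(k, c, delta):
    """Robust Soliton mu(d) = (rho(d) + tau(d)) / beta, with
    R = c * sqrt(k) * ln(k/delta) and t = k * beta."""
    R = c * math.sqrt(k) * math.log(k / delta)
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = [0.0] * (k + 1)
    spike = max(1, min(k, round(k / R)))   # spike at d = k/R, rounded (my choice)
    for d in range(1, spike):
        tau[d] = R / (d * k)
    tau[spike] += R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)             # normalizer; t = k * beta
    mu = [(rho[d] + tau[d]) / beta for d in range(k + 1)]
    return mu, beta

mu, beta = robust_soliton(k=1000, c=0.1, delta=0.5)
```

Since Σρ = 1 and τ only adds mass, β > 1; the overhead factor β is exactly what the next slide bounds.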

33 Robust Soliton Distribution, μ(·)
t is small: t = kβ ≈ k + O(√k · ln²(k/δ)).
Exp(s) is small: Exp(s) = t · Σ_d d·μ(d) = O(k ln(k/δ)).

34 Robust Soliton Distribution, μ(·)
The initial ripple size is not too small: Exp(initial ripple size) = t·μ(1) ≈ R ≈ √k · ln(k/δ).
The ripple is unlikely to vanish: the ripple size is a random walk of length k, and it deviates from its mean by √k · ln(k/δ) with probability ≤ δ.

35 Robust Release Probability
t·r(L) ≥ L / (L − θR) for L ≥ R, for a constant θ.
t · Σ_{L=R..2R} r(L) ≥ γ·R·ln(R/δ) for a constant γ > 0.
Proofs on board…

36 Pessimistic Filtering
Let Z = ripple size when L input bits remain unprocessed.
Let h = Pr[a released code bit covers an input bit not in the ripple]; h should be around (L − Z)/L.
If h is lowered to any value ≤ (L − Z)/L, then Pr[success] doesn't increase.

37 Pessimistic Filtering
Applied to the robust release probability: t·r(L) ≥ L/(L − θR) turns into t·r(L) = L/(L − θR) for the worst-case analysis.
Will use pessimistic filtering again later.

38 Main Theorem: Pr[success] ≥ 1 − δ
Idea: the ripple size is like a random walk of length k with mean R ≈ √k · ln(k/δ).
1. Initial ripple size ≥ γR/2 with prob ≥ 1 − δ/3 (Chernoff bound on the # of code bits of degree 1).
2. The ripple does not vanish for L ≥ R with prob ≥ 1 − δ/3.
3. The last R input bits are covered, thanks to the spike at degree k/R, with prob ≥ 1 − δ/3.

39 The ripple does not vanish for L ≥ R
Let X_L = |{code bits released at L}|; Exp(X_L) = L / (L − θR).
Let Y_L = a 0-1 random variable with Pr[Y_L = 1] = (L − θR) / L.
Let I = any end interval of {R, …, k−1} starting at L.
Ripplesize_L = γR/2 + (Σ_{L'∈I} X_L'·Y_L') − (k − L),
where γR/2 is the (pessimistically filtered-down) initial ripple size.

40 The ripple does not vanish for L ≥ R
|Σ_{L'∈I} X_L'·Y_L' − (k−L)|
≤ |Σ_{L'∈I} (X_L'·Y_L' − Exp(X_L')·Y_L')|
+ |Σ_{L'∈I} (Exp(X_L')·Y_L' − Exp(X_L')·Exp(Y_L'))|
+ |Σ_{L'∈I} Exp(X_L')·Exp(Y_L') − (k−L)|.
Each of the first two terms is ≥ γR/4 with prob ≤ δ/(6k); the third term is = 0.
Hence Pr[|Σ_{L'∈I} X_L'·Y_L' − (k−L)| ≥ γR/2] ≤ δ/(3k).

41 The ripple does not vanish for L ≥ R
Recall Ripplesize_L = γR/2 + Σ_{L'∈I} X_L'·Y_L' − (k−L). There are k−R intervals I.
Pr[the summation deviates by ≥ γR/2 for some I] ≤ δ/3.
So 0 < Ripplesize_L < γR for all L ≥ R with prob ≥ 1 − δ/3: the ripple doesn't vanish!

42 Main Theorem: Pr[success] ≥ 1 − δ
Idea: the ripple size is like a random walk of length k with mean R ≈ √k · ln(k/δ).
1. Initial ripple size ≥ γR/2 with prob ≥ 1 − δ/3 (Chernoff bound on the # of code bits of degree 1).
2. The ripple does not vanish for L ≥ R with prob ≥ 1 − δ/3.
3. The last R input bits are covered, thanks to the spike at degree k/R, with prob ≥ 1 − δ/3.

43 The last R input bits are covered
Recall t · Σ_{L=R..2R} r(L) ≥ γ·R·ln(R/δ).
By an argument similar to balls and bins, Pr[the last R input bits are not all covered] ≤ δ/3.

44 Main Theorem
With the Robust Soliton Distribution, the LT Process succeeds with probability ≥ 1 − δ, using
t = k + O(√k · ln²(k/δ)) code bits and
s = O(k · ln(k/δ)) bit operations.
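Putting the pieces of the talk together, a toy end-to-end run (my own sketch: Robust Soliton degrees, t = ⌈kβ⌉ code bits, then peeling) illustrates the theorem's claim that decoding usually succeeds with modest overhead:

```python
import math
import random

def robust_soliton_weights(k, c, delta):
    """Robust Soliton mu(.) as in the paper; returns (weights, beta)."""
    R = c * math.sqrt(k) * math.log(k / delta)
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = [0.0] * (k + 1)
    spike = max(1, min(k, round(k / R)))     # spike at d = k/R (rounded)
    for d in range(1, spike):
        tau[d] = R / (d * k)
    tau[spike] += R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)], beta

def lt_trial(k, c, delta, seed):
    """Encode t = ceil(k*beta) code bits for a random input, then peel.
    Returns True iff all k input bits are recovered."""
    rng = random.Random(seed)
    weights, beta = robust_soliton_weights(k, c, delta)
    t = math.ceil(k * beta)
    x = [rng.randint(0, 1) for _ in range(k)]
    code = []
    for _ in range(t):
        d = rng.choices(range(k + 1), weights=weights)[0]
        nbrs = set(rng.sample(range(k), d))
        v = 0
        for i in nbrs:
            v ^= x[i]
        code.append([v, nbrs])
    recovered = [None] * k
    while any(b is None for b in recovered):
        ripple = [cb for cb in code if len(cb[1]) == 1]
        if not ripple:
            return False                     # ripple vanished: failure
        v, nbrs = ripple[0]
        i = next(iter(nbrs))
        recovered[i] = v
        for cb in code:
            if i in cb[1]:
                cb[0] ^= v
                cb[1].discard(i)
    return recovered == x

successes = sum(lt_trial(k=100, c=0.1, delta=0.5, seed=s) for s in range(20))
```

At such a small k the theorem's asymptotic guarantee is loose, so individual trials can fail; the point is only that a noticeable fraction of runs decode completely from roughly 1.3k code bits.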

