RAPTOR CODES AMIN SHOKROLLAHI DF2003-06-001 Digital Fountain Technical Report.


1 RAPTOR CODES AMIN SHOKROLLAHI DF2003-06-001 Digital Fountain Technical Report

2 Outline  Introduction  LT codes  Raptor codes  Pre-codes  Systematic codes  Conclusion

3 Outline

4 Introduction - Digital Fountain Codes What are digital fountain codes?  Codes with a robust recovery ability  Data are broken up into many components  Redundant, duplicated information is used  Decoding succeeds once enough components are received  No re-transmission, unlike TCP Issues  High speed (almost linear time)  Low error rate (on the order of 1/k^c)

5 Introduction - Digital Fountain codes

6 Principle of digital fountain codes:  Linearly independent equations  Redundant equations Examples: Reed-Solomon codes, Tornado codes, LT codes, Raptor codes Reference: Digital Fountain with Tornado Codes and LT Codes (kcyang)

7 Introduction - Transmission Error Binary Symmetric Channel (BSC): each bit is flipped with probability p. Binary Erasure Channel (BEC): each bit is erased (replaced by the erasure symbol e) with probability p. [channel transition diagrams]

8 Introduction - Distribution F_2^k: the space of linear forms in k variables with coefficients in F_2; it is isomorphic to the space of vectors (a_1, a_2, ..., a_k). Weight of v: the number of 1's in the vector v. Distribution: let Ω_1, ..., Ω_k be a distribution on {1, ..., k}, where Ω_i denotes the probability that the value i is chosen. This distribution is often written as the polynomial Ω(x) = Σ_i Ω_i x^i. Distribution over F_2^k: for any vector v in F_2^k of weight w, the probability of v is Ω_w / C(k, w). A simple sampling algorithm: first sample a weight w from the distribution Ω(x), then sample a vector of weight w uniformly at random.
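
The two-step sampling algorithm on this slide can be sketched in a few lines of Python (the function name and the representation of omega as a list are illustrative choices, not from the report):

```python
import random

def sample_vector(k, omega, rng=random):
    """Sample v from the distribution over F_2^k described above.

    omega[d-1] is the probability Omega_d that weight d is chosen.
    Step 1: draw a weight w from Omega(x); step 2: pick a weight-w
    vector uniformly at random (w distinct positions set to 1).
    """
    w = rng.choices(range(1, k + 1), weights=omega)[0]
    v = [0] * k
    for i in rng.sample(range(k), w):
        v[i] = 1
    return v
```

With a point-mass distribution, e.g. omega = [0, 0, 1, 0, 0] for k = 5, every sampled vector has weight exactly 3.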

9 Outline

10 LT Codes - Encoding A transformation from F_2^k to F_2^N, with parameters (k, D), where k is the input size and D is a distribution over F_2^k. Encoder algorithm (repeat N times):  select a degree d from D  select a vector v of weight d uniformly at random from F_2^k  compute the output symbol y_l = Σ_i v_i AND x_i (addition is XOR)  update the encoding graph and l The field F_2 can be generalized to any field F.
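
The encoder loop above can be sketched as follows; a minimal Python sketch, where sample_vector stands in for drawing from the distribution D (names are illustrative):

```python
def lt_encode(x, N, sample_vector):
    """LT encoder sketch following the steps above.

    x is the input block (a list of bits); sample_vector(k) draws a
    vector v from the degree distribution D.  Each output symbol is
    y_l = XOR of the x_i with v_i = 1, and the chosen vectors form
    the encoding graph that the decoder needs.
    """
    k = len(x)
    graph, ys = [], []
    for _ in range(N):
        v = sample_vector(k)
        y = 0
        for i in range(k):
            if v[i]:
                y ^= x[i]          # addition in F_2 is XOR
        graph.append(v)
        ys.append(y)
    return ys, graph
```

For example, with a sampler that always returns (110), the input (101) encodes to output symbols x_0 XOR x_1 = 1.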

11 LT Codes - Encoding Example output symbols (degree d, vector v): (2, 101000), (2, 110000), (2, 000011), (2, 000101), (1, 010000), (1, 000010), (3, 100101), (1, 001000), (1, 000100)

12 LT Codes - Decoding A transformation from F_2^N to F_2^k, parameterized by the encoding graph G. Two options: ML decoding and BP decoding. BP decoder algorithm (loop):  form the subgraph G' induced by the received data  select an output node y_l of degree 1, connected to some input x_i  recover x_i from y_l, and update every output symbol connected to x_i  remove x_i  if every x_i is decoded, stop

13 LT Codes - Decoding Received symbols (degree d, vector v): (2, 101000), (2, 110000), (2, 000101), (1, 000010), (3, 100101), (1, 001000). No degree-1 symbol remains: Fault!!!

14 LT Codes - Decoding Received symbols (degree d, vector v): (2, 101000), (2, 110000), (2, 000101), (1, 000010), (3, 100101), (1, 001000), (1, 000100). Decoding completes: OK!!!

15 LT Codes – Property If an LT code with k input symbols possesses a reliable decoding algorithm, then there is a constant c such that the graph associated to the decoder has at least c·k·log(k) edges. A random LT code with k input symbols has encoding cost k/2, and ML decoding is a reliable decoding algorithm for this code with overhead 1 + O(log(k)/k). See Luby's paper for a description of LT codes with a distribution Ω(x) with Ω'(1) = O(log(k)) for which the BP decoder is a reliable decoder with k·(1 + O(√(log(k)/k))) output symbols.

16 Outline

17 Raptor Codes The graph of an LT code needs on the order of k·log(k) edges to ensure that all input nodes are covered with high probability. Reasons:  the information-theoretic lower bound may not be matched by an algorithm  all of the input symbols must be recovered Solution:  design the traditional code and the LT code appropriately together  encode the input using a traditional erasure-correcting code

18 Raptor Codes Two extreme examples:  LT codes: no pre-code; decoding time is large  PCO (pre-code-only) codes: pre-code with Ω(x) = x; space is large Raptor codes achieve good asymptotic performance by combining a pre-code with a modified LT distribution.

19 Raptor Codes Parameterized by (k, C, Ω_D). With a suitable pre-code C:  the rate R of C_n is (1 + ε/2)/(1 + ε)  the BP decoder can decode C_n on a BEC with erasure probability δ = (ε/4)/(1 + ε) = (1 - R)/2, with O(n·log(1/ε)) arithmetic operations What kind of pre-code is suitable?  LDPC codes are adopted
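
The identity δ = (ε/4)/(1 + ε) = (1 - R)/2 follows from the rate formula and can be checked numerically; a small Python sketch (function names are illustrative):

```python
def precode_rate(eps):
    """Rate of the pre-code C_n as stated above: R = (1 + eps/2)/(1 + eps)."""
    return (1 + eps / 2) / (1 + eps)

def erasure_threshold(eps):
    """Erasure probability the BP decoder tolerates: (eps/4)/(1 + eps)."""
    return (eps / 4) / (1 + eps)

# check the stated identity delta = (1 - R)/2 for several overheads eps
for eps in (0.01, 0.1, 0.5):
    R = precode_rate(eps)
    assert abs(erasure_threshold(eps) - (1 - R) / 2) < 1e-12
```

The check works because 1 - R = (ε/2)/(1 + ε), and half of that is exactly (ε/4)/(1 + ε).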

20 Outline

21 Pre-Codes LDPC  Low-density parity-check codes  Error-correcting codes  Low density gives low complexity  N inputs produce r check nodes (e.g. N = 10, r = 5)  Built from the null space of the parity matrix

22 Pre-Codes Null space  Finding the null space  Compute the reduced row echelon form, rref() http://www.stat.nctu.edu.tw/MISG/SUmmer_Course/C_language/Ch06/GaussElimination.htm
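
The rref and null-space computation referenced above can be sketched over GF(2), where elimination is just XOR of rows; a minimal Python sketch (function names are illustrative, not from the linked page):

```python
def rref_gf2(rows, ncols):
    """Reduced row echelon form of a binary matrix over GF(2)."""
    M = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(ncols):
        p = next((i for i in range(r, len(M)) if M[i][c]), None)
        if p is None:
            continue                      # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def null_space_gf2(rows, ncols):
    """Basis of {v : M v = 0} over GF(2), read off from the rref."""
    M, pivots = rref_gf2(rows, ncols)
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [0] * ncols
        v[f] = 1
        for r, p in enumerate(pivots):
            v[p] = M[r][f]                # back-substitute the free variable
        basis.append(v)
    return basis
```

For the 2 x 3 matrix with rows (110) and (011), the null space is spanned by the all-ones vector (111).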

23 Pre-Codes Encoding (repeat r times):  form the binary matrix M whose column C_i is the input X_i  find the null space N(M)  pick a vector v from N(M) at random  compute the check symbol R_l = M·v (element-wise operations are XOR)  update the encoding graph G and l Decoding:  initialize every check node to 0  for every received input X_i, set each check node R_j connected to X_i to X_i XOR R_j, and remove X_i together with all edges emanating from it from the graph  if there is a check node c of degree one, substitute its value into the value of its unique remaining neighbor among the inputs, add that value into the values of all adjacent check nodes, and remove that input with all its edges from the graph
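
The decoding rule above can be sketched as a peeling decoder; the sketch below assumes each check symbol's value is received alongside the inputs and equals the XOR of its input neighbors (all names are illustrative):

```python
def precode_erasure_decode(received, checks, check_vals):
    """Peeling decoder for the pre-code, following the steps above.

    received:   input symbols, with None marking an erasure.
    checks:     checks[c] is the set of input indices attached to check c.
    check_vals: the value R_c of each check symbol (assumed received;
                R_c is the XOR of its input neighbors).
    """
    x = list(received)
    # fold every already-known input into its checks; the residual
    # neighbor sets then hold only the still-erased inputs
    vals, nbrs = [], []
    for c, members in enumerate(checks):
        s, unknown = check_vals[c], set()
        for i in members:
            if x[i] is None:
                unknown.add(i)
            else:
                s ^= x[i]
        vals.append(s)
        nbrs.append(unknown)
    progress = True
    while progress:
        progress = False
        for c in range(len(checks)):
            if len(nbrs[c]) == 1:          # degree-one check node
                i = nbrs[c].pop()
                x[i] = vals[c]             # parity forces the erased value
                for m in range(len(checks)):
                    if i in nbrs[m]:
                        vals[m] ^= x[i]
                        nbrs[m].discard(i)
                progress = True
    return x
```

With inputs (1, ?, ?, 1) and the chain of checks {0,1}, {1,2}, {2,3} carrying values 1, 1, 0, the two erasures are recovered one after the other.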

24 Pre-Codes [worked example: a bit vector is encoded into check symbols, some positions are erased ('?'), and the decoder recovers them from the parity checks]

25 Pre-Codes – Repeat-Accumulate Codes What if a check symbol itself is faulty? RA codes:  a more robust error-correcting code  add redundancy so that check symbols can themselves be recovered  but do not add too large an overhead
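
The classical RA construction is repeat, interleave, accumulate; a minimal Python sketch (the repetition factor q and the fixed pseudo-random interleaver are illustrative choices, not parameters from the report):

```python
import random

def ra_encode(bits, q, seed=0):
    """Repeat-Accumulate encoder sketch.

    Repeat each input bit q times, interleave with a fixed
    pseudo-random permutation, then accumulate: each output bit is
    the running XOR of everything seen so far.
    """
    repeated = [b for b in bits for _ in range(q)]
    rng = random.Random(seed)
    perm = list(range(len(repeated)))
    rng.shuffle(perm)
    interleaved = [repeated[p] for p in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b                  # accumulator = XOR prefix sum
        out.append(acc)
    return out
```

One invariant that holds regardless of the interleaver: with an odd q, the last output bit equals the XOR of all input bits.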

26 Pre Codes - More

27 Outline

28 Systematic Codes One disadvantage of Raptor codes is that they are not systematic: the input symbols are not necessarily reproduced among the output symbols. Systematic codes offer better performance. Systematic Raptor codes:  find systematic positions i_1, i_2, ..., i_k  idea: apply a pre-decoding-like step to the systematic positions

29 Systematic Codes - Encoding

30 Systematic Codes The pre-code generator G (k × n) and the LT code matrix A (n × k(1+ε)) together yield the k(1+ε) vectors v_1, ..., v_{k(1+ε)}. Restricting to the systematic positions gives R = (v_{i_1}, v_{i_2}, ..., v_{i_k}), a k × k matrix, which must be invertible (so that the inverse LT step R^{-1} exists).

31 Systematic Codes - Encoding

32 For positions i_1, i_2, ..., i_k, the input z is first decoded (multiplied by the k × k matrix R^{-1}) and then encoded with the pre-code G (k × n) and the LT code A (n × k(1+ε)), so that the output symbols x_{i_1}, x_{i_2}, ..., x_{i_k} equal z_{i_1}, z_{i_2}, ..., z_{i_k}.
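
The pre-decode-then-encode identity can be checked on a toy example over GF(2). The matrix E below is an illustrative stand-in for the combined pre-code + LT map (not the actual Raptor construction), and the chosen systematic rows form the invertible matrix R:

```python
def mat_vec_gf2(M, v):
    """y = M v over GF(2); M is a list of rows of 0/1 entries."""
    return [sum(a & b for a, b in zip(row, v)) % 2 for row in M]

def inverse_gf2(M):
    """Invert a k x k binary matrix by Gauss-Jordan elimination over GF(2)."""
    k = len(M)
    A = [row[:] + [int(i == j) for j in range(k)] for i, row in enumerate(M)]
    for c in range(k):
        p = next(i for i in range(c, k) if A[i][c])   # pivot row
        A[c], A[p] = A[p], A[c]
        for i in range(k):
            if i != c and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[c])]
    return [row[k:] for row in A]

# E stands in for the combined encoding map (5 outputs, 3 inputs)
E = [[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 1]]
sys_pos = [0, 1, 4]              # rows of E forming an invertible R
R = [E[i] for i in sys_pos]
z = [1, 1, 0]
# pre-decode with R^-1, then encode with E: systematic positions carry z
x = mat_vec_gf2(E, mat_vec_gf2(inverse_gf2(R), z))
assert [x[i] for i in sys_pos] == z
```

The assertion holds for any input z, because the rows of E at the systematic positions are exactly R, so those coordinates of the output are R·R^{-1}·z = z.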

33 Systematic Codes - Decoding

34 Systematic Codes – Considerations In practice, it is good to permute the vectors v_1, ..., v_{k(1+ε)} so that the systematic positions become the first k positions. The error probability of encoding can be reduced by generating many more initial vectors than k(1+ε) in Alg. 7. To improve decoding running time:  it is not necessary to entirely re-encode the vector y in Step 2 of Alg. 14, because the decoding process in Step 1 will already have recovered a large fraction of the coordinate positions of the vector obtained by applying the pre-code to y; these positions need not be recalculated  in practice, the cost of multiplying by R in Alg. 11 is much smaller than O(k^2), because the matrix R can be "almost" upper-triangularized

35 Outline

36 Conclusion The first class of universal fountain codes was the LT codes invented by Luby. They can be decoded with an error probability that is at most inversely polynomial in the number of input symbols; the average weight of the n output symbols is Ω(log(k)) when n is close to k, and it is possible to find weight distributions that match this lower bound with a fast decoder; one such distribution was exhibited by Luby. The basic idea of Raptor codes is an additional pre-coding step in front of an appropriate LT code. In the asymptotic setting, there is a class of universal Raptor codes with linear encoding/decoding time whose failure probability converges to 0 polynomially fast in the input size.

37 Conclusion In practice, it is important to bound the failure probability. Finite-length Raptor codes can exhibit low decoding failure probabilities, by designing a specific Raptor code with guaranteed bounds on its error performance. One disadvantage of LT/Raptor codes is that they are not systematic: the input symbols are not necessarily reproduced among the output symbols. An efficient systematic version of Raptor codes is designed to obtain better performance.

38 LT Codes – Property

