Feng Lu, Chuan Heng Foh, Jianfei Cai and Liang-Tien Chia, "LT Codes Decoding: Design and Analysis," IEEE International Symposium on Information Theory (ISIT), 2009.

Presentation transcript:

LT Codes Decoding: Design and Analysis
Feng Lu, Chuan Heng Foh, Jianfei Cai and Liang-Tien Chia
IEEE International Symposium on Information Theory (ISIT), 2009

Outline
- Introduction
- Full rank LT decoding process
  - LT decoding drawbacks
  - Full rank decoding
  - Recovering the borrowed symbol
  - Non-square case
- Random matrix rank
  - Random matrix rank when n = k
  - Random matrix rank when n > k
- Numerical results and discussion

Introduction
LT codes:
- Large values of k: perform very well [5].
- Small values of k: often encounter difficulties [7]; optimizing the configuration parameters of the degree distribution only handles message lengths up to about k ≤ 10.
- Using the Gaussian elimination method for decoding [9], the decoding complexity increases significantly.

[5] A. Shokrollahi, "Raptor Codes," IEEE Transactions on Information Theory, vol. 52, no. 6, 2006.
[7] E. Hyytiä, T. Tirronen, J. Virtamo, "Optimal Degree Distribution for LT Codes with Small Message Length," The 26th IEEE International Conference on Computer Communications (INFOCOM), 2007.
[9] J. Gentle, "Numerical Linear Algebra for Applications in Statistics," Springer-Verlag, 1998.

Introduction
We propose a new decoding process called the full rank decoding algorithm.
- To preserve the low-complexity benefit of LT codes: retain the original LT encoding and decoding process to the maximum possible extent.
- To prevent LT decoding from terminating prematurely: the proposed method extends the decodability of the LT decoding process.

Full rank LT decoding process
- LT decoding drawbacks
- Full rank decoding
- Recovering the borrowed symbol
- Non-square case

LT decoding drawbacks
The LT decoding process terminates when there is no symbol left in the ripple. When the LT decoding process terminates in this way, the packets it could not decode can often still be decoded by Gaussian elimination to recover all symbols.

LT decoding drawbacks
Viewing a packet as an equation formed by linearly combining a number of variables (or symbols) in GF(2), the set of available equations (or packets) may have full rank, so a numerical solver (or decoder) can determine all variables (or symbols). Owing to the design of the LT decoding process, however, it recovers only some of the symbols, not all of them.
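
To make the equation view concrete, here is a minimal sketch (not the paper's code) of Gaussian elimination over GF(2): each packet is a 0/1 coefficient list over the k symbols plus an XOR-able payload, and all symbols are recovered whenever the packet matrix has full rank.

def gf2_gaussian_solve(packets, k):
    """Return the k source symbols if the packet matrix has full rank, else None."""
    rows = [(list(coeffs), value) for coeffs, value in packets]
    pivot_rows = []                          # one pivot row per column, in order
    for col in range(k):
        # find a remaining row with a 1 in this column
        pivot = next((r for r in rows if r[0][col] == 1), None)
        if pivot is None:
            return None                      # rank deficient: cannot decode
        rows.remove(pivot)
        # eliminate this column from every other remaining row
        for i, (coeffs, value) in enumerate(rows):
            if coeffs[col] == 1:
                rows[i] = ([a ^ b for a, b in zip(coeffs, pivot[0])],
                           value ^ pivot[1])
        pivot_rows.append(pivot)
    # back substitution on the upper-triangular system
    symbols = [0] * k
    for col in reversed(range(k)):
        coeffs, value = pivot_rows[col]
        for j in range(col + 1, k):
            if coeffs[j]:
                value ^= symbols[j]
        symbols[col] = value
    return symbols

# Example: three packets over k = 3 symbols.
s = [0b1010, 0b0110, 0b1111]
pkts = [([1, 0, 0], s[0]),
        ([1, 1, 0], s[0] ^ s[1]),
        ([0, 1, 1], s[1] ^ s[2])]
assert gf2_gaussian_solve(pkts, 3) == s

The roughly cubic cost of this elimination in k is the significant complexity increase mentioned in the introduction.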

GF(2)
GF(2) is the Galois field of two elements; the two elements are nearly always 0 and 1.
Addition operation: XOR (addition modulo 2).
Multiplication operation: AND (multiplication modulo 2).
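
In code, these two operations are just the bitwise XOR and AND of Python integers, so combining packets amounts to XOR-ing their payloads (a generic illustration with hypothetical names, not the paper's code):

def gf2_add(a, b):
    return a ^ b          # addition in GF(2) is XOR

def gf2_mul(a, b):
    return a & b          # multiplication in GF(2) is AND

# A degree-3 packet is the GF(2) sum (bitwise XOR) of three source symbols;
# knowing two of them, the decoder recovers the third by XOR-ing them back out.
s1, s2, s3 = 0b1010, 0b0110, 0b1111
packet = s1 ^ s2 ^ s3
assert packet ^ s2 ^ s3 == s1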

Full rank decoding
1. Whenever the ripple is empty (an early termination of plain LT decoding),
2. a particular symbol is borrowed and decoded through some other method,
3. then the symbol is placed into the ripple for the LT decoding process to continue.
4. This is repeated until the LT decoding process terminates with a success.
In the case of full rank, any borrowed symbol we pick can be decoded with a suitable method.

Full rank decoding
The algorithm mainly uses LT decoding to recover symbols. When LT decoding fails, it triggers the Wiedemann algorithm to recover a borrowed symbol, then returns to LT decoding to recover the subsequent symbols.
How do we choose the borrowed symbol? Choose the symbol that is carried by the most packets.

Full rank decoding
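
The following is a minimal sketch of the decoding loop described in the slides above; solve_borrowed_symbol is a hypothetical callback standing in for the Wiedemann-based single-symbol recovery, and the packet representation is illustrative.

from collections import Counter

def full_rank_decode(packets, k, solve_borrowed_symbol):
    """Peeling (LT) decoding that borrows a symbol whenever the ripple empties.

    `packets` is a list of (iterable_of_symbol_ids, xor_of_those_symbols) pairs;
    `solve_borrowed_symbol(sym, packets, decoded)` must return the value of the
    borrowed symbol `sym` (in the paper this is the Wiedemann step)."""
    packets = [[set(ids), val] for ids, val in packets]
    decoded = {}
    while len(decoded) < k:
        ripple = [p for p in packets if len(p[0]) == 1]
        if ripple:
            ids, val = ripple[0]
            sym, sym_val = next(iter(ids)), val
        else:
            # Plain LT decoding would terminate early here; instead, borrow the
            # symbol carried by the most remaining packets and recover it by
            # another method (steps 1-2 of the algorithm above).
            counts = Counter(s for ids, _ in packets for s in ids)
            if not counts:
                raise ValueError("not enough packets to decode all symbols")
            sym = counts.most_common(1)[0][0]
            sym_val = solve_borrowed_symbol(sym, packets, decoded)
        decoded[sym] = sym_val
        # substitute the newly decoded symbol into every remaining packet,
        # which may refill the ripple and let LT decoding continue (step 3)
        for p in packets:
            if sym in p[0]:
                p[0].remove(sym)
                p[1] ^= sym_val
    return [decoded[i] for i in range(k)]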

Recovering the borrowed symbol
We need a suitable method that can recover just a single symbol at a low computational cost.
Let M denote the n × k coefficient matrix, defined over GF(2). The received packets then satisfy M x = y, where x (size k × l) holds the source symbols and y (size n × l) holds the packets, with l the symbol length.

Recovering the borrowed symbol
We let n = k and want to solve for one particular symbol, say symbol i.
Let e_i (size k × 1) be the vector whose unique 1 is located at index i, and let x' (size k × 1) describe the selection of row vectors of y, obtained by solving M^T x' = e_i.
The inner product of x' and y then gives the borrowed symbol.
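
A tiny brute-force check of this reading (solve M^T x' = e_i, then take the inner product of x' and y); the 3 × 3 matrix, single-bit symbols and the enumeration in place of the Wiedemann algorithm are purely illustrative.

import itertools
import numpy as np

k = 3
M = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])               # full rank over GF(2)
x = np.array([1, 0, 1])                 # source symbols (single bits here)
y = M.dot(x) % 2                        # received packets

i = 1                                   # index of the borrowed symbol
e_i = np.zeros(k, dtype=int)
e_i[i] = 1

# Find x' with M^T x' = e_i; Wiedemann does this without enumeration,
# but for k = 3 brute force is enough to illustrate the idea.
for bits in itertools.product([0, 1], repeat=k):
    xp = np.array(bits)
    if np.array_equal(M.T.dot(xp) % 2, e_i):
        break

# The inner product <x', y> over GF(2) recovers exactly the borrowed symbol.
assert xp.dot(y) % 2 == x[i]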

Recovering the borrowed symbol
We use the efficient Wiedemann algorithm [11] to solve this system.
A vector u is used to generate the Krylov sequence u, M u, M² u, …
Let S be the space spanned by this sequence.
M|_S : the operator M restricted to S.
f_S : the minimal polynomial of M|_S (computed using the Berlekamp-Massey algorithm [12], [13]).

[11] D. Wiedemann, "Solving sparse linear equations over finite fields," IEEE Transactions on Information Theory, vol. 32, no. 1, 1986.
[12] E. Berlekamp, "Algebraic Coding Theory," McGraw-Hill, New York, 1968.
[13] J. Massey, "Shift-register synthesis and BCH decoding," IEEE Transactions on Information Theory, vol. 15, no. 1, 1969.
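
For reference, here is the textbook Berlekamp-Massey algorithm over GF(2) used for the minimal-polynomial step [12], [13]; this is the generic algorithm, not code taken from the paper.

def berlekamp_massey_gf2(s):
    """Return (c, L): the connection polynomial c[0..L] (with c[0] = 1) of the
    shortest LFSR over GF(2) that generates the bit sequence s, and its length L.
    For every n >= L:  s[n] = c[1]*s[n-1] ^ ... ^ c[L]*s[n-L]."""
    n = len(s)
    c, b = [1] + [0] * n, [1] + [0] * n   # current and previous connection polys
    L, m = 0, 1
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            t = c[:]
            for j in range(n + 1 - m):
                c[j + m] ^= b[j]
            L, b, m = i + 1 - L, t, 1
        else:
            for j in range(n + 1 - m):
                c[j + m] ^= b[j]
            m += 1
    return c[:L + 1], L

# Example: s[n] = s[n-1] ^ s[n-2] has minimal polynomial 1 + x + x^2.
assert berlekamp_massey_gf2([0, 1, 1, 0, 1, 1, 0, 1]) == ([1, 1, 1], 2)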

Non-square case, n > k
When n > k, the coefficient matrix M is non-square.
- Find an n × k matrix M_e such that M_e^T M is of full rank; M itself should be of full rank.
- One way to obtain M_e is to randomly set an entry of each row i in M_e.
- Once x' is solved, the recovered symbol is obtained as the inner product of x' and M_e^T y.

Random matrix rank
The probability of successful decoding for our proposed algorithm equals the probability that the coefficient matrix M is of full rank: whenever M is of full rank, our proposed algorithm guarantees the success of the decoding.

Random matrix rank when n = k
Let v_i denote the i-th row vector of M. The row vectors are linearly dependent if there exists a nonzero vector (c_1, …, c_k) ∈ GF(2)^k that satisfies c_1 v_1 ⊕ c_2 v_2 ⊕ … ⊕ c_k v_k = 0.
If M has full rank, no such linear combination of the row vectors (v_1, v_2, …, v_k) produces 0.
Consider a nonzero vector c with exactly q nonzero coordinates, and define P_q to be the probability that the corresponding sum of q row vectors equals 0.

Random matrix rank when n = k
Suppose that summing the first q vectors results in a vector of degree a, and that the next vector has degree d, with its d nonzero positions chosen uniformly at random. If the two overlap in j positions, the new sum has degree a + d - 2j. The probability that the sum of the first q + 1 vectors is of degree (a + b) is therefore
Σ_d Ω(d) · C(a, (d - b)/2) · C(k - a, (d + b)/2) / C(k, d),
where the sum runs over degrees d for which j = (d - b)/2 is an integer with 0 ≤ j ≤ min(a, d) and d - j ≤ k - a.

Random matrix rank when n = k
This state transition probability allows us to determine the degree distribution of the sum of any number of vectors.

We define a transition matrix T_r of dimension (k + 1) × (k + 1), whose entries are these state transition probabilities.
Let π_q denote the degree distribution of the sum of q vectors (q ≥ 1); then π_q = π_1 · T_r^(q-1).
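
A small numerical sketch of this chain, assuming the hypergeometric transition of the previous slide and a toy degree distribution (not the paper's robust soliton): build T_r, then iterate π_q = π_1 · T_r^(q-1).

from math import comb
import numpy as np

def transition_matrix(k, omega):
    """T[a, a2] = probability that the sum's degree goes from a to a2 when one
    more random row (degree d ~ omega, positions uniform) is XOR-ed in."""
    T = np.zeros((k + 1, k + 1))
    for a in range(k + 1):
        for d, w in omega.items():
            # j = number of positions where the new row overlaps the current sum
            for j in range(max(0, d - (k - a)), min(a, d) + 1):
                T[a, a + d - 2 * j] += w * comb(a, j) * comb(k - a, d - j) / comb(k, d)
    return T

k = 20
omega = {1: 0.2, 2: 0.5, 3: 0.3}        # toy degree distribution (assumption)
T = transition_matrix(k, omega)

pi = np.zeros(k + 1)                    # pi_1: degree distribution of one row
for d, w in omega.items():
    pi[d] = w
for q in range(2, 6):                   # pi_q = pi_1 @ T^(q-1)
    pi = pi @ T
    print(f"P(sum of {q} rows has degree 0) = {pi[0]:.3e}")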

Random matrix rank when n = k
If M has full rank, no nonzero linear combination of the row vectors (v_1, v_2, …, v_k) produces 0.
P_q : the probability that the sum of q row vectors equals 0, i.e., the mass that π_q places on degree 0.
The probability of full rank is then obtained by accounting for all C(k, q) choices of q row vectors, for q = 1, …, k.

Random matrix rank when n > k
For a full rank square matrix, no linear dependency exists among any combination of the row vectors; this is no longer true for n > k, where M can have full column rank k even though its n rows are linearly dependent.
Let P(q, r) denote the probability that the matrix formed by q row vectors of M has rank r.

Random matrix rank when n > k
We can utilize methods such as eigendecomposition, or the companion matrix together with the Jordan normal form [15], to derive a closed-form expression for P(q, r) from a suitable initialization.

[15] R.A. Horn, C.R. Johnson, "Matrix Analysis," Cambridge University Press, 1985.
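
As an empirical cross-check on P(q, r), the following Monte Carlo sketch estimates the rank distribution of q random rows drawn from a toy degree distribution; it is a simulation, not the paper's closed-form expression.

import random

def gf2_rank(rows):
    """Rank over GF(2) of a list of rows given as Python int bitmasks."""
    basis = {}                           # highest set bit -> reduced basis row
    for row in rows:
        while row:
            hb = row.bit_length() - 1
            if hb in basis:
                row ^= basis[hb]         # cancel the leading bit
            else:
                basis[hb] = row          # row is independent of the basis so far
                break
    return len(basis)

def random_row(k, omega, rng):
    """A random length-k row (bitmask) with degree d ~ omega and
    d positions chosen uniformly at random."""
    d = rng.choices(list(omega), weights=list(omega.values()))[0]
    row = 0
    for pos in rng.sample(range(k), d):
        row |= 1 << pos
    return row

def estimate_rank_distribution(k, q, omega, trials=20000, seed=1):
    """Monte Carlo estimate of P(q, r) for r = 0, ..., k."""
    rng = random.Random(seed)
    counts = [0] * (k + 1)
    for _ in range(trials):
        counts[gf2_rank([random_row(k, omega, rng) for _ in range(q)])] += 1
    return [c / trials for c in counts]

# Toy example: k = 10 source symbols, q = n = 12 received packets.
omega = {1: 0.2, 2: 0.5, 3: 0.3}
dist = estimate_rank_distribution(k=10, q=12, omega=omega)
print("estimated P(q = 12, r = 10), i.e. full rank:", dist[10])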

Random matrix rank when n > k

Numerical results and discussion

[6] R. Karp, M. Luby, A. Shokrollahi, "Finite length analysis of LT codes," IEEE International Symposium on Information Theory, 2004.