296.3 Page 1: Algorithms in the Real World
Error Correcting Codes II – Cyclic Codes – Reed-Solomon Codes

296.3 Page 2: Viewing Messages as Polynomials
An (n, k, n-k+1) code: Consider the polynomial of degree k-1
  p(x) = a_{k-1} x^{k-1} + ... + a_1 x + a_0
Message: (a_{k-1}, ..., a_1, a_0)
Codeword: (p(y_0), p(y_1), ..., p(y_{n-1})) for distinct y_0, ..., y_{n-1}
To keep the p(y_i) fixed size, we use y_i, a_i ∈ GF(p^r)
To make the y_i distinct, n < p^r
Unisolvence Theorem: Any subset of size k of (p(y_0), p(y_1), ..., p(y_{n-1})) is enough to (uniquely) reconstruct p(x) using polynomial interpolation, e.g., Lagrange's formula.
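To make this concrete, here is a minimal sketch (not from the slides) of encoding by evaluation and reconstruction by Lagrange interpolation. It uses the prime field GF(101) rather than GF(p^r) to keep the arithmetic simple; the field, the helper names, and the example message are illustrative assumptions.

    # A minimal sketch: encode a message as evaluations of its polynomial over a
    # prime field, and reconstruct it from any k points via Lagrange's formula.
    P = 101  # a prime, so integers mod P form the field GF(P)

    def poly_eval(coeffs, x):
        """Evaluate a_{k-1} x^{k-1} + ... + a_0 at x (coeffs high to low), mod P."""
        acc = 0
        for a in coeffs:
            acc = (acc * x + a) % P
        return acc

    def encode(message, points):
        """Codeword = (p(y_0), ..., p(y_{n-1})) for distinct evaluation points."""
        return [poly_eval(message, y) for y in points]

    def interpolate(xs, ys, x):
        """Lagrange's formula: value at x of the unique degree < k polynomial
        through the k points (xs[i], ys[i]), all arithmetic mod P."""
        total = 0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            num, den = 1, 1
            for j, xj in enumerate(xs):
                if j != i:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # den^(P-2) = den^-1
        return total

    message = [3, 0, 7]                 # (a_2, a_1, a_0), so k = 3
    points = list(range(7))             # n = 7 distinct y_i, n < P
    codeword = encode(message, points)
    # any k = 3 of the n symbols recover p(x), e.g. positions 1, 4, 6:
    xs, ys = [1, 4, 6], [codeword[1], codeword[4], codeword[6]]
    assert [interpolate(xs, ys, y) for y in points] == codeword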

296.3 Page 3: Polynomial-Based Code
An (n, k, 2s+1) code (n = k + 2s: k message symbols plus 2s check symbols):
  Can detect 2s errors
  Can correct s errors
  Generally can correct a combination of erasures and errors as long as (number of erasures) + 2 * (number of errors) ≤ 2s

296.3 Page 4: Correcting Errors
Correcting s errors:
1. Find k + s symbols that agree on a polynomial p(x). These must exist since originally k + 2s symbols agreed and only s are in error.
2. There are no k + s symbols that agree on the wrong polynomial p'(x):
   - Any subset of k symbols will define p'(x)
   - Since at most s out of the k + s symbols are in error, p'(x) = p(x)

296.3 Page 5: A Systematic Code
Message: (m_0, m_1, ..., m_{k-1})
Find the polynomial p(x) = a_{k-1} x^{k-1} + ... + a_1 x + a_0 such that
  p(y_0) = m_0, p(y_1) = m_1, ..., p(y_{k-1}) = m_{k-1}
Codeword: (m_0, m_1, ..., m_{k-1}, p(y_k), p(y_{k+1}), ..., p(y_{n-1}))
This has the advantage that if we know there are no errors (e.g., all points lie on the same degree k-1 polynomial), it is trivial to decode. The version of RS used in practice uses something slightly different. This will allow us to use the "parity check" ideas from linear codes (i.e., is Hc^T = 0?) to quickly test for errors.

296.3 Page 6: Reed-Solomon Codes in the Real World
(204, 188, 17)_256 : ITU J.83(A)
(128, 122, 7)_256 : ITU J.83(B)
(255, 223, 33)_256 : Common in practice
  - Note that they are all byte based (i.e., symbols are from GF(2^8)).
Decoding rate on a 1.8 GHz Pentium 4:
  - (255, 251) = 89 Mbps
  - (255, 223) = 18 Mbps
Dozens of companies sell hardware cores that operate 10x faster (or more)
  - (204, 188, 17) = 320 Mbps (Altera decoder)

296.3 Page 7: Applications of Reed-Solomon Codes
Storage: CDs, DVDs, "hard drives"
Wireless: Cell phones, wireless links
Satellite and Space: TV, Mars rover, ...
Digital Television: DVD, MPEG2 layover
High Speed Modems: ADSL, DSL, ...
Good at handling burst errors. Other codes are better for random errors.
  - e.g., Gallager codes, Turbo codes

296.3 Page 8: RS and "Burst" Errors
Let's compare to Hamming codes (which are "optimal").

  code               bits    check bits
  RS (255, 253, 3)
  Hamming ( , , 3)

They can both correct 1 error, but not 2 random errors.
  - The Hamming code does this with fewer check bits.
However, RS can fix 8 contiguous bit errors in one byte.
  - Much better than the lower bound for 8 arbitrary errors.

296.3 Page 9: Galois Field GF(2^3) with irreducible polynomial x^3 + x + 1
α = x is a generator:

  power   polynomial      binary   name
  α       x               010      "2"
  α^2     x^2             100      "3"
  α^3     x + 1           011      "4"
  α^4     x^2 + x         110      "5"
  α^5     x^2 + x + 1     111      "6"
  α^6     x^2 + 1         101      "7"
  α^7     1               001      "1"

Will use this as an example.
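A small sketch (not from the slides) of GF(2^3) arithmetic that reproduces this table: multiplication is carry-less binary multiplication, reducing by the irreducible polynomial whenever the running product reaches degree 3. Elements are 3-bit integers with bits (x^2, x, 1); function and variable names are illustrative.

    IRRED = 0b1011                       # x^3 + x + 1

    def gf_mul(a, b):
        """Multiply two GF(2^3) elements: shift-and-xor, reducing by x^3 = x + 1."""
        r = 0
        while b:
            if b & 1:
                r ^= a                   # addition in GF(2^3) is xor
            b >>= 1
            a <<= 1
            if a & 0b1000:               # degree reached 3: subtract (xor) x^3 + x + 1
                a ^= IRRED
        return r

    # generate the table: alpha^1 ... alpha^7 with alpha = x = 010
    e = 1
    for i in range(1, 8):
        e = gf_mul(e, 0b010)
        print(f"alpha^{i} = {e:03b}")
    # prints 010, 100, 011, 110, 111, 101, 001, matching the table above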

296.3 Page 10: Discrete Fourier Transform (DFT)
Evaluate the polynomial m_{k-1} x^{k-1} + ... + m_1 x + m_0 at n distinct roots of unity: 1, α, α^2, α^3, ..., α^{n-1}.
Evaluating the polynomial at n points is a matrix multiply c = T m, where T is the n x n matrix with entries T_{ij} = α^{ij} (rows and columns indexed from 0).
α is a primitive n-th root of unity (α^n = 1), i.e., a generator.
Inverse DFT: multiplying by T^{-1} recovers (m_0, ..., m_{k-1}, 0, ..., 0) from the evaluations.

296.3 Page 11: DFT Example
α = x is a 7th root of unity in GF(2^3)/(x^3 + x + 1) (i.e., in the multiplicative group, which excludes the additive identity 0).
Recall α = "2", α^2 = "3", ..., α^7 = 1 = "1".
It should be clear that c = T · (m_0, m_1, ..., m_{k-1}, 0, ..., 0)^T is the same as evaluating p(x) = m_0 + m_1 x + ... + m_{k-1} x^{k-1} at the n points 1, α, α^2, ..., α^{n-1}.
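A sketch (not from the slides) of this DFT over GF(2^3): it rebuilds the power table using the common log/antilog-table trick (an alternative to the bitwise reduction in the earlier sketch) and evaluates p(x) at 1, α, ..., α^6, which is exactly applying the matrix T with entries α^{ij}. All names are illustrative.

    EXP = [1]                                 # EXP[i] = alpha^i as a 3-bit integer
    for _ in range(7):
        t = EXP[-1] << 1                      # multiply by alpha = x
        EXP.append(t ^ 0b1011 if t & 0b1000 else t)   # reduce by x^3 + x + 1
    LOG = {EXP[i]: i for i in range(7)}       # discrete logs of the nonzero elements

    def mul(a, b):
        """Multiply in GF(2^3) via log/antilog tables."""
        return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

    def dft(m, n=7):
        """c_i = p(alpha^i) = sum_j m_j alpha^(ij): row i of T = [alpha^(ij)] times m."""
        m = m + [0] * (n - len(m))            # pad (m_0, ..., m_{k-1}) with zeros
        c = []
        for i in range(n):
            acc = 0
            for j in range(n):
                acc ^= mul(m[j], EXP[(i * j) % 7])   # addition in GF(2^3) is xor
            c.append(acc)
        return c

    # the powers of alpha match the table on page 9: 010, 100, 011, ..., 001
    assert [format(e, '03b') for e in EXP[1:8]] == \
        ['010', '100', '011', '110', '111', '101', '001']
    print(dft([0b001, 0b010, 0b100]))         # evaluate 1 + alpha x + alpha^2 x^2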

296.3 Page 12: Decoding
Why is it hard?
Brute force: try all (k+2s choose k+s) possibilities and solve for each.
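To make the brute-force idea concrete, here is a sketch (not from the slides), again over the toy prime field GF(101). It is a variant of the search the slide mentions: instead of subsets of size k+s it interpolates subsets of size k and checks agreement; by the argument on page 4, any fit that agrees with at least k+s received symbols must be the transmitted polynomial. All names are illustrative.

    from itertools import combinations

    P = 101

    def fit(xs, ys):
        """Return the degree < len(xs) polynomial through the points, as a callable (mod P)."""
        def p(x):
            total = 0
            for i, (xi, yi) in enumerate(zip(xs, ys)):
                num, den = 1, 1
                for j, xj in enumerate(xs):
                    if j != i:
                        num = num * (x - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total
        return p

    def decode(points, received, k, s):
        """Try subsets of k positions; accept a fit that matches >= k + s symbols."""
        for subset in combinations(range(len(points)), k):
            p = fit([points[i] for i in subset], [received[i] for i in subset])
            agree = sum(p(y) == r for y, r in zip(points, received))
            if agree >= k + s:
                return [p(y) for y in points]     # corrected codeword
        return None

    points = list(range(7))                       # n = 7, k = 3, s = 2
    codeword = [(3 * y * y + 7) % P for y in points]          # p(y) = 3y^2 + 7
    received = codeword[:]
    received[2] = (received[2] + 1) % P           # introduce two errors (<= s)
    received[5] = (received[5] + 5) % P
    assert decode(points, received, k=3, s=2) == codeword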

296.3 Page 13: Cyclic Codes
A linear code is cyclic if:
  (c_0, c_1, ..., c_{n-1}) ∈ C  ⇒  (c_{n-1}, c_0, ..., c_{n-2}) ∈ C
Both Hamming and Reed-Solomon codes are cyclic.
Note: we might have to reorder the columns to make the code "cyclic".
Motivation: they are more efficient to decode than general codes.

296.3 Page 14: Linear Code Generator and Parity Check Matrices
View the message and codeword as vectors (m_0, m_1, ..., m_{k-1}) and (c_0, c_1, ..., c_{n-1}).
Generator Matrix:
  A k x n matrix G such that C = { mG | m ∈ Σ^k }
  Made from stacking the basis vectors of the code.
Parity Check Matrix:
  An (n-k) x n matrix H such that C = { v ∈ Σ^n | H v^T = 0 }
  Codewords are the null space of H.
These always exist for linear codes, and H G^T = 0.
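As an illustration (not from the slides), here are G and H for one systematic (7,4) Hamming code over GF(2), with the H G^T = 0 check. The particular column order is my own choice and is not necessarily the same as the course's earlier Hamming example; all names are illustrative.

    G = [  # k x n = 4 x 7, rows are basis codewords: [ I_4 | A ]
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]
    H = [  # (n-k) x n = 3 x 7: [ A^T | I_3 ], columns are the 7 nonzero 3-bit vectors
        [1, 1, 0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 1],
    ]

    def mat_vec(M, v):
        """Matrix-vector product over GF(2)."""
        return [sum(a & b for a, b in zip(row, v)) % 2 for row in M]

    # H G^T = 0: every row of G (a basis codeword) has zero syndrome
    assert all(mat_vec(H, row) == [0, 0, 0] for row in G)

    # encode m as c = mG, check H c^T = 0, and check that an error is detected
    m = [1, 0, 1, 1]
    c = [sum(m[i] & G[i][j] for i in range(4)) % 2 for j in range(7)]
    assert mat_vec(H, c) == [0, 0, 0]
    c[2] ^= 1                                 # flip one bit
    assert mat_vec(H, c) != [0, 0, 0]         # nonzero syndrome flags the error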

296.3 Page 15: RS Generator and Parity Check Polynomials
View the message (m_0, m_1, ..., m_{k-1}) as the polynomial m_0 + m_1 x + ... + m_{k-1} x^{k-1}, and the codeword (c_0, c_1, ..., c_{n-1}) as the polynomial c_0 + c_1 x + ... + c_{n-1} x^{n-1}.
Generator Polynomial:
  A degree (n-k) polynomial g(x) = g_0 + g_1 x + ... + g_{n-k} x^{n-k} with g | x^n - 1
  such that C = { m · g | m = m_0 + m_1 x + ... + m_{k-1} x^{k-1} }
Parity Check Polynomial:
  A degree k polynomial h(x) = h_0 + h_1 x + ... + h_k x^k with h | x^n - 1
  such that C = { v of degree < n | h · v = 0 (mod x^n - 1) }
These always exist for linear cyclic codes, and h · g = x^n - 1.

296.3 Page 16: Polynomial Multiplication via Matrix Multiplication
If g(x) = g_0 + g_1 x + ... + g_{n-k} x^{n-k}, we can put this generator in matrix form (k x n): row i (for i = 0, ..., k-1) holds the n-k+1 coefficients g_0, g_1, ..., g_{n-k} starting at column i, with zeros elsewhere.
Write m = m_0 + m_1 x + ... + m_{k-1} x^{k-1} as (m_0, m_1, ..., m_{k-1}).
Then c = mG gives the coefficients of the product m(x) · g(x).
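A small sketch (not from the slides) of this construction over GF(2), using g(x) = 1 + x + x^3 (the Hamming (7,4,3)_2 generator that appears on page 19). Names are illustrative.

    def gen_matrix(g, k):
        """Row i holds g_0, ..., g_{n-k} starting at column i, where n = len(g)-1 + k."""
        n = len(g) - 1 + k
        return [[0] * i + g + [0] * (n - i - len(g)) for i in range(k)]

    def poly_mul_gf2(a, b):
        """Coefficients (low to high) of a(x) * b(x) over GF(2)."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] ^= ai & bj
        return out

    g = [1, 1, 0, 1]            # g(x) = 1 + x + x^3, coefficients low to high
    m = [1, 0, 1, 1]            # m(x) = 1 + x^2 + x^3, k = 4
    G = gen_matrix(g, k=4)      # 4 x 7
    c = [sum(m[i] & G[i][j] for i in range(4)) % 2 for j in range(7)]  # c = mG
    assert c == poly_mul_gf2(m, g)   # the matrix product equals the polynomial product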

296.3 Page 17: g Generates Cyclic Codes
Codewords are linear combinations of the rows of G.
All but the last row are clearly cyclic (each is a left shift of the next row).
A right shift of the last row is x^k g mod (x^n - 1) = (g_{n-k}, 0, ..., 0, g_0, g_1, ..., g_{n-k-1}).
We will show that x^k g mod (x^n - 1) is a linear combination of the other rows.
Consider h = h_0 + h_1 x + ... + h_k x^k (with g h = x^n - 1):
  h_0 g + (h_1 x) g + ... + (h_{k-1} x^{k-1}) g + (h_k x^k) g = x^n - 1
  (h_k x^k) g = (x^n - 1) - (h_0 g + (h_1 x) g + ... + (h_{k-1} x^{k-1}) g)
  (h_k x^k) g mod (x^n - 1) = -(h_0 g + h_1 (x g) + ... + h_{k-1} (x^{k-1} g))
  x^k g mod (x^n - 1) = -h_k^{-1} (h_0 g + h_1 (x g) + ... + h_{k-1} (x^{k-1} g))

296.3 Page 18: Viewing h as a Matrix
If h = h_0 + h_1 x + ... + h_k x^k, we can put this parity check polynomial in matrix form ((n-k) x n): row i (for i = 0, ..., n-k-1) holds the k+1 coefficients h_k, h_{k-1}, ..., h_0 (in reversed order) starting at column i, with zeros elsewhere.
Then Hc^T = 0 (the syndrome gives the coefficients of x^{n-1} through x^k in h · c, which are the same as in h · c mod (x^n - 1)).

296.3 Page 19: Hamming Codes Revisited
The Hamming (7,4,3)_2 code:
  g = 1 + x + x^3
  h = x^4 + x^2 + x + 1
The columns are not identical to the previous example Hamming code.
g h = x^7 - 1,  G H^T = 0
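A quick sketch (not from the slides) checking g h = x^7 - 1 over GF(2) for these two polynomials (coefficients listed low to high); the helper name is illustrative.

    def poly_mul_gf2(a, b):
        """Coefficients (low to high) of a(x) * b(x) over GF(2)."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] ^= ai & bj
        return out

    g = [1, 1, 0, 1]            # 1 + x + x^3
    h = [1, 1, 1, 0, 1]         # 1 + x + x^2 + x^4
    # over GF(2), x^7 - 1 = x^7 + 1
    assert poly_mul_gf2(g, h) == [1, 0, 0, 0, 0, 0, 0, 1]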

296.3 Page 20: Factors of x^n - 1
(Intentionally left blank.)

296.3 Page 21: Another Way to Write g
Let α be a generator of GF(p^r). Let n = p^r - 1 (the size of the multiplicative group).
Then we can write a generator polynomial as
  g(x) = (x - α)(x - α^2) ... (x - α^{n-k}),   h = (x - α^{n-k+1}) ... (x - α^n)
Lemma: g | x^n - 1, h | x^n - 1, g h | x^n - 1   (a | b means a divides b)
Proof:
  - α^n = 1 (because of the size of the group), so α^n - 1 = 0, so α is a root of x^n - 1, so (x - α) | x^n - 1
  - similarly for α^2, α^3, ..., α^n
  - therefore x^n - 1 is divisible by (x - α)(x - α^2) ...

296.3 Page 22: Back to Reed-Solomon
Consider a generator polynomial g ∈ GF(p^r)[x] such that g | (x^n - 1).
Recall that n - k = 2s (the degree of g is n-k, so it has n-k+1 coefficients).
Encode (trick to make the code systematic):
  - m' = m x^{2s} (basically shift m by 2s)
  - b = m' (mod g), i.e., m' = q g + b for some q
  - c = m' - b = (m_{k-1}, ..., m_0, -b_{2s-1}, ..., -b_0)
  - Note that c is a cyclic codeword based on g:
    c = m' - b = q g (i.e., given m we found another message q such that q g is "systematic" for m)
Parity check: is h · c = 0?

296.3 Page 23: Example
Let's consider the (7,3,5)_8 Reed-Solomon code. We use GF(2^3)/(x^3 + x + 1):

  power   polynomial      binary   name
  α       x               010      "2"
  α^2     x^2             100      "3"
  α^3     x + 1           011      "4"
  α^4     x^2 + x         110      "5"
  α^5     x^2 + x + 1     111      "6"
  α^6     x^2 + 1         101      "7"
  α^7     1               001      "1"

296.3 Page 24: Example RS (7,3,5)_8
g = (x - α)(x - α^2)(x - α^3)(x - α^4) = x^4 + α^3 x^3 + x^2 + α x + α^3
h = (x - α^5)(x - α^6)(x - α^7) = x^3 + α^3 x^2 + α^2 x + α^4
g h = x^7 - 1
Consider the message: m = (α^4, 0, α^4) = α^4 x^2 + α^4
m' = x^4 m = α^4 x^6 + α^4 x^4 = (α^4 x^2 + x + α^3) g + (α^3 x^3 + α^6 x + α^6)
c = m' - b = (α^4, 0, α^4, α^3, 0, α^6, α^6)
Recall: α = 010 ("2"), α^2 = 100 ("3"), α^3 = 011 ("4"), α^4 = 110 ("5"), α^5 = 111 ("6"), α^6 = 101 ("7"), α^7 = 001 ("1")
c · h = 0 (mod x^7 - 1)
n = 7, k = 3, n - k = 2s = 4, d = 2s + 1 = 5
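A sketch (not from the slides) that reproduces this example end to end: it builds GF(2^3), multiplies out g, and performs the systematic encoding m' = m x^{2s}, b = m' mod g, c = m' - b from page 22. Polynomials are coefficient lists from low to high degree; all names are illustrative.

    EXP = [1]                                   # powers of alpha = x (010) in GF(2^3)
    for _ in range(7):
        t = EXP[-1] << 1
        EXP.append(t ^ 0b1011 if t & 0b1000 else t)   # reduce by x^3 + x + 1
    LOG = {EXP[i]: i for i in range(7)}

    def mul(a, b):
        return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

    def poly_mul(p, q):
        out = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                out[i + j] ^= mul(pi, qj)       # addition in GF(2^m) is xor
        return out

    def poly_mod(p, g):
        """Remainder of p divided by g (g monic), coefficients low to high."""
        p = p[:]
        for i in range(len(p) - 1, len(g) - 2, -1):
            if p[i]:
                f = p[i]                        # g is monic, so the quotient digit is p[i]
                for j, gj in enumerate(g):
                    p[i - (len(g) - 1) + j] ^= mul(f, gj)
        return p[:len(g) - 1]

    a = lambda i: EXP[i % 7]                    # a(i) = alpha^i

    # g = (x - alpha)(x - alpha^2)(x - alpha^3)(x - alpha^4); minus = plus in GF(2^m)
    g = [1]
    for i in range(1, 5):
        g = poly_mul(g, [a(i), 1])
    assert g == [a(3), a(1), 1, a(3), 1]        # x^4 + a^3 x^3 + x^2 + a x + a^3

    # systematic encoding of m = (alpha^4, 0, alpha^4): m' = m x^4, b = m' mod g, c = m' - b
    m = [a(4), 0, a(4)]                         # m_0 + m_1 x + m_2 x^2
    m_shift = [0, 0, 0, 0] + m                  # m' = m x^{2s}, 2s = 4
    b = poly_mod(m_shift, g)                    # = a^3 x^3 + a^6 x + a^6
    c = b + m                                   # c = m' - b (low 4 coeffs of m' are zero)
    assert c == [a(6), a(6), 0, a(3), a(4), 0, a(4)]   # (a^4, 0, a^4, a^3, 0, a^6, a^6) high to low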

296.3 Page 25: A Useful Theorem
Theorem: For any root β of g (i.e., g(β) = 0), β^{2s} m(β) = b(β).
Proof: x^{2s} m(x) = m'(x) = g(x) q(x) + b(x), so β^{2s} m(β) = g(β) q(β) + b(β) = b(β).
Corollary: β^{2s} m(β) = b(β) for β ∈ {α, α^2, α^3, ..., α^{2s} = α^{n-k}}.
Proof: {α, α^2, ..., α^{2s}} are the roots of g by definition.

296.3 Page 26: Fixing Errors
Theorem: Any k symbols from c can reconstruct c and hence m.
Proof: We can write 2s equations involving m (c_{n-1}, ..., c_{2s}) and b (c_{2s-1}, ..., c_0). These are
  α^{2s} m(α) = b(α)
  α^{4s} m(α^2) = b(α^2)
  ...
  α^{2s(2s)} m(α^{2s}) = b(α^{2s})
We have at most 2s unknowns, so we can solve for them. (I'm skipping showing that the equations are linearly independent.)

296.3 Page 27: Efficient Decoding
I don't plan to go into the Reed-Solomon decoding algorithm, other than to mention the steps:
  c → Syndrome Calculator → Error Polynomial (Berlekamp-Massey) → Error Locations (Chien Search) → Error Magnitudes (Forney Algorithm) → Error Corrector → m
The Berlekamp-Massey step is the hard part. CD players use this algorithm. (Can also use Euclid's algorithm.)
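Only the first stage is simple enough to sketch here (not from the slides): the syndrome calculator evaluates the received polynomial at the roots of g, and all syndromes are zero exactly when the received word is a codeword (since c = q g and g(α^i) = 0 for i = 1..2s). Shown on the (7,3,5)_8 example from page 24; all names are illustrative.

    EXP = [1]
    for _ in range(7):
        t = EXP[-1] << 1
        EXP.append(t ^ 0b1011 if t & 0b1000 else t)   # GF(2^3), reduce by x^3 + x + 1
    LOG = {EXP[i]: i for i in range(7)}

    def mul(a, b):
        return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

    def poly_eval(coeffs, x):
        """Evaluate c_0 + c_1 x + ... (coefficients low to high) by Horner's rule."""
        acc = 0
        for c in reversed(coeffs):
            acc = mul(acc, x) ^ c
        return acc

    def syndromes(r, two_s=4):
        """S_i = r(alpha^i) for i = 1..2s."""
        return [poly_eval(r, EXP[i]) for i in range(1, two_s + 1)]

    a = lambda i: EXP[i % 7]
    c = [a(6), a(6), 0, a(3), a(4), 0, a(4)]    # the codeword from page 24, low to high
    assert syndromes(c) == [0, 0, 0, 0]         # no errors: all syndromes are zero
    r = c[:]
    r[5] ^= a(2)                                # add an error value alpha^2 at position 5
    assert any(syndromes(r))                    # nonzero syndromes reveal the error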