
15-853: Algorithms in the Real World
Error Correcting Codes II
– Cyclic Codes
– Reed-Solomon Codes

And now a word from our founder…
"Governor Sandford [sic] made a useless rival, as you and I saw when in San Francisco, to the State University. I could be no party to such a thing."
– Andrew Carnegie, in a letter to Andrew White, Ambassador to Berlin, on the establishment of Stanford University, 1901

Viewing Messages as Polynomials
An (n, k, n-k+1) code: consider the polynomial of degree k-1
  p(x) = a_{k-1}x^{k-1} + … + a_1x + a_0
Message: (a_{k-1}, …, a_1, a_0)
Codeword: (p(1), p(2), …, p(n))
To keep the p(i) a fixed size, we use a_i ∈ GF(p^r).
To make the evaluation points i distinct, we need n < p^r.
Unisolvence Theorem: any subset of size k of (p(1), p(2), …, p(n)) is enough to (uniquely) reconstruct p(x) using polynomial interpolation, e.g., Lagrange's formula.
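To make the evaluate/interpolate view concrete, here is a minimal sketch (mine, not from the slides) using the prime field GF(7) in place of GF(p^r); the names (P, eval, lagrange) and the (6,3,4) parameters are illustrative assumptions.

/* Encode a message as evaluations of a degree-(k-1) polynomial over GF(7),
   then reconstruct the polynomial's value at any point from k of the
   evaluations using Lagrange's formula. */
#include <stdio.h>

#define P 7                               /* small prime: GF(P) is arithmetic mod P */

static int mod(int a) { return ((a % P) + P) % P; }

static int inv(int a) {                   /* a^(P-2) = a^-1 (Fermat's little theorem) */
    int r = 1;
    for (int i = 0; i < P - 2; i++) r = mod(r * a);
    return r;
}

static int eval(const int *a, int k, int x) {   /* Horner: a[k-1]x^{k-1} + ... + a[0] */
    int r = 0;
    for (int i = k - 1; i >= 0; i--) r = mod(r * x + a[i]);
    return r;
}

/* Lagrange interpolation: from k points (xs[i], ys[i]), return p(x0) */
static int lagrange(const int *xs, const int *ys, int k, int x0) {
    int r = 0;
    for (int i = 0; i < k; i++) {
        int term = ys[i];
        for (int j = 0; j < k; j++)
            if (j != i)
                term = mod(term * mod(x0 - xs[j]) * inv(mod(xs[i] - xs[j])));
        r = mod(r + term);
    }
    return r;
}

int main(void) {
    int a[3] = {2, 0, 5};                 /* message (a2, a1, a0) = (5, 0, 2): p(x) = 5x^2 + 2 */
    int n = 6, k = 3;                     /* a (6,3,4) code over GF(7): 6 < 7 distinct points */
    int xs[6], cw[6];
    for (int i = 0; i < n; i++) { xs[i] = i + 1; cw[i] = eval(a, k, xs[i]); }

    /* any k received symbols determine p(x); recover p(5) from the first three */
    printf("p(5) = %d (sent %d)\n", lagrange(xs, cw, k, 5), cw[4]);
    return 0;
}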

Polynomial-Based Code
A (n, k, 2s+1) code (the codeword has n symbols: the k message symbols plus 2s check symbols):
Can detect 2s errors
Can correct s errors
Generally can correct f erasures and e errors as long as f + 2e ≤ 2s
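For example, with k = 3 and 2s = 4 (so n = 7), the code can correct any 2 errors, any 4 erasures, or 1 error together with 2 erasures.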

Correcting Errors
Correcting s errors:
1. Find k + s symbols that agree on a polynomial p(x). These must exist since originally k + 2s symbols agreed and only s are in error.
2. There are no k + s symbols that agree on the wrong polynomial p'(x):
– Any subset of k symbols will define p'(x)
– Since at most s out of the k + s symbols are in error, p'(x) = p(x)

A Systematic Code
Systematic polynomial-based code:
  p(x) = a_{k-1}x^{k-1} + … + a_1x + a_0
Message: (a_{k-1}, …, a_1, a_0)
Codeword: (a_{k-1}, …, a_1, a_0, p(1), p(2), …, p(2s))
This has the advantage that if we know there are no errors, it is trivial to decode.
Later we will see that the version of RS used in practice uses something slightly different than p(1), p(2), …
This will allow us to use the "parity check" ideas from linear codes (i.e., is Hc^T = 0?) to quickly test for errors.
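As a small illustration (my numbers, continuing the GF(7) sketch above): with k = 3 and 2s = 4, the message (5, 0, 2) gives p(x) = 5x^2 + 2, so the systematic codeword is (5, 0, 2, p(1), p(2), p(3), p(4)) = (5, 0, 2, 0, 1, 5, 5).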

Reed-Solomon Codes in the Real World
(204,188,17)_256 : ITU J.83(A)
(128,122,7)_256 : ITU J.83(B)
(255,223,33)_256 : Common in practice
– Note that they are all byte based (i.e., symbols are from GF(2^8)).
Decoding rate on a 600 MHz Pentium (approx.):
– (255,251) = 45 Mbps
– (255,223) = 4 Mbps
Dozens of companies sell hardware cores that operate 10x faster (or more)
– (204,188) = 320 Mbps (Altera decoder)

Applications of Reed-Solomon Codes
Storage: CDs, DVDs, "hard drives"
Wireless: cell phones, wireless links
Satellite and Space: TV, Mars rover, …
Digital Television: DVD, MPEG-2 layover
High-Speed Modems: ADSL, DSL, …
Good at handling burst errors; other codes are better for random errors
– e.g., Gallager codes, Turbo codes

RS and "burst" errors
Let's compare to Hamming codes (which are "optimal"):
– RS (255, 253, 3)_256: 255·8 = 2040 code bits, 2·8 = 16 check bits
– Hamming (2^m - 1, 2^m - m - 1, 3)_2: 2^m - 1 code bits, m check bits
They can both correct 1 error, but not 2 random errors.
– The Hamming code does this with fewer check bits.
However, RS can fix 8 contiguous bit errors in one byte.
– Much better than the lower bound for 8 arbitrary errors.

Galois Fields
The polynomials in Z_p[x] mod p(x), where p(x) ∈ Z_p[x] is irreducible and deg(p(x)) = n, form a finite field. Such a field has p^n elements.
These fields are called Galois Fields, written GF(p^n).
The special case n = 1 reduces to the field Z_p.
The multiplicative group GF(p^n) \ {0} is cyclic, i.e., it contains a generator element. (This will be important later.)

GF(2^n)
Hugely practical! The coefficients are bits {0, 1}.
For example, the elements of GF(2^8) can be represented as a byte, one bit for each term, and those of GF(2^64) as a 64-bit word.
– e.g., x^6 + x^4 + x + 1 = 01010011
How do we do addition? Addition over Z_2 corresponds to xor, so just take the xor of the bit-strings (bytes or words in practice). This is dirt cheap.
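A small worked instance (my numbers): (x^6 + x^4 + x + 1) + (x^4 + x) = x^6 + 1, i.e., 01010011 xor 00010010 = 01000001.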

Multiplication over GF(2^n)
If n is small enough, we can use a table of all combinations. The size will be 2^n x 2^n (e.g., 64K entries for GF(2^8)).
Otherwise, use standard shift and add (xor).
Note: dividing through by the irreducible polynomial when the result overflows by one term is simply a test and an xor (for GF(2^3), just look at the x^3 bit), e.g.:
  0111 mod 1001 = 0111 (no overflow, nothing to do)
  1011 mod 1001 = 1011 xor 1001 = 0010

Multiplication over GF(2^8)

unsigned char mult(unsigned char a, unsigned char b) {
    int p = a;
    unsigned char r = 0;          /* result */
    while (b) {
        if (b & 1) r = r ^ p;
        b = b >> 1;
        p = p << 1;
        if (p & 0x100)            /* p has an x^8 term */
            p = p ^ 0x11B;        /* 0x11B = 1 0001 1011 = x^8 + x^4 + x^3 + x + 1 */
    }
    return r;
}
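A quick sanity check (my own example, not from the slides): with the polynomial 0x11B above (the same one AES uses), 0x53 and 0xCA are multiplicative inverses, so mult should return 1 for them.

#include <stdio.h>
/* assumes mult() from the slide above is in scope */
int main(void) {
    printf("%02x\n", (unsigned) mult(0x53, 0xCA));   /* prints 01 */
    return 0;
}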

Finding Inverses over GF(2^n)
Again, if n is small, just store the inverses in a table.
– The table size is just 2^n.
For larger n, use the extended Euclidean algorithm (repeated long division).
– This is again easy to do with shifts and xors.
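Another option (a sketch of mine, not mentioned on the slide): since the multiplicative group of GF(2^8) has 255 elements, a^255 = 1 for every nonzero a, so a^254 = a^-1, and square-and-multiply with the mult() routine above computes it in a handful of multiplications.

/* Invert a nonzero element of GF(2^8) by exponentiation: a^254 = a^-1.
   Uses mult() from the earlier slide; square-and-multiply on the exponent. */
unsigned char gf_inverse(unsigned char a) {
    unsigned char r = 1, p = a;
    int e = 254;
    while (e) {
        if (e & 1) r = mult(r, p);
        p = mult(p, p);
        e >>= 1;
    }
    return r;                     /* e.g., gf_inverse(0x53) == 0xCA */
}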

Galois Field GF(2^3) with irreducible polynomial x^3 + x + 1; α = x is a generator.
α^1 = x           = 010
α^2 = x^2         = 100
α^3 = x + 1       = 011
α^4 = x^2 + x     = 110
α^5 = x^2 + x + 1 = 111
α^6 = x^2 + 1     = 101
α^7 = 1           = 001
We will use this as an example.
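A tiny sketch (mine) that regenerates this table by repeatedly multiplying by α = x and reducing modulo x^3 + x + 1 (binary 1011 = 0xB):

#include <stdio.h>
int main(void) {
    unsigned v = 1;                        /* alpha^0 = 1 */
    for (int i = 1; i <= 7; i++) {
        v <<= 1;                           /* multiply by x */
        if (v & 0x8) v ^= 0xB;             /* reduce: x^3 = x + 1 */
        printf("alpha^%d = %u%u%u\n", i, (v >> 2) & 1, (v >> 1) & 1, v & 1);
    }
    return 0;
}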

Discrete Fourier Transform (DFT)
Another view of polynomial-based codes.
α is a primitive nth root of unity (α^n = 1) – a generator.
Inverse DFT: evaluate the polynomial m_{k-1}x^{k-1} + … + m_1x + m_0 at the n distinct roots of unity 1, α, α^2, α^3, …, α^{n-1}.

DFT Example
α = x is a 7th root of unity in GF(2^3) mod x^3 + x + 1 (i.e., in the multiplicative group, which excludes 0).
Recall from the table above: α = 010 = "2", α^2 = 100 = "4", …, α^7 = 001 = "1".
It should be clear that c = T · (m_0, m_1, …, m_{k-1}, 0, …, 0)^T, where T is the n x n matrix with entries T_{ij} = α^{ij}, is the same as evaluating p(x) = m_0 + m_1x + … + m_{k-1}x^{k-1} at n points.

Decoding
Why is it hard?
Brute force: try all (k+2s choose k+s) possibilities and solve for each.

Cyclic Codes
A linear code is cyclic if:
  (c_0, c_1, …, c_{n-1}) ∈ C  ⇒  (c_{n-1}, c_0, …, c_{n-2}) ∈ C
Both Hamming and Reed-Solomon codes are cyclic.
Note: we might have to reorder the columns to make the code "cyclic".
Motivation: they are more efficient to decode than general codes.

Generator and Parity Check Matrices
Generator Matrix: a k x n matrix G such that C = {mG | m ∈ Σ^k}.
– Made from stacking the basis vectors.
Parity Check Matrix: an (n-k) x n matrix H such that C = {v ∈ Σ^n | Hv^T = 0}.
– Codewords are the null space of H.
These always exist for linear codes, and HG^T = 0.

Generator and Parity Check Polynomials
Generator Polynomial: a degree (n-k) polynomial g such that C = {m·g | m = m_0 + m_1x + … + m_{k-1}x^{k-1}}, where g | x^n - 1.
Parity Check Polynomial: a degree k polynomial h such that C = {v ∈ Σ^n[x] | h·v = 0 (mod x^n - 1)}, where h | x^n - 1.
These always exist for linear cyclic codes, and h·g = x^n - 1.

Viewing g as a Matrix
If g(x) = g_0 + g_1x + … + g_{n-k}x^{n-k}, we can put this generator in matrix form: the k x n matrix G whose rows are the coefficients of g, xg, …, x^{k-1}g, i.e.,
  G = [ g_0 g_1  …  g_{n-k}   0    …    0
         0  g_0  g_1  …    g_{n-k} …    0
         …
         0   …   0   g_0   g_1  … g_{n-k} ]
Write m = m_0 + m_1x + … + m_{k-1}x^{k-1} as the row vector (m_0, m_1, …, m_{k-1}).
Then c = mG.

g Generates Cyclic Codes
Codewords are linear combinations of the rows g, xg, …, x^{k-1}g.
All but the last row are clearly cyclic: the shift of each row is the next row.
The shift of the last row is x^k·g mod (x^n - 1) = (g_{n-k}, 0, …, 0, g_0, g_1, …, g_{n-k-1}).
Consider h = h_0 + h_1x + … + h_kx^k with gh = x^n - 1:
  h_0g + (h_1x)g + … + (h_{k-1}x^{k-1})g + (h_kx^k)g = x^n - 1
  x^kg = -h_k^{-1} (h_0g + h_1(xg) + … + h_{k-1}(x^{k-1}g))  (mod x^n - 1)
This is a linear combination of the rows, so the shifted last row is again a codeword.

Viewing h as a Matrix
If h = h_0 + h_1x + … + h_kx^k, we can put this parity check polynomial in matrix form: the (n-k) x n matrix H whose rows are the coefficients of h in reverse order, shifted, i.e.,
  H = [ h_k h_{k-1}  …   h_0    0    …   0
         0   h_k  h_{k-1} …    h_0   …   0
         …
         0    …    0    h_k  h_{k-1} …  h_0 ]
Then Hc^T = 0 for every codeword c.

Hamming Codes Revisited
The Hamming (7,4,3)_2 code:
  g = 1 + x + x^3
  h = x^4 + x^2 + x + 1
  gh = x^7 - 1,  GH^T = 0
The columns are not identical to the previous example Hamming code.
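A quick check of gh = x^7 - 1 (my arithmetic, over GF(2), where addition is xor):
  (1 + x + x^3)(1 + x + x^2 + x^4)
    = (1 + x + x^2 + x^4) + (x + x^2 + x^3 + x^5) + (x^3 + x^4 + x^5 + x^7)
    = 1 + x^7,
and over GF(2), 1 + x^7 = x^7 - 1. Building the matrices as on the previous two slides, G is the 4 x 7 matrix whose rows are shifts of (1 1 0 1) and H is the 3 x 7 matrix whose rows are shifts of (1 0 1 1 1), and one can check directly that GH^T = 0.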

Factors of x^n - 1
Intentionally left blank

Another Way to Write g
Let α be a generator of GF(p^r). Let n = p^r - 1 (the size of the multiplicative group).
Then we can write a generator polynomial as
  g(x) = (x - α)(x - α^2) … (x - α^{n-k}),   h = (x - α^{n-k+1}) … (x - α^n)
Lemma: g | x^n - 1, h | x^n - 1, gh | x^n - 1   (a | b means a divides b)
Proof:
– α^n = 1 (because of the size of the group), so α^n - 1 = 0, so α is a root of x^n - 1, so (x - α) | x^n - 1
– similarly for α^2, α^3, …, α^n
– therefore x^n - 1 is divisible by (x - α)(x - α^2) …

Back to Reed-Solomon
Consider a generator polynomial g ∈ GF(p^r)[x] such that g | (x^n - 1).
Recall that n - k = 2s (so g has degree 2s, i.e., n-k+1 coefficients).
Encode:
– m' = m·x^{2s} (basically shift m by 2s)
– b = m' (mod g)
– c = m' - b = (m_{k-1}, …, m_0, -b_{2s-1}, …, -b_0)
– Note that c is a cyclic code based on g:
  m' = qg + b, so c = m' - b = qg
Parity check: is h·c = 0?

Example
Let's consider the (7,3,5)_8 Reed-Solomon code. We use GF(2^3) mod x^3 + x + 1:
α^1 = x = 010, α^2 = x^2 = 100, α^3 = x + 1 = 011, α^4 = x^2 + x = 110, α^5 = x^2 + x + 1 = 111, α^6 = x^2 + 1 = 101, α^7 = 1 = 001

Example RS (7,3,5)_8
n = 7, k = 3, n - k = 2s = 4, d = 2s + 1 = 5
g = (x - α)(x - α^2)(x - α^3)(x - α^4) = x^4 + α^3x^3 + x^2 + αx + α^3
h = (x - α^5)(x - α^6)(x - α^7) = x^3 + α^3x^2 + α^2x + α^4
gh = x^7 - 1
Consider the message m = (α^4, 0, α^4) = α^4x^2 + α^4
m' = x^4·m = α^4x^6 + α^4x^4 = (α^4x^2 + x + α^3)g + (α^3x^3 + α^6x + α^6)
c = (α^4, 0, α^4, α^3, 0, α^6, α^6) = (110, 000, 110, 011, 000, 101, 101)
  (using α = 010, α^2 = 100, α^3 = 011, α^4 = 110, α^5 = 111, α^6 = 101, α^7 = 001)
c·h = 0 (mod x^7 - 1)
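A sketch (my construction, not the slides' code) that reproduces this encoding: polynomial long division of m' by g over GF(2^3), with arrays indexed by degree; gf8_mult and the hard-coded g and m' are my own names and follow the example above.

#include <stdio.h>

/* multiply in GF(2^3) with irreducible polynomial x^3 + x + 1 (binary 1011) */
static unsigned gf8_mult(unsigned a, unsigned b) {
    unsigned r = 0;
    while (b) {
        if (b & 1) r ^= a;
        b >>= 1;
        a <<= 1;
        if (a & 0x8) a ^= 0xB;
    }
    return r;
}

int main(void) {
    /* g = x^4 + a^3 x^3 + x^2 + a x + a^3, with a = x = "2" and a^3 = "3" */
    unsigned g[5] = {3, 2, 1, 3, 1};
    /* m' = x^4 * m, for m = (a^4, 0, a^4) = a^4 x^2 + a^4 and a^4 = "6" */
    unsigned c[7] = {0, 0, 0, 0, 6, 0, 6};          /* becomes the codeword */
    unsigned work[7];

    /* long division by g (leading coefficient 1): afterwards work[0..3] = b = m' mod g */
    for (int i = 0; i < 7; i++) work[i] = c[i];
    for (int i = 6; i >= 4; i--) {
        unsigned q = work[i];
        for (int j = 0; j <= 4; j++)
            work[i - 4 + j] ^= gf8_mult(q, g[j]);
    }
    /* c = m' - b; in characteristic 2 this just drops b into the low positions */
    for (int i = 0; i < 4; i++) c[i] = work[i];

    for (int i = 6; i >= 0; i--) printf("%u ", c[i]);
    printf("\n");   /* prints 6 0 6 3 0 5 5, i.e., (a^4, 0, a^4, a^3, 0, a^6, a^6) */
    return 0;
}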

A Useful Theorem
Theorem: for any β, if g(β) = 0 then β^{2s}m(β) = b(β).
Proof: x^{2s}m(x) = m'(x) = g(x)q(x) + b(x), so β^{2s}m(β) = g(β)q(β) + b(β) = b(β).
Corollary: β^{2s}m(β) = b(β) for β ∈ {α, α^2, α^3, …, α^{2s} (= α^{n-k})}.
Proof: α, α^2, …, α^{2s} are the roots of g by definition.

Fixing Errors
Theorem: any k symbols from c can reconstruct c and hence m.
Proof: we can write 2s equations involving m (c_{n-1}, …, c_{2s}) and b (c_{2s-1}, …, c_0). These are
  α^{2s}m(α) = b(α)
  α^{4s}m(α^2) = b(α^2)
  …
  α^{2s(2s)}m(α^{2s}) = b(α^{2s})
We have at most 2s unknowns, so we can solve for them. (I'm skipping showing that the equations are linearly independent.)

Efficient Decoding
I don't plan to go into the Reed-Solomon decoding algorithm, other than to mention the steps:
  c → Syndrome Calculator → Error Polynomial (Berlekamp-Massey) → Error Locations (Chien Search) → Error Magnitudes (Forney Algorithm) → Error Corrector → m
The Berlekamp-Massey step is the hard part. (One can also use Euclid's algorithm.) CD players use this algorithm.