1 DIGITAL COMMUNICATION: Error Correction. November 15, 2011. A.J. Han Vinck

2 Error correction is needed in high-tech devices.

3 Fundamental limits set by Shannon (1948), Bell Labs. The fundamental problem: reproducing at the receiver a message selected at the transmitter. Shannon's contribution: a bound on the efficiency (the capacity), and how to achieve that bound. Reference: The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, 1948.

4 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

6 The communication model (block diagram): source → data reduction/compression (K' bits → k bits) → data protection (k bits → n bits) → channel → decoder (n bits → k bits) → message reconstruction (k bits → K' bits) → sink.

7 Point-to-point link (block diagram): transmitter → channel → receiver. Transmitter: message → bits → signal generator (modem, physical layer). Receiver: signal processor → bits → message.

8 Position of error control coding (block diagram). Uncoded: k input bits → signal generator → channel → detector → k output bits. ECC coding: k input bits → ECC encoder → n bits → signal generator → channel → detector/decoder → k output bits. Coded modulation: k input bits → coded signal generator → channel → detector/decoder → k output bits.

9 Transmission model (OSI): the physical link provides unreliable transmission of bits; the data link control layer turns this into transmission of reliable packets.

10 Something to think about: message → compression (MPEG, JPEG, etc.) → protection of bits → channel → correction of incorrect bits (error correction) → decompression → message. Compression reduces the bit rate; protection increases it.

11 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

12 Memoryless channel: input X → P(y|x) → output Y, where the P(y|x) are the transition probabilities. Memoryless: the output at time i depends only on the input at time i; the input and output alphabets are finite.

13 Example: binary symmetric channel (BSC). An error source produces the binary error sequence E with P(1) = 1 − P(0) = p; X is the binary information sequence and the output is Y = X ⊕ E. Transition diagram: each bit is received correctly with probability 1 − p and flipped with probability p.
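A minimal Python sketch of the BSC (function name and parameters are illustrative, not from the slides): each bit of X is flipped independently with probability p, giving Y = X ⊕ E.

```python
import random

def bsc(x, p, rng=random.Random(0)):
    """Binary symmetric channel: flip each bit independently
    with crossover probability p (Y = X xor E)."""
    return [b ^ (rng.random() < p) for b in x]

x = [0, 1, 1, 0, 1, 0, 0, 1]
y = bsc(x, p=0.1)
num_errors = sum(a != b for a, b in zip(x, y))
```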

14 From AWGN to BSC: transmit X = ±A; the channel adds Gaussian noise N (probability density function with variance σ²), so Y = X + N; the receiver decides + or − from the sign of Y, which turns the channel into a BSC. Homework: calculate the capacity as a function of A and σ².
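A hedged sketch of the standard hard-decision argument for the homework (function names are mine, not the slides'): the sign detector yields a BSC with crossover probability p = Q(A/σ), whose capacity is 1 − h(p).

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hard_decision_capacity(A, sigma):
    p = q_func(A / sigma)   # probability that the noise flips the sign
    return 1 - h2(p)        # capacity of the resulting BSC, bits/channel use
```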

15 Other models. Z-channel (optical): the input 0 (light off) is always received correctly; the input 1 (light on) is received as 1 with probability 1 − p and as 0 with probability p; P(X=0) = P0. Erasure channel (MAC): inputs 0 and 1 are received intact with probability 1 − e and erased (output E) with probability e; P(X=0) = P0.

16 The erasure channel. Applications: CDMA detection, disk arrays. Inputs 0 and 1 arrive intact with probability 1 − e and are erased (output E) with probability e; P(x=0) = 1 − P(x=1) = P0. In a disk array (Disk 1 … Disk 5), a failed disk is an erasure: the position of the error is known.

17 From Gaussian to binary to erasure: input x_i = ±1, the channel adds noise e, and the output y_i = x_i + e is quantized to +1, −1, or the erasure symbol E when it falls into the unreliable middle region.

18 Erasure with errors: each input (0 or 1) is received correctly with probability 1 − p − e, flipped with probability p, and erased (output E) with probability e.

19 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

20 Modeling: networking (Ack/Nack). A single error causes a retransmission of the packet; long packets always contain an error; short packets with ECC give lower efficiency. Suppose that a packet arrives correctly with probability Q. What is then the throughput as a function of Q?

21 A simple code. For low packet loss rates (e.g. 5%), sending duplicates is expensive (it wastes bandwidth). XOR code: XOR a group of data packets together to produce a repair packet; transmit the data plus the XOR, and the receiver can recover 1 lost packet.
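A small sketch of the XOR repair idea (packet contents and names are made up): the repair packet is the XOR of all data packets, so a single missing packet equals the XOR of the repair packet with the survivors.

```python
def xor_packets(packets):
    """Bytewise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            out[i] ^= b
    return bytes(out)

data = [b"abcd", b"efgh", b"ijkl"]
repair = xor_packets(data)               # transmitted alongside the data

# Packet 1 is lost; XOR of the repair packet and the survivors recovers it.
recovered = xor_packets([data[0], data[2], repair])
assert recovered == data[1]
```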

22 CD → DVD → blue laser: increasing storage density increases the sensitivity to errors.

23 Modeling: how do we model scratches on a CD? The answer is important for the design of the ECC.

24 Modeling: binary transmission channel. Send a test sequence and record the error sequence, which shows the pattern burst, guard, burst. Problem: determination of the burst and guard space.

25 A simple error detection method: row parity. Fill an array row-wise, appending a parity bit to each row, and transmit column-wise. Result: a burst of length at most L (the number of rows) hits each row in at most one position, so any burst of length L can be detected. What happens with bursts of length larger than L?

26 Burst error model (Gilbert-Elliott). Random error channel (outputs independent): an error source with P(0) = 1 − P(1). Burst error channel (outputs dependent): the error source is driven by a two-state Markov chain with state info good or bad and transition probabilities Pgg, Pgb, Pbg, Pbb; P(0 | state = bad) = P(1 | state = bad) = 1/2, and P(0 | state = good) = 1 − P(1 | state = good) = 0.99.
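A simulation sketch of the two-state model (parameter names are mine): the state evolves as a Markov chain, and the per-bit error probability depends on the state, 1/2 in "bad" and 0.01 in "good" as on the slide.

```python
import random

def gilbert_elliott(n, p_gb, p_bg, p_err_good=0.01, p_err_bad=0.5,
                    rng=random.Random(1)):
    """Error sequence from a two-state Markov (Gilbert-Elliott) model."""
    errors, state = [], "good"
    for _ in range(n):
        p = p_err_good if state == "good" else p_err_bad
        errors.append(int(rng.random() < p))
        if state == "good":
            state = "bad" if rng.random() < p_gb else "good"
        else:
            state = "good" if rng.random() < p_bg else "bad"
    return errors

e = gilbert_elliott(100_000, p_gb=0.1, p_bg=0.99)
```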

27 Question: with P(0 | state = bad) = P(1 | state = bad) = 1/2 and P(0 | state = good) = 1 − P(1 | state = good) = 0.99, what is the average P(0) for Pgg = 0.9, Pgb = 0.1, Pbg = 0.99, Pbb = 0.01? Indicate how you can extend the model.
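One way to answer, as a sketch: average over the stationary state probabilities of the chain.

```python
# Stationary distribution of the two-state chain:
# pi_bad * P_bg = pi_good * P_gb  =>  pi_bad = P_gb / (P_gb + P_bg).
P_gb, P_bg = 0.1, 0.99
pi_bad = P_gb / (P_gb + P_bg)
pi_good = 1.0 - pi_bad
avg_P0 = pi_good * 0.99 + pi_bad * 0.5   # average P(0) over the two states
```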

28 Fritchman model for mobile radio channels: multiple good states G1 … Gn and only one bad state B; the error probability is 0 in the good states and h in the bad state. This is closer to an actual real-world channel.

29 Example (from Timo Korhonen, Helsinki). In fading channels the received data can experience burst errors that destroy a large number of consecutive bits, which is harmful for channel coding. Interleaving distributes burst errors along the data stream; the problem interleaving introduces is extra delay. The slide shows block interleaving: the received power dips after the fading channel, the interleaved data is received with a burst, block de-interleaving spreads the burst out, and the data is recovered.

30 Example. Consider the code C = {000, 111}: a burst error of length 3 cannot be corrected. Now use a 3×3 block interleaver: write the codewords A1A2A3, B1B2B3, C1C2C3 row-wise and transmit column-wise as A1B1C1A2B2C2A3B3C3. A burst causing 2 adjacent channel errors is spread by the deinterleaver so that each codeword contains at most 1 error, which this code can correct.
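A sketch of a rows × cols block interleaver (write row-wise, read column-wise), matching the 3×3 example:

```python
def interleave(seq, rows, cols):
    """Write row-wise into a rows x cols array, read out column-wise."""
    assert len(seq) == rows * cols
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(seq, rows, cols):
    """Inverse operation: write column-wise, read out row-wise."""
    out = [None] * (rows * cols)
    it = iter(seq)
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = next(it)
    return out

x = ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3"]
assert interleave(x, 3, 3) == ["A1", "B1", "C1", "A2", "B2", "C2", "A3", "B3", "C3"]
assert deinterleave(interleave(x, 3, 3), 3, 3) == x
```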

31 Middleton type of burst channel model: select channel k with probability Q(k); channel k is a memoryless channel with transition probability p(k) (channel 1, channel 2, …).

32 Impulsive noise classification. (a) Single transient model; parameters of a single transient: peak amplitude, pseudo-frequency f0 = 1/T0, damping factor, duration, interarrival time. Measurements carried out by France Telecom in a house during 40 h found 2 classes of pulses (on 1644 pulses): single transient and burst.

33 Interleaving: from bursty to random. Message → encoder → interleaver → bursty channel → interleaver⁻¹ → decoder → message: after de-interleaving the decoder sees "random" errors. Note: interleaving brings encoding and decoding delay. Homework: compare block and convolutional interleaving w.r.t. delay.

34 Interleaving: block. Channel models are difficult to derive: what exactly is a burst, and how do random and burst errors combine? For practical reasons we convert bursts into random errors: read in row-wise, transmit column-wise.

35 De-interleaving: block. Read the received stream in column-wise and read it out row-wise; a channel burst (the pattern e e e e 1 e e e e in the figure) is spread out so that each row contains only 1 error.

36 Interleaving: convolutional. Input sequence 0 gets no delay; input sequence 1 a delay of b elements; …; input sequence m−1 a delay of (m−1)b elements. Example: b = 5, m = 3.
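A sketch of an (m, b) convolutional interleaver (my own formulation of the branch structure described above): symbols are distributed round-robin over m branches, and branch j delays its symbols by j·b positions, the initial delay-line contents padded with a fill symbol.

```python
from collections import deque

def conv_interleave(symbols, m, b, fill=0):
    """Convolutional interleaver: branch j is a delay line of j*b symbols."""
    lines = [deque([fill] * (j * b)) for j in range(m)]
    out = []
    for i, s in enumerate(symbols):
        line = lines[i % m]      # round-robin over the m branches
        line.append(s)
        out.append(line.popleft())
    return out

out = conv_interleave(list(range(1, 16)), m=3, b=5)
```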

37 Example: UMTS turbo encoder (from ETSI TS v3.4.0, UMTS multiplexing and channel coding). Data is segmented into blocks of L bits, where 40 ≤ L ≤ 5114. The input Xk feeds the "upper" RSC encoder directly and, via the interleaver (interleaved input X'k), the "lower" RSC encoder. The output consists of the systematic output Xk, the uninterleaved parity Zk, and the interleaved parity Z'k.

38 UMTS interleaver: inserting data into a matrix. Data is fed row-wise into an R by C matrix, with R = 5, 10, or 20 and 8 ≤ C ≤ 256; if L < RC the matrix is padded with dummy characters. The data is then permuted within each row, the rows themselves are permuted, and finally the data is read from the matrix column-wise. (The slide traces X1 … X40 through these steps for R = 5, C = 8.)

39 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

40 A channel transition model: inputs x1 … xM, outputs y1 … yN, connected by transition probabilities P(yj | xi) (e.g. P(y1|x1), P(y2|x1), …, P(yN|x1), …, P(y1|xM), …, P(yN|xM)).

41 Error probability (MAP). Suppose for a received vector y we assign the message i as being transmitted. Then the probability of a correct decision is P(xi transmitted | y received). For the assignment j we have P(j is correct | y) = P(xj transmitted | y received). Hence, in order to maximize the probability of being correct, we assign to the received vector y the i that maximizes P(xi transmitted | y received): the maximum a posteriori probability (MAP) rule.

42 Maximum likelihood (ML) receiver: find the i that maximizes P(xi | y) = P(xi, y)/P(y) = P(y | xi) P(xi)/P(y). For equally likely xi this is equivalent to finding the maximum of P(y | xi).

43 Example. For p = 0.1 and X1 = (0 0) with P(X1) = 1/3, X2 = (1 1) with P(X2) = 2/3: give your MAP and ML decision for Y = (0 1).
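A sketch of both rules on this example (over a BSC with p = 0.1): ML compares P(y|x), MAP additionally weighs in the priors 1/3 and 2/3.

```python
def likelihood(y, x, p):
    """P(y | x) for a BSC with crossover probability p."""
    out = 1.0
    for yi, xi in zip(y, x):
        out *= p if yi != xi else (1 - p)
    return out

p = 0.1
prior = {(0, 0): 1 / 3, (1, 1): 2 / 3}
y = (0, 1)

ml_choice = max(prior, key=lambda x: likelihood(y, x, p))
map_choice = max(prior, key=lambda x: likelihood(y, x, p) * prior[x])
# Both likelihoods equal 0.09, so ML is a tie; the prior 2/3 makes
# MAP decide for (1, 1).
```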

44 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

45 Bit protection is obtained by error control codes (ECC): forward error correction (FEC), or error detection with feedback (ARQ). Performance depends on the error statistics, so error models are very important!

46 Example. Transmit 000 or 111: how many errors can we correct, and how many can we detect? Now transmit A = 00000, B = 01011, C = 10101, D = 11110: how many errors can we correct, and how many can we detect? What is the difference?

47 Practical communication system design: the message selects one of the 2^k codewords of length n in the code book; k is the number of information bits transmitted in n channel uses. The codeword is received with errors, and the channel decoder produces an estimate of the message.

48 Channel capacity. Definition: the rate R of a code is the ratio k/n, where k is the number of information bits transmitted in n channel uses. Shannon showed that for R ≤ capacity, encoding methods exist with decoding error probability → 0.

49 Encoding and decoding according to Shannon. Code: 2^k binary codewords of length n, drawn at random with P(0) = P(1) = 1/2. Channel errors: P(0→1) = P(1→0) = p, i.e. the number of typical error sequences is ≈ 2^(nh(p)). Decoder: within the space of 2^n binary sequences, search around the received sequence for a codeword with ≤ np differences.

50 Decoding error probability. A decoding error occurs when: 1. the number of errors t deviates from np, but P(|t/n − p| > ε) → 0 for n → ∞ (law of large numbers); 2. more than 1 codeword lies in the search region (the codewords are random).

51 A pictorial view: the space of 2^n vectors contains 2^k codewords, each surrounded by its own decoding region.

52 Decoder: compare the received word with all possible codewords and decode the codeword with the minimum number of differences ("most likely").

53 Example (worked on the slide): list the codewords, compare each with the received word, and count the differences; the best guess is the codeword with only 1 difference.

54 We have some problems. Mapping from information to codewords: generating codewords that are mutually far apart, and storing the code book (2^k codewords of length n). Decoding: comparing a received word with all possible codewords.

55 The classical research problems. Codebook: maximize the number of codewords of length n and the minimum distance dmin. Channel characterization: types of errors; memory in the noise behavior? channel capacity? Decoder: design of a decoding/decision algorithm; minimize complexity.

56 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

57 Definitions. The Hamming distance between x and y, dH := d(x, y), is the number of positions where xi ≠ yi. The minimum distance of a code C is dmin = min { d(x, y) | x ∈ C, y ∈ C, x ≠ y }. The Hamming weight of a vector x, w(x) := d(x, 0), is the number of positions where xi ≠ 0.
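Direct translations of the three definitions, as a sketch:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def hamming_weight(x):
    """w(x) = d(x, 0): number of nonzero positions."""
    return sum(1 for a in x if a != 0)

def minimum_distance(code):
    """Smallest distance over all pairs of distinct codewords."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

# The examples of the next slide:
assert hamming_distance("1001", "0111") == 3
assert minimum_distance(["101", "011", "110"]) == 2
```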

58 Example: Hamming distance d(1001, 0111) = 3; the minimum distance of {101, 011, 110} is 2; Hamming weight: w(…) = 4. Hamming was a famous scientist from Bell Labs and the inventor of the Hamming code.

59 Performance. A code with minimum distance dmin is capable of correcting t errors if dmin ≥ 2t + 1. Proof: if ≤ t errors occur, then since dmin ≥ 2t + 1, every incorrect codeword has at least t + 1 differences with the received word, while the transmitted codeword has at most t.

60 Picture: codewords A and B differ in 2t+1 positions; the set of words within ≤ t differences from A and the set within ≤ t differences from B do not overlap.

61 Performance. A code with minimum distance dmin is capable of correcting E erasures if dmin > E. Proof: if E < dmin erasures occur, then at least 1 position is left to distinguish between any two codewords. Note: an erasure is a position where the receiver knows that an error occurred.

62 Performance. A code with minimum distance dmin is capable of correcting E erasures and t errors if dmin > 2t + E. Proof: the erasures reduce the minimum distance by at most E; hence, if dmin − E > 2t we can still correct t errors.

63 Performance for the Gaussian channel (with P(erasure) and P(error) regions; probabilities p1, p2, p3 as defined in the slide's figure). For an error-correcting code: 2t = 2n(p2 + p3). With erasures: dmin > n(p2 + p1) + 2n·p3. Since p2 < p1, error correction without erasures is always better!

64 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

65 LINEAR CODES. Binary codes are called linear iff the component-wise modulo-2 sum of two codewords is again a codeword. Consequently, the all-zero word is a codeword.

66 Linear code generator. The codewords are linear combinations of the rows of a binary generator matrix G with dimensions k × n; G must have rank k! Example: for k = 3, n = 6 and the generator matrix G on the slide, (1,0,1)·G = (0, 0, 1, 0, 1, 1).
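A sketch of encoding with a generator matrix over GF(2). The slide's own matrix is not reproduced here, so the G below is a hypothetical systematic (6,3) example:

```python
def encode(msg, G):
    """Codeword = msg * G with arithmetic modulo 2."""
    n = len(G[0])
    return [sum(m * row[j] for m, row in zip(msg, G)) % 2 for j in range(n)]

# Hypothetical systematic generator G = [I3 | P] (illustration only).
G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

c = encode([1, 0, 1], G)   # -> [1, 0, 1, 1, 0, 1]
```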

67 Systematic codes. Let in general the matrix G be written as G = [Ik P], an identity part followed by a parity part; the slide gives an example with k = 3, n = 6. The code generated is linear and systematic, has minimum distance 3, and the efficiency of the code is 3/6.

68 Example (optimum): the single parity check code, dmin = 2, k = n − 1, with G = [I_(n−1) P] where P is the all-one column. All codewords have even weight!

69 Example (optimum): the repetition code, dmin = n, k = 1, with G = [1 1 … 1].

70 Equivalent codes. Any linear code generator in non-systematic form can be brought into systematic form G_sys = [Ik P] by elementary row operations and elementary column operations. Note: the elementary operations have an inverse. Homework: give an example for k = 4 and n = 7.

71 Property. The set of distances from all codewords to the all-zero codeword is the same as to any other codeword. Proof: d(x, y) = d(x ⊕ x, y ⊕ x) = d(0, z); by linearity z = y ⊕ x is also a codeword.

72 Thus: determining the minimum distance of a linear code is equivalent to determining the minimum Hamming weight of the nonzero codewords. The complexity of this operation is proportional to the number of codewords.

73 Example: consider the codewords 00000, 01101, 10011, 11110. Homework: determine the minimum distance.

74 Linear code generator (polynomial form). I(X) represents the k-bit info vector (i0, i1, …, i_(k−1)); g(X) is a binary polynomial of degree n − k. Then the code vector C of length n can be described by C(X) = I(X) g(X), all operations modulo 2.

75 Example: k = 4, n = 7 and g(X) = 1 + X + X³. For the information vector (1,0,1,0): C(X) = (1 + X²)(1 + X + X³) = 1 + X + X² + X⁵ → (1,1,1,0,0,1,0). The encoding procedure in (k × n) matrix form is c = I·G, where the rows of G are the shifted coefficient vectors of g(X).
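The polynomial product can be checked mechanically; a sketch with coefficient lists (lowest degree first):

```python
def poly_mul_mod2(a, b):
    """Multiply binary polynomials (coefficient lists, lowest degree
    first), coefficients reduced modulo 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]        # g(X) = 1 + X + X^3
info = [1, 0, 1, 0]     # I(X) = 1 + X^2
assert poly_mul_mod2(info, g) == [1, 1, 1, 0, 0, 1, 0]  # 1 + X + X^2 + X^5
```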

76 Implementation with a shift register: the multiplication by g(X) = 1 + X + X³ can be implemented with a shift register fed with i_(k−1) … i2 i1 i0. Homework: give a description of the shift control to obtain the result.

77 Some remarks. Generators for different k and n are constructed using mathematics and are listed in many textbooks. What remains is the decoding!

78 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

79 Bounds on minimum distance. Singleton bound: linear codes have a systematic equivalent G, so the minimum Hamming weight is ≤ n − k + 1. Hamming bound: (number of codewords) × (number of correctable error patterns) ≤ 2^n. Homework: show that Hamming codes satisfy the Hamming bound with equality!

80 Hamming bound example. Problem: give an upper bound on the size of a linear code C of length n = 6 and dmin = 3. Solution: the code corrects up to one error, thus |C| ≤ 2^6/(1 + 6) = 64/7, so |C| ≤ 9. Since for a linear code |C| = 2^k, we have |C| ≤ 8; the generator G on the slide achieves this.
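The bound itself in one line, as a sketch:

```python
from math import comb

def hamming_bound(n, t):
    """Largest M with M * sum_{i<=t} C(n, i) <= 2^n."""
    return (2 ** n) // sum(comb(n, i) for i in range(t + 1))

assert hamming_bound(6, 1) == 9   # so a linear code of length 6 has |C| <= 8
```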

81 Bounds on minimum distance (Plotkin). List all 2^k codewords of a binary linear code of length n. Every column of this list contains 2^k/2 ones, assuming no all-zero column exists (proof as exercise). Conclusion: the total number of ones in the list is n × 2^k/2; dmin must be ≤ the average weight of the 2^k − 1 nonzero codewords, and thus dmin ≤ n × 2^(k−1)/(2^k − 1).

82 Bounds on minimum distance (Gilbert). Start: select a codeword from the 2^n possible words. 1. Remove all words at distance < dmin from the selected codeword. 2. Select one of the remaining words as the next codeword. 3. Go to 1 unless no possibilities are left. Result: every step removes at most the volume of a ball of radius dmin − 1, so M ≥ 2^n / Σ_(i=0..dmin−1) C(n, i) codewords are found. Homework: show that log M / n ≥ 1 − h(2p) for dmin − 1 = 2t ≈ 2pn, p < 1/4.

83 Plot: the rate R = log2 M/n versus p ≈ t/n, showing the Singleton and Plotkin bounds together with the curves 1 − h(p) and 1 − h(2p).

84 For additive white Gaussian noise channels quantized as a binary symmetric channel, the error probability is p ≈ e^(−Es/N0), where Es is the energy per transmitted symbol and N0 the one-sided noise power spectral density. For an uncoded system, p ≈ e^(−Eb/N0). For a coded system with minimum distance d, decoding errors occur when 2t + 1 > d; with nEs = kEb this gives pc ≈ e^(−(d/2)(k/n)(Eb/N0)). CONCLUSION: make the factor C = (d/2)(k/n) > 1.

85 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

86 Richard Hamming (1915–1998). In 1947, Hamming was one of the earliest users of primitive computers at Bell Laboratories. He was frustrated by their lack of fundamental reliability and puzzled over the problem of how a computer might check and correct its own results. Within several months Hamming discovered that extra bits could be added to the internal binary numbers of the computer to redundantly encode numerical quantities. This redundancy enabled relatively simple circuitry to identify and correct any single bit that was bad within the encoded block of bits (typically one word of data). This encoding scheme, now known as the Hamming code, also detects the condition of any two bits in the encoded block failing simultaneously. In 1950 Hamming published his work as "Error Detecting and Error Correcting Codes" in the Bell System Technical Journal, vol. 29, pp. 147–160. The (7,4) code (Venn diagram on the slide): x1, x2, …, x7 are binary values (0 or 1); x1, x2, x3, x4 are information bits; x5 makes x1+x3+x4+x5 even; x6 makes x2+x3+x4+x6 even; x7 makes x1+x2+x3+x7 even. x1, x2, …, x7 are transmitted (stored), and any single bit change can be corrected.
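A sketch of the (7,4) code using exactly the slide's three parity equations: the syndrome (the pattern of failed checks) identifies the flipped position, since every position participates in a distinct nonempty subset of the checks.

```python
# Parity sets from the slide (0-based): x5 checks {x1,x3,x4},
# x6 checks {x2,x3,x4}, x7 checks {x1,x2,x3}.
CHECKS = [(4, (0, 2, 3)), (5, (1, 2, 3)), (6, (0, 1, 2))]

def encode74(info):
    """info = [x1, x2, x3, x4] -> 7 bits with even parity per check."""
    w = list(info) + [0, 0, 0]
    for p, deps in CHECKS:
        w[p] = sum(w[i] for i in deps) % 2
    return w

def correct74(word):
    """Recompute the parities; the failure pattern locates a single error."""
    w = list(word)
    syndrome = tuple(sum(w[i] for i in deps + (p,)) % 2 for p, deps in CHECKS)
    if any(syndrome):
        for pos in range(7):
            pattern = tuple(int(pos in deps or pos == p) for p, deps in CHECKS)
            if pattern == syndrome:
                w[pos] ^= 1
                break
    return w

c = encode74([1, 0, 1, 1])
r = list(c)
r[2] ^= 1                    # a single bit error
assert correct74(r) == c
```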

87 Simpler code, same principle: arrange the info bits x1 x2 x3 x4 in an array and append the check bits x5, x6; transmit x1 x2 x3 x4 x5 x6 and decode the received word. Q: transmission efficiency?

88 The next code has length 15 and 11 information bits: information bits x1 … x11 and parity bits P1, P2, P3, P4 arranged in an array (figure on the slide). Q: check that 1 error can be corrected. Q: transmission efficiency?

89 Why make things complicated? The (3,1) code: one information bit X1 with two parity bits P1, P2.

90 Can we do a general presentation? YES: for example the (31,26) code.

91 Hamming codes have minimum distance 3. Construction: G = [Ik P], where the rows of P are all m-tuples of Hamming weight > 1 and k = 2^m − m − 1. Check that the minimum distance is 3! Give the efficiency of the code.

92 Example: k = 4, n = 7, m = 3, with G = [I4 P] as on the slide. In Bluetooth a shortened (15,10) Hamming code is used.

93 Syndrome decoding. Let G = [Ik P]; then construct Hᵀ = [P; I_(n−k)], the parity part P stacked on top of an identity matrix. For all codewords c = xG we have cHᵀ = xGHᵀ = 0. Hence, for a received noisy vector c ⊕ n: (c ⊕ n)Hᵀ = cHᵀ ⊕ nHᵀ = nHᵀ =: S.

94 Example (worked on the slide): a generator G with its Hᵀ, an information vector x and its codeword c = xG with cHᵀ = 0, an error vector n, the received word c ⊕ n, and its syndrome S = [c ⊕ n]Hᵀ. Obvious fast decoder: precalculate at the receiver the syndromes of all correctable error patterns.
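A sketch of the "fast decoder" for a hypothetical (6,3) code (the slide's own G is not reproduced here): precompute the syndrome of every single-error pattern once, after which decoding is a table lookup.

```python
def vec_mat_mod2(v, M):
    """v * M over GF(2); M is a list of rows."""
    cols = len(M[0])
    return tuple(sum(vi * M[i][j] for i, vi in enumerate(v)) % 2
                 for j in range(cols))

P = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]                        # hypothetical parity part
G = [[1, 0, 0] + P[0], [0, 1, 0] + P[1], [0, 0, 1] + P[2]]   # G = [I | P]
HT = P + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]                   # H^T = [P; I]

syndrome_table = {}                     # syndrome -> error pattern
for pos in range(6):
    e = [0] * 6
    e[pos] = 1
    syndrome_table[vec_mat_mod2(e, HT)] = tuple(e)

def decode(r):
    s = vec_mat_mod2(r, HT)
    if s in syndrome_table:             # flip the bit the syndrome points at
        r = [ri ^ ei for ri, ei in zip(r, syndrome_table[s])]
    return r

c = vec_mat_mod2([1, 0, 1], G)          # a codeword; c * H^T = 0
r = list(c)
r[4] ^= 1                               # one channel error
assert tuple(decode(r)) == c
```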

95 In system form: from the received word c ⊕ n, calculate the syndrome [c ⊕ n]Hᵀ = S, look it up among the precalculated syndromes to obtain the error estimate n*, and output c ⊕ n ⊕ n*; when n = n*, then n ⊕ n* = 0. Homework: choose parameters that can be implemented.

96 The parity check matrix. We saw that C = I·G, where G is k × n, and I·G·Hᵀ = 0, where Hᵀ is n × (n−k). Proposition: only codewords give CHᵀ = 0. Proof: take the systematic encoder presentation; there are 2^(n−k) different syndromes, and 2^k different vectors cause the same syndrome (nHᵀ = S and mHᵀ = S imply (n ⊕ m)Hᵀ = 0, i.e. they differ by a codeword); the 2^k codewords give the syndrome 0.

97 The parity check matrix: a property to be used later. Any dmin − 1 rows of Hᵀ are linearly independent. Reason: if not, then fewer than dmin rows would sum to the syndrome 0; but only codewords give syndrome 0, and the nonzero ones have minimum weight ≥ dmin.

98 Varshamov bound. Let us construct a matrix Hᵀ with dimensions n × (n−k) for a given minimum distance d. Start with the identity matrix of dimension (n−k) × (n−k). Then construct a list of vectors such that any d−2 of them are linearly independent. If at list size i we can add one more vector, not equal to the all-zero vector and different from all linear combinations of up to d−2 list vectors, then all d−1 combinations remain linearly independent; define n := i + 1. Take the largest value of n such that Σ_(j=0..d−2) C(n−1, j) < 2^(n−k); thus a linear code with these n, k and minimum distance ≥ d exists.

99 The parity check matrix property: for a code with minimum distance dmin = 2t+1, all error events of weight ≤ t give rise to a different syndrome. Reason: if not, then the sum of two such events, with total weight less than dmin, would give the syndrome 0; this contradicts the assumption that only codewords give syndrome 0 and have minimum weight ≥ dmin = 2t+1.

100 Reed-Solomon codes (CD, DVD). Structure: k information symbols of m bits each, plus n − k check symbols. Properties: minimum distance = n − k + 1 (in symbols); length n = 2^m − 1 symbols.

101 General remarks on RS codes. The general problem is the decoding: RS codes can be decoded using Euclid's algorithm or the Berlekamp-Massey algorithm.

102 Reed-Muller based codes (UMTS). Starting code m = 1: C = {00, 01, 10, 11} has minimum distance 2^(m−1) and 2^(m+1) codewords U of length 2^m. NEW code with 2^(m+2) codewords of length 2^(m+1): take (U,U) and (U,Ū), with Ū the complement of U. Distance[(U,U), (U,Ū)] = 2^m (why? convince yourself!); distance[(U,U), (V,V)] = 2 × 2^(m−1) = 2^m; distance[(U,Ū), (V,V̄)] = 2 × 2^(m−1) = 2^m (use the complement property).

103 Reed-Muller based codes (UMTS). Suppose we have a generator G for a certain m; the construction of G for m+1 is simple. Example: the slide shows G and dmin for m = 1, m = 2, m = 3, etc.

104 Reed-Muller based codes (UMTS): basis for the construction. Take 2 linear code generators G1 and G2, of length n and minimum distance D1 and D2, respectively. Then G = [G1 G1; 0 G2] has dmin = min{2·D1, D2}. Proof!
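A brute-force check of the claim on small, hypothetical generators (sketch):

```python
from itertools import product

def span(G):
    """All codewords generated by the rows of G over GF(2)."""
    n = len(G[0])
    return {tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                  for j in range(n))
            for coeffs in product([0, 1], repeat=len(G))}

def dmin(G):
    return min(sum(w) for w in span(G) if any(w))

def stacked(G1, G2):
    """Rows [G1 | G1] on top of rows [0 | G2], as on the slide."""
    n = len(G1[0])
    return [row + row for row in G1] + [[0] * n + row for row in G2]

G1 = [[1, 1, 0], [0, 1, 1]]   # dmin = 2
G2 = [[1, 1, 1]]              # dmin = 3
assert dmin(stacked(G1, G2)) == min(2 * dmin(G1), dmin(G2))   # = 3
```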

105 ISBN numbering code. For the ISBN numbers (a1, a2, …, a10) we have Σ_(i=1..10) i·a_i ≡ 0 (mod 11), where a special symbol X is used in case a10 = 10. A single transmission error gives a nonzero sum modulo 11 as a result, since 11 is a prime number. Note: check that the code can also detect a double transposition (interchange of 2 numbers).
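A sketch of the check, using the weight convention i·a_i from the text (the reversed-weight convention is also common and equivalent modulo 11):

```python
def isbn10_ok(isbn):
    """Valid iff sum(i * a_i, i = 1..10) = 0 mod 11; 'X' stands for 10."""
    digits = [10 if ch == "X" else int(ch) for ch in isbn if ch.isalnum()]
    return len(digits) == 10 and sum(
        i * a for i, a in enumerate(digits, start=1)) % 11 == 0

assert isbn10_ok("0-306-40615-2")        # a well-known valid ISBN-10
assert not isbn10_ok("0-306-40615-3")    # a single-digit error is detected
```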

106 Varshamov-Tenengolts codes. For the binary numbers (x1, x2, …, xn) we have Σ_(i=1..n) i·x_i ≡ 0 (mod n+1). A single transmission error in position i changes the sum by ±i modulo n+1; check that all these numbers are different, and thus we can correct 1 error! The cardinality of the code is ≈ 2^n/(n+1). Homework: compare the rate of the code with the rate of the Hamming code.
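A sketch of the VT check sum, counting the codewords of one small code:

```python
from itertools import product

def vt_syndrome(x):
    """Varshamov-Tenengolts check: sum(i * x_i, i = 1..n) mod (n + 1)."""
    return sum(i * xi for i, xi in enumerate(x, start=1)) % (len(x) + 1)

# All length-8 words with syndrome 0; the cardinality is close to 2^8 / 9.
code = [w for w in product([0, 1], repeat=8) if vt_syndrome(w) == 0]
print(len(code), 2 ** 8 / 9)
```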

107 Overview of the lecture: the position of channel coding in a communication chain; some channel models; burst error models; detection rules; intro to the coding problem and its performance; the class of linear codes; bounds on the minimum distance; examples of codes; why error correction?

108 Why error correction? Systems with errors can be made almost error free: CD and DVD would not work without RS codes.

109 A "multi" user application. ENCODING: two users transmit +1 or −1; suppose user 1 uses an error-correcting code at rate 1/2, while user 2 just sends his information at rate 1. CHANNEL: adds the two values, so the receiver observes +2, 0 or −2. DECODER: decodes the information for user 1 using an erasure decoding procedure; after decoding, the information from user 2 can be calculated. SUM RATE = 1.5 bit/transmission. NOTE: time division gives a rate of 1!

110 A "multi" user application (figure): user 1 (R = 1/2) and user 2 (R = 1) are added to give +2, 0 or −2; from user 1 to the receiver this acts as an erasure channel with erasure probability 1/2. The rate region plot shows the sum efficiency against the time-sharing line between (R1, R2) = (1, 0) and (0, 1).

111 Why error correction?

112 Why error correction? Suppose two correlated sources: X, and Y = X ⊕ N(oise). Transmit X with n bits; for Y transmit only the syndrome YHᵀ (n−k bits). The receiver adds XHᵀ to obtain YHᵀ ⊕ XHᵀ = NHᵀ, decodes N, and recovers Y = X ⊕ N. For n − k = nh(p) we transmitted n + nh(p) bits instead of 2n. Reference: Slepian and Wolf.

113 Why error correction? In ARQ systems, system collapse can be postponed! (Plot: throughput, from 100% down toward k/n %, versus channel error probability from 0 to 1.)

114 Combining error detection and correction. G1(X)·G2(X) generates a code with minimum distance D2; G1(X) generates a code with minimum distance D1 < D2. C(X) = I(X) G1(X) G2(X) = I'(X) G1(X). Decoding: step 1, correct a maximum of ⌊(D1 − 1)/2⌋ errors with G1(X); step 2, detect the remaining errors with G1(X)G2(X). Properties: 1. t ≤ ⌊(D1 − 1)/2⌋ errors are correctable; 2. up to t ≤ D2 − ⌊(D1 − 1)/2⌋ − 1 errors are detectable. Homework: construct some examples.

115 Combining error detection-correction. Example (length n = 63, generator polynomials in octal):
n = 63, k = 57, dmin = 3: G1 = 103
n = 63, k = 51, dmin = 5: G2 × G1 = G1 × 127
n = 63, k = 45, dmin = 7: G3 × G2 × G1 = G2 × G1 × 147

116 Example: Bluetooth. Three error correction schemes are defined for Bluetooth: a 1/3-rate FEC (a simple repetition code); a 2/3-rate FEC (a shortened (15,10) Hamming code); and an ARQ scheme for data (automatic retransmission request). The FEC schemes reduce the number of retransmissions. A CRC code decides whether a packet/header contains errors: transmit C(X) = I(X) g(X), receive R(X) = C(X) + E(X), and check whether R(X) modulo g(X) = 0.
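A sketch of the check R(X) mod g(X) = 0, with binary polynomials as coefficient lists (lowest degree first); the codeword below is C(X) = (1 + X²)(1 + X + X³) from the earlier example.

```python
def poly_mod2_rem(r, g):
    """Remainder of r(X) modulo g(X) over GF(2)."""
    r = list(r)
    for i in range(len(r) - len(g), -1, -1):
        if r[i + len(g) - 1]:            # eliminate the current top term
            for j, gj in enumerate(g):
                r[i + j] ^= gj
    return r[:len(g) - 1]

g = [1, 1, 0, 1]                 # g(X) = 1 + X + X^3
c = [1, 1, 1, 0, 0, 1, 0]        # C(X) = I(X) g(X): remainder must be 0
assert poly_mod2_rem(c, g) == [0, 0, 0]

c[4] ^= 1                        # a channel error E(X) = X^4
assert poly_mod2_rem(c, g) != [0, 0, 0]   # detected
```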

117 Code shortening (figure). Suppose that we have an RS code in systematic form of length n'. Case a: the generator takes the first k + (n'−k) columns; with the parity part, the minimum distance is n'−k. Case b: take the last k' rows, of length k' + n − k; with the parity part, the minimum distance is n'−k'.

118 Channel with insertions and deletions. Bad synchronization or clock recovery at the receiver causes insertions and deletions of bits. Problem: finding the start and end of messages.

119 Channel with insertions and deletions. Example: the code C on the slide, with four codewords and dmin = 3, corrects a single inversion error or a single deletion/insertion error. An insertion/deletion makes the word longer/shorter.

120 Channel with insertions and deletions: a flag pattern marks frame boundaries and must be avoided inside the frame; due to errors in the bit pattern, a spurious flag can appear inside the frame (insertion), or a real flag can be destroyed (deletion).

121 Channels with interference. Example (optical channel): the error probability depends on the symbols in the neighboring slots.

122 Channels with memory (example: recording): Yi = Xi + Xi−1 with Xi ∈ {+1, −1}, so Yi ∈ {+2, 0, −2}.

123 The password problem (diagrams). Passwords: the client ("I am: Bahram") sends Password = Honary; the server keeps Hash(Honary) in memory, calculates Hash(password) at each login, and compares. Biometrics: the same scheme, with a biometric measurement in place of the password.

124 Problem in authentication: passwords need to be exact; biometrics are only very similar.

125 The problem, approach A: secure sketch. Enroll: store the sketch BHᵀ (a syndrome). Reconstruction: from a noisy reading B' = B ⊕ N, calculate B'Hᵀ and then (B ⊕ B')Hᵀ = NHᵀ, decode N, and recover B = B' ⊕ N. Security: from NHᵀ there are 2^k possible vectors; the expected number of correct B is 2^k × |B|/2^n, and the probability of a correct guess is 2^(n−k)/|B| ≥ 2^(−k).

126 Authentication: secure storage, with errors. Enrollment: pick a random R, compute S = B ⊕ E(R), and store S and Hash(R). Condition: given S and Hash(R) it is hard to estimate B, R. Authentication: from a noisy biometric B', compute S ⊕ B' = B ⊕ E(R) ⊕ B' = E(R) ⊕ N, decode to obtain R', and compare Hash(R') with Hash(R).

127 Authentication: secure storage, with errors; the attack. Given S and Hash(R), guess R or B: B' = S ⊕ E(R') = B ⊕ E(R) ⊕ E(R'), or use SHᵀ = [B ⊕ E(R)]Hᵀ = BHᵀ (since E(R) is a codeword).

128 Binary entropy interpretation: let a binary sequence of length n contain pn ones; then, by the Stirling approximation, there are about 2^(nh(p)) such sequences, so we can specify each sequence with log2 2^(nh(p)) = n·h(p) bits. Homework: prove the approximation using ln N! ≈ N ln N for N large; use also log_a x = y ⇔ log_b x = y·log_b a.
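A numeric sanity check of the approximation (sketch): count the sequences with pn ones via the binomial coefficient and compare with n·h(p).

```python
from math import comb, log2

def h2(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

n, p = 1000, 0.11
exact = log2(comb(n, int(p * n)))   # log2 of the number of such sequences
approx = n * h2(p)
print(exact, approx)                # the two agree to leading order in n
```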

129 The binary entropy: h(p) = −p log2 p − (1−p) log2 (1−p). Note: h(p) = h(1−p).

