
1 Chapter 11 Error-Control Coding: Lecture edition by K. Heikkinen

2 Chapter 11 goals To understand the error-correcting codes in use, the theorems behind them and their principles –block codes, convolutional codes, etc.

3 Chapter 11 contents Introduction Discrete Memoryless Channels Linear Block Codes Cyclic Codes Maximum Likelihood Decoding of Convolutional Codes Trellis-Coded Modulation Coding for Compound-Error Channels

4 Introduction The goal is a cost-effective facility for transmitting information at a given rate and level of reliability and quality –determined by the signal energy per bit to noise power density ratio –achieved in practice via error-control coding Error-control methods Error-correcting codes

5 Discrete Memoryless Channels

6 Discrete Memoryless Channels A discrete memoryless channel (see fig. 11.1) is described by its set of transition probabilities –in the simplest form binary coding {0,1} is used, of which the BSC is a representative example –channel noise is modelled as an additive white Gaussian noise (AWGN) channel the two cases above correspond to so-called hard-decision decoding –other solutions exist, so-called soft-decision decoding
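To make the hard-decision picture concrete, here is a minimal sketch (not from the book) of a binary symmetric channel with crossover probability p; the bsc helper is a hypothetical name and simply flips each transmitted bit independently with probability p, so the receiver sees hard 0/1 decisions:

```python
import numpy as np

# Minimal sketch (illustrative, not from the book): a binary symmetric channel
# with crossover probability p, described by its transition probabilities
#   P(1|0) = P(0|1) = p,  P(0|0) = P(1|1) = 1 - p.
def bsc(bits, p, rng=np.random.default_rng(0)):
    """Flip each transmitted bit independently with probability p."""
    flips = rng.random(len(bits)) < p
    return np.bitwise_xor(np.asarray(bits), flips.astype(int))

tx = np.array([0, 1, 1, 0, 1, 0, 0, 1])
rx = bsc(tx, p=0.2)
print("sent    :", tx)
print("received:", rx)   # hard-decision output: each received symbol is already 0 or 1
```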

7 Linear Block Codes A code is said to be linear if any two code words in the code can be added in modulo-2 arithmetic to produce a third code word in the code A linear block code has n bits, of which k bits are always identical to the message sequence The remaining n-k bits are computed from the message bits in accordance with a prescribed encoding rule that determines the mathematical structure of the code –these bits are also called parity bits

8 Linear Block Codes Normally the code equations are written in matrix form (with a 1-by-k message vector) –P is the k-by-(n-k) coefficient matrix –I_k is the k-by-k identity matrix –G = [I_k P] is the k-by-n generator matrix Another matrix expresses the relationship between the message bits and the parity bits –H is the (n-k)-by-n parity-check matrix
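As a concrete sketch of the matrix form, the snippet below encodes a 1-by-k message with a systematic generator matrix G = [I_k P]; the P matrix used here is an arbitrary illustrative choice, not one taken from the book:

```python
import numpy as np

# Illustrative (6,3) example: P is an arbitrary 3-by-3 coefficient matrix
# chosen for demonstration, not taken from the book.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
k, n_minus_k = P.shape
G = np.hstack([np.eye(k, dtype=int), P])            # G = [I_k P], k-by-n
H = np.hstack([P.T, np.eye(n_minus_k, dtype=int)])  # H = [P^T I_{n-k}], (n-k)-by-n

m = np.array([1, 0, 1])        # 1-by-k message vector
c = m @ G % 2                  # code word: message bits followed by parity bits
print("code word:", c)
print("c H^T =", c @ H.T % 2)  # all zeros for a valid code word
```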

9 Linear Block Codes In syndrome decoding the generator matrix (G) is used for encoding at the transmitter and the parity-check matrix (H) at the receiver –if bits are corrupted, the received word is r = c + e, and the syndrome s = rH^T has two important properties the syndrome depends only on the error pattern, not on the transmitted code word all error patterns that differ by a code word have the same syndrome
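A minimal sketch of the first property, reusing the illustrative (6,3) code above: whichever code word is sent, the same error pattern produces the same syndrome s = rH^T:

```python
import numpy as np

# Sketch of syndrome computation for the illustrative (6,3) code above
# (P chosen for demonstration only, not the book's example).
P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

e = np.array([0, 0, 0, 0, 1, 0])              # a fixed single-bit error pattern
for m in ([0, 0, 0], [1, 0, 1], [1, 1, 1]):   # different transmitted messages
    c = np.array(m) @ G % 2
    r = (c + e) % 2                           # received word r = c + e
    s = r @ H.T % 2                           # syndrome s = r H^T
    print(m, "->", s)                         # same syndrome regardless of the code word
```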

10 Linear Block Codes The Hamming distance (and in particular the minimum distance) measures how much the code words differ There are 2^k code vectors, and the 2^n possible received vectors are partitioned into subsets that constitute the standard array of an (n,k) linear block code For each coset we pick a representative error pattern –the coset leaders are the most likely error patterns
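For the minimum distance, a brute-force sketch over all 2^k code vectors of the same illustrative (6,3) code; for a linear code this equals the minimum Hamming weight of the nonzero code words:

```python
from itertools import combinations, product
import numpy as np

# Sketch: find the minimum Hamming distance of the illustrative (6,3) code
# by comparing all pairs of its 2^k code words.
P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])

codewords = [tuple(np.array(m) @ G % 2) for m in product([0, 1], repeat=3)]
d_min = min(sum(a != b for a, b in zip(u, v))
            for u, v in combinations(codewords, 2))
print("number of code vectors (2^k):", len(codewords))
print("minimum Hamming distance:", d_min)
```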

11 Linear Block Codes Example: let H be the parity-check matrix whose columns are –(1110), (0101), (0011), (0001), (1000), (1111) –the code generator G gives us the following code words (c): 000000, 100101, 111010, 011111 –let us find n, k and n-k –what will we find if we compute Hc^T?
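A quick numerical check of this exercise, assuming the six 4-bit vectors are the columns of H: every listed code word then satisfies Hc^T = 0, and the dimensions give n = 6, n-k = 4 and k = 2 (matching the four code words):

```python
import numpy as np

# Checking the slide's exercise, assuming the six 4-bit vectors are the
# columns of the parity-check matrix H.
H = np.array([[1, 0, 0, 0, 1, 1],
              [1, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 0, 1],
              [0, 1, 1, 1, 0, 1]])
codewords = ["000000", "100101", "111010", "011111"]

print("n =", H.shape[1], " n-k =", H.shape[0], " k =", H.shape[1] - H.shape[0])
for cw in codewords:
    c = np.array([int(b) for b in cw])
    print(cw, "-> H c^T =", H @ c % 2)   # the zero vector for every valid code word
```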

12 Linear Block Codes Examples of (7,4) Hamming code words and error patterns

13 Cyclic Codes Cyclic codes form a subclass of linear block codes A binary code is said to be cyclic if it exhibits the two following properties –the sum of any two code words in the code is also a code word (linearity) this is why cyclic codes are linear block codes –any cyclic shift of a code word in the code is also a code word (cyclic property) expressed mathematically in polynomial notation
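A small sketch of the polynomial notation: writing a code word as c(x) = c0 + c1 x + ... + c_{n-1} x^{n-1}, one cyclic shift corresponds to x·c(x) mod (x^n + 1) over GF(2); the code word below is only an illustration:

```python
# Sketch of the polynomial view of a cyclic shift: a single cyclic shift of the
# coefficient list of c(x) is the same as computing x*c(x) mod (x^n + 1) over GF(2).
def cyclic_shift(bits):
    """One cyclic shift of the coefficient list [c0, c1, ..., c_{n-1}]."""
    return [bits[-1]] + bits[:-1]

c = [1, 1, 0, 1, 0, 0, 0]   # c(x) = 1 + x + x^3  (n = 7), an illustrative word
print(cyclic_shift(c))      # [0, 1, 1, 0, 1, 0, 0], i.e. x + x^2 + x^4 = x*c(x) mod (x^7 + 1)
```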

14 Cyclic Codes The generator polynomial plays a major role in the generation of cyclic codes Given the generator polynomial g(x) of an (n,k) cyclic code, its k shifted versions g(x), xg(x), ..., x^(k-1)g(x) give the rows of a generator matrix (G) The syndrome polynomial of the received word corresponds to the error polynomial
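A sketch of the syndrome-polynomial computation: the syndrome is the remainder of the received polynomial r(x) divided by g(x) over GF(2); the generator g(x) = 1 + x + x^3 and the received word below are illustrative choices, not the book's numbers:

```python
# Sketch: the syndrome polynomial is the remainder of the received polynomial
# r(x) divided by the generator polynomial g(x), computed over GF(2).
# Polynomials are lists of coefficients [c0, c1, ...], lowest degree first.
def gf2_mod(r, g):
    """Remainder of r(x) divided by g(x) with coefficients in GF(2)."""
    r = r[:]                                  # work on a copy
    while len(r) >= len(g):
        if r[-1] == 0:                        # drop a leading zero coefficient
            r.pop()
            continue
        shift = len(r) - len(g)
        for i, gi in enumerate(g):            # subtract (XOR) a shifted copy of g(x)
            r[shift + i] ^= gi
        r.pop()                               # the leading term has been cancelled
    return r

g = [1, 1, 0, 1]                # g(x) = 1 + x + x^3 (illustrative generator)
r = [1, 0, 1, 1, 0, 1, 0]       # an illustrative received polynomial r(x), degree < 7
print("syndrome polynomial coefficients:", gf2_mod(r, g))
```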

15 Cyclic Codes

16 Example: a (7,4) cyclic code with block length 7; let us find the polynomials that generate the code (see example 3 in the book) –find the code polynomials –find the generator matrix (G) and the parity-check matrix (H)
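A sketch of one way to carry this out, assuming the generator g(x) = 1 + x + x^3 (a degree n-k factor of x^7 + 1); the book's example 3 is not reproduced here, so the numbers are illustrative. The code polynomials are m(x)g(x) and the rows of a (non-systematic) G are the shifts g(x), xg(x), x^2g(x), x^3g(x):

```python
import numpy as np
from itertools import product

# Sketch for a (7,4) cyclic code, assuming the generator g(x) = 1 + x + x^3
# (a degree n-k = 3 divisor of x^7 + 1); not the book's worked example.
n, k = 7, 4
g = np.array([1, 1, 0, 1])                 # coefficients of g(x), lowest degree first

# Rows of a non-systematic generator matrix: g(x), x g(x), x^2 g(x), x^3 g(x)
G = np.zeros((k, n), dtype=int)
for i in range(k):
    G[i, i:i + len(g)] = g
print("G =\n", G)

# Code polynomials c(x) = m(x) g(x) for every 4-bit message polynomial m(x)
codewords = sorted(tuple(int(b) for b in np.array(m) @ G % 2)
                   for m in product([0, 1], repeat=k))
print(len(codewords), "code words, e.g.", codewords[1])
```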

17 Cyclic Codes Other notable cyclic codes –Cyclic redundancy check (CRC) codes –Bose-Chaudhuri-Hocquenghem (BCH) codes –Reed-Solomon codes

18 Convolutional Codes Convolutional codes operate on the message bits in a serial manner, which suits applications where the message arrives serially rather than in large blocks The encoder of a convolutional code can be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders
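A minimal sketch of such an encoder: a rate-1/2 machine with a 2-stage shift register, two modulo-2 adders with generator connections (111) and (101), and serialized adder outputs; these generators are an illustrative textbook choice, not necessarily the ones in the book's figures:

```python
# Sketch of a convolutional encoder as a finite-state machine: a rate-1/2
# encoder with a 2-stage shift register and generators (111) and (101);
# an illustrative choice, not necessarily the book's example.
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    state = [0, 0]                           # M-stage shift register (M = 2)
    out = []
    for b in bits + [0] * len(state):        # append zeros to flush the register
        window = [b] + state                 # current input plus register contents
        for g in gens:                       # one modulo-2 adder per generator
            out.append(sum(x & y for x, y in zip(g, window)) % 2)
        state = [b] + state[:-1]             # shift the register
    return out                               # multiplexed (serialized) adder outputs

msg = [1, 0, 1, 1]
print(conv_encode(msg))                      # 11 10 00 01 01 11
```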

19 Convolutional Codes Convolutional codes are portrayed in graphical form by using three different diagrams –Code Tree –Trellis –State Diagram

20 Maximum Likelihood Decoding of Convolutional Codes A log-likelihood function can be formed for a convolutional code; with hard decisions, maximizing it amounts to choosing the path with the smallest Hamming distance from the received sequence The book presents an example algorithm (Viterbi) –the Viterbi algorithm is a maximum-likelihood decoder, which is optimum for an AWGN channel (see fig. 11.17) initialisation step computation step final step
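A compact hard-decision sketch of the Viterbi algorithm for the illustrative rate-1/2 (111, 101) encoder above (not the book's fig. 11.17): per-state survivor paths are kept, and the path with the minimum accumulated Hamming distance to the received sequence is selected:

```python
# Sketch: hard-decision Viterbi decoding for the illustrative rate-1/2
# encoder with generators (111) and (101) sketched earlier.
GENS = ((1, 1, 1), (1, 0, 1))

def branch_output(bit, state):
    """Encoder output pair for a given input bit and shift-register state."""
    window = (bit,) + state
    return tuple(sum(x & y for x, y in zip(g, window)) % 2 for g in GENS)

def viterbi_decode(received, n_msg):
    # path metrics and survivor paths, indexed by shift-register state (s1, s2)
    metrics = {(0, 0): (0, [])}
    for t in range(n_msg + 2):                      # message bits plus 2 flushing zeros
        r = tuple(received[2 * t: 2 * t + 2])
        new = {}
        for state, (metric, path) in metrics.items():
            for b in (0, 1):
                out = branch_output(b, state)
                dist = metric + sum(x != y for x, y in zip(out, r))
                nxt = (b, state[0])                 # shifted register contents
                if nxt not in new or dist < new[nxt][0]:
                    new[nxt] = (dist, path + [b])   # keep the better survivor
        metrics = new
    best = min(metrics.values())                    # smallest accumulated Hamming distance
    return best[1][:n_msg]                          # drop the flushing zeros

rx = [1, 1,  1, 0,  0, 0,  0, 1,  0, 1,  1, 1]      # encoder output with no channel errors
print(viterbi_decode(rx, n_msg=4))                  # recovers [1, 0, 1, 1]
```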

21 Trellis-Coded Modulation Here coding is described as a process of imposing certain patterns on the transmitted signal Trellis-coded modulation has three features –The number of signal points in the constellation is larger than what the modulation format requires, allowing redundancy without sacrificing bandwidth –Convolutional coding is used to introduce a certain dependency between successive signal points –Soft-decision decoding is performed in the receiver

22 Coding for Compound-Error Channels Compound-error channels exhibit a mixture of independent and burst error statistics (e.g. PSTN channels, radio channels) Error-protection methods: automatic repeat request (ARQ) and forward error correction (FEC)

