Slide 1: EEE436 DIGITAL COMMUNICATION – Coding
En. Mohd Nazri Mahmud, MPhil (Cambridge, UK), BEng (Essex, UK)
nazriee@eng.usm.my, Room 2.14

Slide 2: Turbo Codes
A relatively new class of convolutional codes, first introduced in 1993.
A basic turbo encoder is a recursive systematic encoder that employs two recursive systematic convolutional (RSC) encoders in parallel, where the second encoder is preceded by a pseudorandom interleaver that permutes the symbol sequence.
Turbo codes are also known as parallel concatenated codes (PCC).
[Block diagram: the message bits feed RSC encoder 1 directly and RSC encoder 2 via the interleaver; the systematic bits x_k and the parity bits y_1k and y_2k pass through the puncture & MUX stage to the transmitter.]

Slide 3: Turbo Codes
The input data stream is applied directly to encoder 1, and a pseudorandomly reordered version of the same stream is applied to encoder 2.
Both encoders produce parity bits. The parity bits and the original bit stream are multiplexed and then transmitted.
The block size is determined by the size of the interleaver (for example, 65,536 is common).
Puncturing is applied to remove some parity bits and maintain the data rate, for example by eliminating the odd parity bits from the first RSC and the even parity bits from the second RSC.
[Block diagram: as on the previous slide – message bits, RSC encoder 1, interleaver, RSC encoder 2, systematic bits x_k, parity bits y_1k and y_2k, puncture & MUX, to transmitter.]
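The puncture & MUX step can be pictured with a short Python sketch. It is only an illustration of the alternating pattern described above; the bit values and the exact odd/even convention (1-based positions) are assumptions, not taken from the slides.

# A minimal sketch of the puncture-and-multiplex step, assuming 1-based
# odd/even positions: keep even-position parity bits from RSC 1 and
# odd-position parity bits from RSC 2, and always send the systematic bit.
def puncture_and_mux(x, y1, y2):
    """Punctured multiplexing: rate 1/2 instead of the unpunctured 1/3."""
    out = []
    for k in range(len(x)):
        out.append(x[k])                 # systematic bit is always sent
        if (k + 1) % 2 == 0:             # even position: parity from RSC 1
            out.append(y1[k])
        else:                            # odd position: parity from RSC 2
            out.append(y2[k])
    return out

# Example: 4 message bits -> 8 transmitted bits (rate 1/2)
x  = [1, 0, 1, 1]     # systematic bits x_k
y1 = [1, 1, 0, 0]     # parity bits y_1k from RSC encoder 1
y2 = [0, 1, 1, 0]     # parity bits y_2k from RSC encoder 2
print(puncture_and_mux(x, y1, y2))       # [1, 0, 0, 1, 1, 1, 1, 0]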

Slide 4: RSC encoder for Turbo encoding

Slide 5: RSC encoder for Turbo encoding
[Figure: a non-recursive convolutional encoder and a recursive (RSC) encoder driven by the same input; the recursive encoder's output sequence contains more 1's.]
An output with more 1's from the recursive encoder gives better error performance.
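The "more 1's" behaviour can be checked with a small sketch. The generator polynomials g1 = 1 + D + D^2 (feedback) and g2 = 1 + D^2 (feedforward) below are assumed for illustration and are not necessarily the circuit drawn on the slide.

# Compare the parity output of a non-recursive (feedforward) encoder with
# that of an RSC encoder for the same weight-1 (impulse) input.
def nonrecursive_parity(u):
    """Feedforward parity v_k = u_k XOR u_{k-2} (generator 1 + D^2)."""
    v, mem = [], [0, 0]                  # mem = [u_{k-1}, u_{k-2}]
    for bit in u:
        v.append(bit ^ mem[1])
        mem = [bit, mem[0]]
    return v

def rsc_parity(u):
    """Recursive parity: feedback 1 + D + D^2, feedforward 1 + D^2,
    i.e. a_k = u_k XOR a_{k-1} XOR a_{k-2},  v_k = a_k XOR a_{k-2}."""
    v, mem = [], [0, 0]                  # mem = [a_{k-1}, a_{k-2}]
    for bit in u:
        a = bit ^ mem[0] ^ mem[1]
        v.append(a ^ mem[1])
        mem = [a, mem[0]]
    return v

u = [1] + [0] * 9                        # impulse input (weight-1 message)
print(nonrecursive_parity(u))            # [1, 0, 1, 0, 0, ...]        -> 2 ones, then dies out
print(rsc_parity(u))                     # [1, 1, 1, 0, 1, 1, 0, ...]  -> keeps producing 1's

For the weight-1 input the feedforward encoder's parity response dies out once the memory is flushed, while the recursive encoder keeps producing 1's, which is the behaviour the slide points to.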

Slide 6: Turbo decoding
A turbo decoder consists of two maximum a posteriori (MAP) decoders and a feedback path.
Decoding operates on the noisy versions of the systematic bits and the two sets of parity bits, in two decoding stages, to produce an estimate of the original message bits.
The first decoder takes the information from the received signal and calculates the a posteriori probability (APP) value.
This value is then used as the a priori probability for the second decoder.
The output is then fed back to the first decoder, and the process repeats in an iterative fashion, with each iteration producing more refined estimates.

Slide 7: Turbo decoding uses the BCJR algorithm
The BCJR (Bahl, Cocke, Jelinek and Raviv, 1974) algorithm is a maximum a posteriori probability (MAP) decoder that minimizes the bit errors by estimating the a posteriori probabilities of the individual bits in a code word.
It takes into account the recursive character of the RSC codes and computes a log-likelihood ratio to estimate the APP of each bit.
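The slide does not spell out the log-likelihood ratio, so the standard form is given here for reference: for each message bit u_k and received sequence y, the MAP (BCJR) decoder computes

\[
L(u_k) \;=\; \ln\frac{P(u_k = 1 \mid \mathbf{y})}{P(u_k = 0 \mid \mathbf{y})},
\qquad
\hat{u}_k =
\begin{cases}
1, & L(u_k) \ge 0\\
0, & L(u_k) < 0
\end{cases}
\]

so the sign of L(u_k) gives the hard decision and its magnitude gives the reliability information exchanged between the two decoders.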

Slide 8: Low Density Parity Check (LDPC) codes
An LDPC code is specified in terms of its parity-check matrix H, which has the following structural properties:
i) Each row contains ρ 1's.
ii) Each column contains γ 1's.
iii) The number of 1's in common between any two columns is no greater than 1, i.e. λ = 0 or 1.
iv) Both ρ and γ are small compared with the length of the code.
An LDPC code is written in the form (n, γ, ρ).
H is said to be a low-density parity-check matrix.
H has constant row and column weights (ρ and γ).
Density of H = total number of 1's divided by the total number of entries in H.

Slide 9: Low Density Parity Check (LDPC) codes
Example: a (15, 4, 4) LDPC code.
Each row contains ρ = 4 1's.
Each column contains γ = 4 1's.
The number of 1's in common between any two columns is no greater than 1, i.e. λ = 0 or 1.
Both ρ and γ are small compared with the length of the code.
Density = 4/15 = 0.267.
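These structural properties are easy to check mechanically. The sketch below uses a small hypothetical (6, 2, 3) toy matrix, not the slide's 15-column H (which appears only as an image), purely to exercise the checks.

# Check row weights, column weights, the maximum column overlap (lambda)
# and the density of a candidate parity-check matrix.
import numpy as np

def ldpc_properties(H):
    rho_per_row   = H.sum(axis=1)        # number of 1's in each row
    gamma_per_col = H.sum(axis=0)        # number of 1's in each column
    overlaps = H.T @ H                   # (i, j) entry = 1's shared by columns i and j
    np.fill_diagonal(overlaps, 0)
    lam = overlaps.max()                 # largest overlap between two distinct columns
    density = H.sum() / H.size
    return rho_per_row, gamma_per_col, lam, density

# Hypothetical toy (6, 2, 3) matrix: row weight 3, column weight 2, lambda = 1.
# (Its density, 0.5, is high only because the matrix is tiny.)
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
print(ldpc_properties(H))                # rows all 3, columns all 2, lambda = 1, density 0.5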

Slide 10: Low Density Parity Check (LDPC) codes – Constructing H
For a given choice of ρ and γ, form a kγ-by-kρ matrix H (where k is a positive integer > 1) that consists of γ submatrices H1, H2, ..., Hγ, each of size k-by-kρ.
Each row of a submatrix contains ρ 1's and each column of a submatrix contains a single 1, so each submatrix has a total of kρ 1's.
Based on this, construct H1 by placing the 1's appropriately: the ith row of H1 contains all of its ρ 1's in columns (i-1)ρ+1 to iρ.
The other submatrices are merely column permutations of H1.

Slide 11: Low Density Parity Check (LDPC) codes – Example: Constructing H
Choose ρ = 4, γ = 3 and k = 5.
Form a kγ-by-kρ (15-by-20) matrix H that consists of γ = 3 submatrices H1, H2, H3, each of size k-by-kρ (5-by-20).
Each row of a submatrix contains ρ = 4 1's and each column of a submatrix contains a single 1, so each submatrix has a total of kρ = 20 1's.
Based on this, construct H1 by placing the 1's appropriately: the ith row of H1 contains all of its ρ = 4 1's in columns (i-1)ρ+1 to iρ.
The other submatrices are merely column permutations of H1, as sketched below.
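A minimal sketch of this construction for ρ = 4, γ = 3, k = 5 follows. The column permutations used for H2 and H3 are random placeholders chosen here only for illustration; Gallager-style constructions pick them carefully so that property (iii) (λ ≤ 1) also holds, which random permutations do not guarantee.

# Build the 15-by-20 matrix H by stacking H1 with two column permutations of H1.
import numpy as np

rho, gamma, k = 4, 3, 5
n = k * rho                                   # 20 columns

# H1: row i has its rho 1's in columns (i-1)*rho + 1 ... i*rho (1-based)
H1 = np.zeros((k, n), dtype=int)
for i in range(k):
    H1[i, i * rho:(i + 1) * rho] = 1

rng = np.random.default_rng(0)
submatrices = [H1] + [H1[:, rng.permutation(n)] for _ in range(gamma - 1)]
H = np.vstack(submatrices)                    # k*gamma = 15 rows, k*rho = 20 columns

print(H.shape)                                # (15, 20)
print(H.sum(axis=1))                          # every row weight is rho = 4
print(H.sum(axis=0))                          # every column weight is gamma = 3
print(H.sum() / H.size)                       # density = 60/300 = 0.2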

Slide 12: Low Density Parity Check (LDPC) codes – Example: Constructing H for a (20, 3, 4) LDPC code

Slide 13: Construction of Low Density Parity Check (LDPC) codes
There are many techniques for constructing LDPC codes.
Constructing LDPC codes with short blocks is easier than constructing long ones.
For large block sizes, LDPC codes are commonly constructed by first studying the behaviour of decoders. Among the techniques are pseudo-random techniques, combinatorial approaches and finite geometry. These are beyond the scope of this lecture.
For this lecture, we see how short LDPC codes are constructed from a given parity-check matrix, for example a (6,3) linear code defined by the matrix H shown on the slide.

Slide 14: Construction of Low Density Parity Check (LDPC) codes
For example, consider the (6,3) linear code given by the parity-check matrix H shown on the slide.
The 8 codewords can be obtained by putting the parity-check matrix into the systematic form H = [P^T | I_{n-k}], where P is the coefficient matrix and I_{n-k} is the identity matrix.
The generator matrix is then G = [I_k | P].
At the receiver, H = [P^T | I_{n-k}] is used to compute the error syndrome.
Exercise: Generate the codeword for m = 001 and show how the receiver performs the error checking. A worked sketch follows.
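Because the slide's H appears only as an image, the coefficient matrix P used below is a stand-in example rather than the slide's actual matrix; the encoding and syndrome-checking procedure is the same either way.

# Systematic encoding and syndrome checking for a (6,3) code, using an
# assumed coefficient matrix P (not the slide's actual matrix).
import numpy as np

P = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])                    # assumed example coefficient matrix
I3 = np.eye(3, dtype=int)

G = np.hstack([I3, P])                       # G = [ I_k   | P ]
H = np.hstack([P.T, I3])                     # H = [ P^T | I_{n-k} ]

m = np.array([0, 0, 1])                      # message bits m = 001
c = m @ G % 2                                # codeword for this P: [0 0 1 1 1 0]
print(c)

# Receiver side: a zero syndrome means no detectable error
print(H @ c % 2)                             # [0 0 0]

r = c.copy(); r[1] ^= 1                      # flip one bit to emulate a channel error
print(H @ r % 2)                             # non-zero -> error detected; for a single
                                             # error the syndrome equals the column of H
                                             # at the error position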

