EEE377 Lecture Notes 1
EEE436 DIGITAL COMMUNICATION
Coding
En. Mohd Nazri Mahmud
MPhil (Cambridge, UK), BEng (Essex, UK)
Room 2.14

EEE377 Lecture Notes 2
Error-Correcting Capability of the Convolutional Code
The error-correcting capability of a convolutional code is determined by its constraint length K = L + 1, where L is the number of message bits held in the shift register, and by its free distance d_free.
The constraint length, expressed in message bits, is the number of shifts over which a single message bit can influence the encoder output. In an encoder with an L-stage shift register, the memory of the encoder is L message bits, and K = L + 1 shifts are required for a message bit to enter the shift register and finally come out; hence the constraint length is K.
The constraint length determines the maximum free distance of a code.
The free distance is the minimum Hamming distance between any two codewords in the code.
A convolutional code can correct t errors if d_free > 2t.
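As a quick check of the last condition, the following short calculation (my own illustration, not from the slides) gives the largest correctable t for a given free distance:

def correctable_errors(d_free):
    # largest t satisfying d_free > 2t, i.e. t = floor((d_free - 1) / 2)
    return (d_free - 1) // 2

print(correctable_errors(5))   # 2, as found later for the K = 3 encoder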

EEE377 Lecture Notes 3
Error-correction
The free distance can be obtained from the state diagram by splitting node a into a_0 and a_1.

EEE377 Lecture Notes 4
Rules
1. A branch multiplies the signal at its input node by the transmittance characterising that branch.
2. A node with incoming branches sums the signals produced by all of those branches.
3. The signal at a node is applied equally to all the branches outgoing from that node.
4. The transfer function of the graph is the ratio of the output signal to the input signal.

EEE377 Lecture Notes 5
The exponent of D on a branch gives the Hamming weight of the encoder output corresponding to that branch. The exponent of L is always equal to one, since the length of each branch is one.
Let T(D, L) denote the transfer function of the signal flow graph. Using rules 1, 2 and 3, we obtain the following input-output relations:
b = D²L a₀ + L c
c = DL b + DL d
d = DL b + DL d
a₁ = D²L c

EEE377 Lecture Notes 6
Solving this set of equations for the ratio a₁/a₀, the transfer function of the graph is
T(D, L) = D⁵L³ / [1 − DL(1 + L)]
Using the binomial expansion as a power series (with L = 1), the distance transfer function is
T(D, 1) = D⁵ + 2D⁶ + 4D⁷ + …
Since the free distance is the minimum Hamming distance between any two codewords in the code, and the distance transfer function T(D, 1) enumerates the number of codewords at each distance, the exponent of the first term in the expansion gives the free distance, d_free = 5.
Therefore the (2,1,2) convolutional encoder, with constraint length K = 3, can correct at most 2 errors.
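For readers who want to reproduce this algebra, here is a small sketch (my own addition, not part of the original notes) that solves the flow-graph equations from slide 5 symbolically with SymPy and expands T(D, 1) as a power series:

import sympy as sp

D, L, a0, a1, b, c, d = sp.symbols('D L a0 a1 b c d')

# Input-output relations of the split state diagram (slide 5)
eqs = [sp.Eq(b, D**2 * L * a0 + L * c),
       sp.Eq(c, D * L * b + D * L * d),
       sp.Eq(d, D * L * b + D * L * d),
       sp.Eq(a1, D**2 * L * c)]

sol = sp.solve(eqs, [b, c, d, a1], dict=True)[0]
T = sp.simplify(sol[a1] / a0)
print(T)                                   # D**5*L**3 / (1 - D*L*(1 + L)), up to rearrangement
print(sp.series(T.subs(L, 1), D, 0, 8))    # D**5 + 2*D**6 + 4*D**7 + O(D**8)

The exponent of the lowest-order term in the printed series is 5, confirming d_free = 5.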

EEE377 Lecture Notes 7
Error-correction
[Table: constraint length K versus maximum free distance d_free]

EEE377 Lecture Notes 8
Turbo Codes
A relatively new class of convolutional codes, first introduced in 1993.
A basic turbo encoder is a recursive systematic encoder that employs two convolutional encoders (recursive systematic convolutional, or RSC) in parallel, where the second encoder is preceded by a pseudorandom interleaver that permutes the symbol sequence.
Turbo codes are also known as Parallel Concatenated Codes (PCC).
[Block diagram: message bits feed RSC encoder 1 directly and RSC encoder 2 through the interleaver; the systematic bits x_k and the parity bits y_1k and y_2k are punctured and multiplexed for transmission]

EEE377 Lecture Notes 9
Turbo Codes
The input data stream is applied directly to encoder 1, and a pseudorandomly reordered version of the same data stream is applied to encoder 2.
Both encoders produce parity bits. The parity bits and the original bit stream are multiplexed and then transmitted.
The block size is determined by the size of the interleaver (for example, 65,536 is common).
Puncturing is applied to remove some parity bits so that the code rate stays at 1/2, for example by eliminating the odd parity bits from the first RSC and the even parity bits from the second RSC.
[Block diagram as on the previous slide]
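A minimal sketch of the puncture-and-multiplex step (my own illustration; the exact odd/even indexing convention used on the slide is an assumption):

def puncture_and_mux(systematic, parity1, parity2):
    # Keep parity bits from encoder 1 at even positions and from encoder 2
    # at odd positions, so each message bit carries exactly one parity bit
    # and the overall code rate stays at 1/2.
    out = []
    for k, x in enumerate(systematic):
        out.append(x)
        out.append(parity1[k] if k % 2 == 0 else parity2[k])
    return out

print(puncture_and_mux([1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]))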

EEE377 Lecture Notes 10
RSC encoder for Turbo encoding

EEE377 Lecture Notes 11
RSC encoder for Turbo encoding
[Figure: non-recursive encoder vs. recursive encoder]
The recursive encoder produces more 1's in its output, which gives better error performance.
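A sketch of a rate-1/2 RSC encoder in code, assuming the common feedback polynomial 1 + D + D² and feedforward polynomial 1 + D² (this particular generator pair is my assumption, not necessarily the circuit drawn on the slide):

def rsc_encode(bits):
    # Shift register holds the two previous feedback bits a_{k-1}, a_{k-2}
    s1 = s2 = 0
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2        # recursion: feedback taps 1 + D + D^2
        p = a ^ s2             # parity output: feedforward taps 1 + D^2
        systematic.append(u)   # systematic output is the input bit itself
        parity.append(p)
        s2, s1 = s1, a         # shift the register
    return systematic, parity

print(rsc_encode([1, 0, 1, 1]))   # ([1, 0, 1, 1], [1, 1, 0, 0])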

EEE377 Lecture Notes 12
Turbo decoding
The turbo decoder consists of two maximum a posteriori (MAP) decoders and a feedback path.
Decoding operates on the noisy versions of the systematic bits and the two sets of parity bits, in two decoding stages, to produce an estimate of the original message bits.
The first decoder takes the information from the received signal and calculates an a posteriori probability (APP) value.
This value is then used as the a priori probability value for the second decoder.
The output is then fed back to the first decoder, where the process repeats in an iterative fashion, with each iteration producing more refined estimates.
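The iterative exchange described above can be sketched at a high level as follows. Here map_decode, interleave and deinterleave are hypothetical helpers standing in for a real BCJR/MAP decoder and the interleaver; their signatures are assumptions made only for illustration:

def turbo_decode(llr_sys, llr_par1, llr_par2,
                 map_decode, interleave, deinterleave, iterations=8):
    # Start with no prior knowledge about the message bits
    apriori = [0.0] * len(llr_sys)
    for _ in range(iterations):
        # Decoder 1 uses the systematic bits and the first parity stream
        ext1 = map_decode(llr_sys, llr_par1, apriori)
        # Decoder 2 sees the interleaved systematic bits and the second parity stream
        ext2 = map_decode(interleave(llr_sys), llr_par2, interleave(ext1))
        # Decoder 2's extrinsic output, deinterleaved, becomes decoder 1's prior
        apriori = deinterleave(ext2)
    # Hard decision on the final total log-likelihood ratio
    return [1 if llr_sys[k] + ext1[k] + apriori[k] > 0 else 0
            for k in range(len(llr_sys))]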

EEE377 Lecture Notes 13
Turbo decoding uses the BCJR algorithm
The BCJR algorithm (Bahl, Cocke, Jelinek and Raviv, 1974) is a maximum a posteriori probability (MAP) decoder that minimises the bit errors by estimating the a posteriori probabilities of the individual bits in a codeword.
It takes into account the recursive character of the RSC codes and computes a log-likelihood ratio to estimate the APP of each bit.
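In symbols (this is the standard textbook definition rather than a formula stated on the slide), the a posteriori log-likelihood ratio computed for each message bit u_k is

L(u_k) = ln [ P(u_k = 1 | received sequence) / P(u_k = 0 | received sequence) ]

and the hard decision is u_k = 1 when L(u_k) > 0, and u_k = 0 otherwise.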

EEE377 Lecture Notes 14
Low Density Parity Check (LDPC) codes
An LDPC code is specified in terms of its parity-check matrix H, which has the following structural properties:
i) Each row contains ρ 1's.
ii) Each column contains γ 1's.
iii) The number of 1's in common between any two columns is no greater than 1, i.e. λ = 0 or 1.
iv) Both ρ and γ are small compared with the length of the code.
LDPC codes are written in the form (n, γ, ρ).
H is said to be a low-density parity-check matrix; it has constant row and column weights (ρ and γ).
Density of H = total number of 1's divided by the total number of entries in H.
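These properties are easy to check mechanically. The sketch below (my own illustration, using NumPy) verifies the row weights, column weights, pairwise column overlap and density of a candidate 0/1 matrix H:

import numpy as np

def check_ldpc_structure(H, rho, gamma):
    H = np.asarray(H)
    rows_ok = bool((H.sum(axis=1) == rho).all())      # each row has rho 1's
    cols_ok = bool((H.sum(axis=0) == gamma).all())    # each column has gamma 1's
    # lambda: number of 1's any two distinct columns have in common must be 0 or 1
    overlaps = H.T @ H
    np.fill_diagonal(overlaps, 0)
    overlap_ok = bool((overlaps <= 1).all())
    density = H.sum() / H.size                        # total 1's / total entries
    return rows_ok, cols_ok, overlap_ok, density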

EEE377 Lecture Notes 15
Low Density Parity Check (LDPC) codes
Example: the (15, 4, 4) LDPC code.
Each row contains ρ = 4 1's.
Each column contains γ = 4 1's.
The number of 1's in common between any two columns is no greater than 1, i.e. λ = 0 or 1.
Both ρ and γ are small compared with the length of the code.
Density = 4/15 = 0.267.

EEE377 Lecture Notes 16
Low Density Parity Check (LDPC) codes – Constructing H
For a given choice of ρ and γ, form a kγ-by-kρ matrix H (where k is a positive integer > 1) that consists of γ k-by-kρ submatrices H_1, H_2, …, H_γ.
Each row of a submatrix has ρ 1's, and each column of a submatrix contains a single 1; therefore each submatrix has a total of kρ 1's.
Construct H_1 by placing the 1's appropriately: the i-th row of H_1 contains all its ρ 1's in columns (i−1)ρ+1 to iρ.
The other submatrices are merely column permutations of H_1.

EEE377 Lecture Notes 17
Low Density Parity Check (LDPC) codes – Example: Constructing H
Choose ρ = 4, γ = 3 and k = 5.
Form a kγ-by-kρ (15-by-20) matrix H that consists of γ = 3 k-by-kρ (5-by-20) submatrices H_1, H_2, H_3.
Each row of a submatrix has ρ = 4 1's, and each column of a submatrix contains a single 1; therefore each submatrix has a total of kρ = 20 1's.
Construct H_1 by placing the 1's appropriately: the i-th row of H_1 contains all its ρ = 4 1's in columns (i−1)ρ+1 to iρ.
The other submatrices are merely column permutations of H_1.

EEE377 Lecture Notes 18
Low Density Parity Check (LDPC) codes – Example: Constructing H for the (20, 3, 4) LDPC code
[Figure: the 15-by-20 parity-check matrix H]
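The matrix itself is not reproduced here, but the construction described on the previous two slides can be sketched as follows (random column permutations are used as one possible choice; the slide's specific permutations, and whether they preserve the λ ≤ 1 property, are not known from this transcript):

import numpy as np

def construct_H(rho, gamma, k, seed=0):
    rng = np.random.default_rng(seed)
    n = k * rho
    # H_1: row i has its rho 1's in columns (i-1)*rho+1 .. i*rho (1-based)
    H1 = np.zeros((k, n), dtype=int)
    for i in range(k):
        H1[i, i * rho:(i + 1) * rho] = 1
    # The remaining submatrices are column permutations of H_1
    blocks = [H1] + [H1[:, rng.permutation(n)] for _ in range(gamma - 1)]
    return np.vstack(blocks)

H = construct_H(rho=4, gamma=3, k=5)                        # 15-by-20, as in the example
print(H.shape, H.sum(axis=1).min(), H.sum(axis=0).min())    # (15, 20), row weight 4, column weight 3

The result can be fed to the check_ldpc_structure sketch given earlier; random permutations may occasionally violate the λ ≤ 1 condition, which is why practical constructions choose the permutations carefully.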

EEE377 Lecture Notes 19
Construction of Low Density Parity Check (LDPC) codes
There are many techniques for constructing LDPC codes. Constructing LDPC codes with short blocks is easier than constructing long ones; for large block sizes, LDPC codes are commonly constructed by first studying the behaviour of decoders.
Among the techniques are pseudo-random techniques, combinatorial approaches and finite geometry. These are beyond the scope of this lecture.
For this lecture, we see how short LDPC codes are constructed from a given parity-check matrix, for example a (6,3) linear LDPC code given by the following H.
[H matrix not reproduced in this transcript]

EEE377 Lecture Notes 20
Construction of Low Density Parity Check (LDPC) codes
For example, a (6,3) linear LDPC code is given by the following H.
[H matrix not reproduced in this transcript]
The 8 codewords can be obtained by putting the parity-check matrix into the systematic form H = [Pᵀ | I_{n-k}], where P is the coefficient matrix and I_{n-k} is the identity matrix.
The generator matrix is G = [I_k | P] (here k = n − k = 3).
At the receiver, H = [Pᵀ | I_{n-k}] is used to check the error syndrome.
Exercise: Generate the codeword for m = 001 and show how the receiver performs the error checking.
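A sketch of the exercise in code. The actual H from the slide is not reproduced above, so the coefficient matrix P below is a hypothetical example chosen only to illustrate the procedure:

import numpy as np

# Hypothetical coefficient matrix P (the slide's actual matrix is not shown here)
P = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
I3 = np.eye(3, dtype=int)

G = np.hstack([I3, P])        # G = [I_k | P]
H = np.hstack([P.T, I3])      # H = [P^T | I_{n-k}]

m = np.array([0, 0, 1])       # message from the exercise
c = m @ G % 2                 # codeword = m G (mod 2)
print("codeword:", c)         # -> [0 0 1 1 1 0] for this particular P

r = c.copy()
r[4] ^= 1                     # simulate a single channel error in position 5
s = r @ H.T % 2               # receiver computes the syndrome s = r H^T
print("syndrome:", s)         # non-zero syndrome flags the error

A zero syndrome means the received word is accepted as a valid codeword; a non-zero syndrome equals the column of H at the error position, which is how the receiver locates a single error in the exercise.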