Generalized Communication System: Error Control Coding Occurs in the Right Column.

Message bits are encoded into a codeword: the message bits plus parity bits.

For example, the parity-check matrix H here has three parity checks on the 6 codeword bits:
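The slide’s own matrix is not reproduced in the transcript; as a stand-in, the following minimal Python/numpy sketch uses a hypothetical 3 x 6 parity-check matrix (names and values are mine, not the slide’s) and verifies a codeword by checking that the syndrome H·c mod 2 is the zero vector.

    import numpy as np

    # Hypothetical 3x6 parity-check matrix: 3 parity checks on the 6 codeword bits.
    # (Illustrative only; the slide's actual H is not reproduced in the transcript.)
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    # Systematic codeword for this H: message bits [1, 0, 1] followed by the
    # three parity bits that satisfy each check.
    c = np.array([1, 0, 1, 1, 1, 0])

    syndrome = H.dot(c) % 2    # all-zero syndrome <=> every parity check is satisfied
    print(syndrome)            # [0 0 0]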

Robert Gallager, 1963: Low-Density Parity-Check Codes. Introduced all the key concepts: 1.) low density of H, 2.) tree-based “hard” bit decoding, 3.) probabilistic decoding from soft demodulation, more precisely, from the a posteriori probabilities. The amount of computation required led to these codes being forgotten, until: David MacKay: “Good Error-Correcting Codes Based on Very Sparse Matrices.” IEEE Trans. Info. Theory, 1999. D. MacKay and R. M. Neal: “Near Shannon Limit Performance of Low Density Parity Check Codes.” Electronics Letters, 1996.

S.-Y. Chung, G. David Forney, Jr., Thomas Richardson and R. Urbanke: “On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit.” IEEE Comm. Letters, Feb. 2001. But(!) the block length (N) of the code used was 10^7. The research on LDPC codes since then has been work to get results close to this, but with reasonable block length and more efficient decoding.

Construction Methods for LDPC Codes: 1.) finite geometry methods (see Lin and Costello’s text); 2.) consult the recent literature!; 3.) Gallager’s original random construction: start with L rows whose ones are staggered in a band across the columns.

Then “stack” column permutations of this base block on top of itself, as in the sketch below:
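Since the slide’s figure is not reproduced, here is a minimal Python/numpy sketch of the construction as described above (function and parameter names are mine): a base block of L staggered rows, with wc - 1 random column permutations of that block stacked beneath it.

    import numpy as np

    def gallager_H(L, wr, wc, rng=None):
        # Base block: L rows, each checking wr consecutive bits ("staggered").
        rng = np.random.default_rng() if rng is None else rng
        n = L * wr                                   # code length
        base = np.zeros((L, n), dtype=int)
        for i in range(L):
            base[i, i * wr:(i + 1) * wr] = 1

        # Stack wc - 1 random column permutations of the base block beneath it.
        blocks = [base]
        for _ in range(wc - 1):
            blocks.append(base[:, rng.permutation(n)])
        return np.vstack(blocks)

    H = gallager_H(L=4, wr=3, wc=3)
    print(H.shape)        # (12, 12): wc*L checks on L*wr bits, every column has weight wc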

Other methods of modifying H to obtain higher girth exist, such as deleting a row or column. See Lin and Costello’s text for further information.

Decoding methods considered: 1.) hard bit flip decoding, 2.) soft bit flip decoding, 3.) a modification of the sum-product algorithm for the SBG8 channel. 1.’) Hard bit flip decoding can be summarized as “voting on which bit(s) are bad, by those checks which indicate an error,” and then flipping bits on the principle “worst first.” Consider the following part of an H matrix:
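The slide’s matrix fragment is not reproduced in the transcript; a fragment consistent with the voting example below (further rows and columns omitted) might look like:

            b1  b2  b3  b4  b5  b6  b7  ...
        C1   1   1   1   0   0   0   0  ...
        C2   1   0   0   1   1   0   0  ...
        C3   1   0   0   0   0   1   1  ...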

If checks C1, C2 and C3 indicate an error (i.e. incorrect parity in the bits they check), then C1 “votes” that bits b1, b2, and b3 are in error, C2 votes that b1, b4, and b5 are in error, and so on. After the voting is done, bit b1 will have received 3 votes and the other bits only 1 vote each, so bit b1 is flipped to its opposite value. If the resulting estimate is still not a codeword, the process is iterated up to some fixed number of times.
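A minimal Python/numpy sketch of this voting, worst-first procedure (an illustration of the idea described above, not the author’s exact implementation):

    import numpy as np

    def hard_bit_flip_decode(H, y, max_iters=20):
        # H: (m, n) parity-check matrix over GF(2); y: hard-demodulated bits.
        c = y.copy()
        for _ in range(max_iters):
            syndrome = H.dot(c) % 2              # 1 = that check fails
            if not syndrome.any():
                return c                         # all checks satisfied: a codeword
            votes = syndrome.dot(H)              # each failed check votes for its bits
            c[votes == votes.max()] ^= 1         # flip the worst bit(s) first
        return c                                 # give up after max_iters iterations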

The results gave surprisingly good performance. The sparsity of H matters because it concentrates the votes: if H were dense, most rows (checks) of H would check a large proportion of the bits, so many rows would indicate an error and most bits would receive roughly the same number of votes. 2.’) Soft bit flip decoding was an attempt by the author to extend the previous idea to the case of soft demodulation with 8 output levels. The problems in adapting hard bit flip are: (a) how to assign “votes to be in error” based on a bit’s current estimate, and (b) to what level one should “flip” the worst bits.

The author used a “linear weighting” scheme, in which the votes were averaged over the number of checks (rows of H) involved.

The question now is how to flip the worst bit, i.e. the one(s) with the most votes-to-be-in-error. The scheme tried was to average the total number of votes by the column weight, and then adjust the level of the bit(s) with the largest averaged count.
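Because the slide’s weighting and flipping formulas are not reproduced, the following Python sketch is only one plausible reading of the scheme described above (8-level SBG8 outputs, linearly weighted votes, vote totals averaged by column weight, and an assumed mirror-the-level flip); it is not the author’s actual algorithm.

    import numpy as np

    def soft_bit_flip_decode(H, levels8, max_iters=20):
        # levels8: 8-level soft demodulator outputs in 0..7
        # (assumed convention: 0..3 lean toward bit 0, 4..7 toward bit 1).
        levels = levels8.astype(float).copy()
        col_weight = H.sum(axis=0)
        for _ in range(max_iters):
            bits = (levels >= 4).astype(int)           # current hard decisions
            syndrome = H.dot(bits) % 2
            if not syndrome.any():
                return bits
            # Assumed "linear weighting": a failed check votes more strongly for
            # bits whose level sits close to the 3/4 decision boundary.
            unreliability = 1.0 - np.abs(levels - 3.5) / 3.5
            votes = syndrome.dot(H) * unreliability
            votes = votes / np.maximum(col_weight, 1)  # average by column weight
            worst = np.argmax(votes)
            levels[worst] = 7.0 - levels[worst]        # assumed flip: mirror the level
        return (levels >= 4).astype(int)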

Unfortunately, using the extra information from soft demodulation did not significantly improve the performance. 3.’) Finally, the sum-product algorithm gives the best performance, but has the highest complexity. It is based on estimating the likelihood ratio: the (ratio of) probabilities that a transmitted bit is a 1, or a 0, given the received estimates of the bits in the codeword.
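Written out (the slide’s formula is not reproduced), the ratio in question presumably has the standard form, with y denoting the vector of received soft estimates:

    \[
      L(c_i) \;=\; \frac{P(c_i = 1 \mid \mathbf{y})}{P(c_i = 0 \mid \mathbf{y})},
      \qquad
      \mathrm{LLR}(c_i) \;=\; \ln L(c_i).
    \]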

There are numerous ways to estimate this ratio; often the log of the ratio is used. The sum-product algorithm is an iterative update algorithm: the ratio for each bit is recalculated many times, with the probabilistic estimate (that each transmitted bit was a 1) being updated each time. The author eschewed a common way of calculating this ratio that uses the tanh and arctanh functions, since it was felt that these transcendental functions would not be easily calculated in hardware. However, the author’s way of calculating these ratios (even with adaptations for the SBG8 channel) requires more multiplications. The author’s implementation gave exceptional results, considering that the codes were short and the channel was not the ideal real-valued-output AWGN channel of the literature, but the author’s hypothesized SBG8 channel.
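For reference, the tanh/arctanh formulation mentioned (the one the author chose to avoid) is the standard check-to-bit update from the LDPC literature, where V_j is the set of bit positions participating in check j and L(q_{i'j}) are the incoming bit-to-check log-likelihood messages:

    \[
      L(r_{ji}) \;=\; 2\,\operatorname{arctanh}\!\Bigl(\prod_{i' \in V_j \setminus \{i\}} \tanh\bigl(\tfrac{1}{2}\, L(q_{i'j})\bigr)\Bigr).
    \]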

The first two examples are taken from Lin and Costello.

The results for the classical block codes were obtained using Matlab/Simulink models; Simulink already has the encoding/decoding blocks available for such models. The simulation of the SBG8 channel was coded as an m-file in Matlab. Each codeword transmission required estimation of the channel’s noise variance from the counts of received levels. The iterative decoding methods (hard and soft bit flip, sum-product algorithm) were then separately coded and used.

The major research topics in LDPC coding theory concern 1.) finding codes with good distance properties, 2.) finding codes with lower-complexity encoding, 3.) finding codes whose graphs have larger girth, and 4.) reducing the complexity of the sum-product algorithm.
