
Density Evolution, Capacity Limits, and the "5k" Code Result (L. Schirber, 11/22/11)
The Density Evolution (DE) algorithm calculates a "threshold Eb/N0" - a performance bound between practical codes and the capacity limit - for classes of regular low-density parity-check (LDPC) codes [2].
–A BER simulation result for an R = 1/2 code with N = 5040 is compared to the DE threshold Eb/N0 and to the "channel capacity" Eb/N0.
TOPICS
–Result: half-rate "5k" code: BER curve and bandwidth-efficiency plot
–Channel models and channel capacity [1]; bandlimited channels and spectral efficiency
–Review of the LLR decoder algorithm [1]
–Density Evolution algorithm: analytical development [1], [3]; algorithm description and examples for code rates R = 1/3 and 1/2
[1] Moon, Error Correction Coding: Mathematical Methods and Algorithms (2005)
[2] Gallager, Low-Density Parity-Check Codes, MIT Press (1963)
[3] Barry, "Low-Density Parity-Check Codes", Georgia Institute of Technology (2001)

4-Cycle-Removed vs Original LDPC Codes (N = 5040, M = 2520)
We try the 4-cycle removal algorithm (see Lecture 18) with a larger matrix. There is a slight improvement with 4-cycles removed, although there are only 11 word errors at BER = 1e-5.
Note that in these cases every word error is a decoder failure (Nw = Nf); hence there are no undetected codeword errors (Na1 = Na2 = 0).
Roughly 2.7 million seconds is 31 days, so the run for Eb/N0 = 1.75 dB took a CPU month on a PC!

N = 1080 vs N = 5040 LDPC Codes: Gap to Capacity at a Given BER
The half-rate "5k" code is within about 1.6 dB of the Shannon-limit Eb/N0 at BER = 1e-5 (magenta dash-dotted line). The Density Evolution (DE) threshold Eb/N0 (black dashed line) is also shown.

5k Code Result in the Bandwidth-Efficiency Plane
BER performance is summarized by giving the required Eb/N0 to reach a certain BER level.
–We choose a BER level and report the minimum Eb/N0 = 1.75 dB necessary to reach it for the 5k code.
The "density evolution threshold" (red diamond) and the "capacity limit for a binary-input AWGN channel" (green circle) are compared in the plot and table.

Channel Models and Channel Capacity
We consider 3 "input X, output Y" channel models:
–(1) the simple binary symmetric channel (BSC),
–(2) the additive white Gaussian noise channel (AWGNC), and
–(3) the binary-input AWGN channel (BAWGNC).
We calculate the mutual information function I(X;Y) from the definitions for each channel.
–The resulting channel capacity C (from [1]), the maximum possible mutual information over all input distributions for X, is given and plotted versus an SNR measure.
(Figure: BSC transition diagram - input X, output Y, crossover probability p, correct-reception probability 1 - p.)

Calculation of Channel Capacity C: Binary Symmetric Channel Example (1 of 5)
Exercise 1.31 (from [1]): For a BSC with crossover probability p having input X and output Y, let the probability of inputs be P(X = 0) = q and P(X = 1) = 1 - q.
–(a) Show that the mutual information is I(X;Y) = H(p + q - 2pq) - H(p), where H(x) = -x log2(x) - (1-x) log2(1-x) is the binary entropy function.
–(b) By maximizing over q, show that the channel capacity per channel use is C = 1 - H(p), achieved at q = 1/2.

Calculation of Channel Capacity: Binary Symmetric Channel Example (2 of 5)
(Figure: BSC transition diagram - crossover probability p, correct-reception probability 1 - p.)

Calculation of Channel Capacity: Binary Symmetric Channel Example (3 of 5)

Calculation of Channel Capacity: Binary Symmetric Channel Example (4 of 5)

Calculation of Channel Capacity: Binary Symmetric Channel Example (5 of 5)
(Figure: BSC transition diagram - crossover probability p, correct-reception probability 1 - p.)
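As a quick numerical check of Exercise 1.31 (my sketch, not from the slides; names are illustrative), the lines below evaluate I(X;Y) = H(p + q - 2pq) - H(p) over a grid of input probabilities q and confirm that the maximum sits at q = 1/2, where it equals 1 - H(p).

import numpy as np

def binary_entropy(x):
    # H(x) = -x log2(x) - (1-x) log2(1-x), with the convention H(0) = H(1) = 0
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def bsc_mutual_information(p, q):
    # I(X;Y) = H(p + q - 2pq) - H(p) for a BSC with crossover p and P(X = 0) = q
    return binary_entropy(p + q - 2 * p * q) - binary_entropy(p)

p = 0.1
q = np.linspace(0.0, 1.0, 1001)
I = bsc_mutual_information(p, q)
print(q[np.argmax(I)], I.max(), 1 - binary_entropy(p))  # q* = 0.5, then C = 1 - H(p) twice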

Calculation of Channel Capacity: Binary-input AWGN Channel (BAWGNC) (1 of 3)
Example: Suppose we have an input alphabet Ax = {-a, a} (e.g., BPSK modulation with amplitude a) with P(X = a) = P(X = -a) = 1/2. Let N ~ N(0, σ^2) and Y = X + N. Find the mutual information and channel capacity.

Calculation of Channel Capacity: Binary-input AWGN Channel (BAWGNC) (2 of 3)

Calculation of Channel Capacity: Binary-input AWGN Channel (BAWGNC) (3 of 3)

Aside: Probability Function φ(y; a, 1)
The probability function φ is a function of y with two parameters: amplitude a and noise variance σ^2. We set σ^2 to 1 here for convenience.
φ(y; a, σ^2) is the average of two Gaussians - with separation 2a and common variance σ^2 - at a given y.
–It has a shape resembling a single Gaussian with variance σ^2 for small SNR = a^2/σ^2, and two separated Gaussians with variance σ^2 for large SNR.
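To make the role of this mixture density concrete, here is a small numerical sketch (my addition, not from the slides): with equiprobable inputs ±a, the BAWGNC capacity can be computed as C = h(Y) - h(N) = -∫ φ(y) log2 φ(y) dy - (1/2) log2(2πe σ^2). The grid limits and names are arbitrary choices.

import numpy as np

def bawgn_capacity(a, sigma):
    # C = h(Y) - h(N) in bits/channel use for equiprobable inputs +/- a over AWGN
    y = np.linspace(-a - 10 * sigma, a + 10 * sigma, 20001)
    dy = y[1] - y[0]
    gauss = lambda m: np.exp(-(y - m) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    phi = 0.5 * (gauss(a) + gauss(-a))                       # two-Gaussian mixture density of Y
    h_Y = -np.sum(phi * np.log2(phi + 1e-300)) * dy          # differential entropy of the output
    h_N = 0.5 * np.log2(2 * np.pi * np.e * sigma ** 2)       # differential entropy of the noise
    return h_Y - h_N

print(bawgn_capacity(a=1.0, sigma=1.0))   # roughly 0.49 bits/use at a^2/sigma^2 = 1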

Calculation of Channel Capacity: AWGN Channel Example
Let X ~ N(0, σx^2) and N ~ N(0, σn^2), independent of X. Let Y = X + N. Then Y ~ N(0, σx^2 + σn^2). Find the mutual information and capacity.
The result is the familiar C = (1/2) log2(1 + σx^2/σn^2) bits per channel use.

Capacity vs SNR for 3 Channel Models (from [1])
Capacity (bits/channel use) is determined for the
–Binary Symmetric Channel (BSC)
–AWGN Channel (AWGNC)
–Binary-input AWGN Channel (BAWGNC)

Bandlimited Channel Analysis: Capacity Rate
Assume that the channel is band-limited, i.e., the frequency content in any input, noise, or output signal is bounded above by frequency W in Hz.
–By virtue of the Nyquist-Shannon sampling theorem, it is then sufficient to choose a sampling frequency of 2W to adequately sample X, the channel input signal.
Recall that the channel has capacity C in units of bits per channel use, which is the maximal mutual information between input X and output Y.
We can define a "capacity rate" - denoted here by C' to differentiate it from capacity C - in bit/s as the maximum possible rate of transfer of information for the bandlimited channel: C' = 2WC, since the band-limited channel can be used up to 2W times per second.
We define the spectral efficiency for a bandlimited channel as the ratio of the data rate (Rd) to W. The maximum spectral efficiency η is equal to C'/W.

Aside: Shannon-Nyquist Sampling Theorem (from Wikipedia)
The Nyquist-Shannon (or Shannon-Nyquist) sampling theorem states:
–Theorem: If a function s(t) contains no frequencies higher than W hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2W) seconds apart.
Suppose a continuous-time signal s(t) is sampled at a finite number (Npts) of equally-spaced time values with sampling interval Δt.
–In other words, we are given a "starting time" t0 along with a sequence of values s[n], where s[n] = s(tn) for n = 1, 2, …, Npts, with tn = t0 + (n-1)Δt and Δt = t2 - t1.
If the signal is band-limited by W, where W ≤ 1/(2Δt), then the theorem says we can reconstruct the signal exactly: i.e., given the Npts values s[n] we can infer what the (continuous) function s(t) has to be for all t.
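A small demonstration of the reconstruction claim (my sketch, not from the slides): a low-frequency tone is sampled at the Nyquist rate 2W and then evaluated at in-between times with the Whittaker-Shannon interpolation formula s(t) = Σn s[n] sinc((t - tn)/Δt). The tone, W, and the number of samples are arbitrary choices; with only finitely many samples the reconstruction is exact up to edge-truncation error.

import numpy as np

f0, W = 3.0, 5.0                 # a 3 Hz tone, treated as bandlimited to W = 5 Hz
dt = 1.0 / (2 * W)               # Nyquist-rate sampling interval, 1/(2W)
n = np.arange(200)
t_n = n * dt                     # sample times
s_n = np.cos(2 * np.pi * f0 * t_n)   # the samples s[n]

def reconstruct(t, s_n, t_n, dt):
    # Whittaker-Shannon interpolation: sum of sinc kernels weighted by the samples
    return np.array([np.sum(s_n * np.sinc((ti - t_n) / dt)) for ti in np.atleast_1d(t)])

t_test = np.linspace(2.0, 4.0, 7)     # interior points, away from edge effects
print(np.max(np.abs(reconstruct(t_test, s_n, t_n, dt) - np.cos(2 * np.pi * f0 * t_test))))
# the error is small; it is not exactly zero because only finitely many samples are used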

Spectral Efficiency Curve: Band-limited AWGN Channel
For a given Eb/N0 in dB, we find for the band-limited AWGN channel that there is a limiting value for the spectral efficiency η (measured in (bit/s)/Hz).
–In other words, we cannot expect to transmit at a bit rate (Rd) greater than that η times W, with W the channel bandwidth.
The Shannon limit is the minimum Eb/N0 for reliable communication.
(Figure: spectral efficiency curve; the unattainable region beyond the curve is labeled the Shannon-limit "keep out" region.)
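For the band-limited AWGN channel this curve has a closed form (a standard result, restated here as a sketch): η = log2(1 + SNR) together with Eb/N0 = SNR/η gives Eb/N0 = (2^η - 1)/η, whose η → 0 limit is ln 2, i.e., about -1.59 dB - the ultimate Shannon limit.

import numpy as np

def ebn0_db_required(eta):
    # minimum Eb/N0 (dB) to operate at spectral efficiency eta on the band-limited AWGNC
    return 10.0 * np.log10((2.0 ** eta - 1.0) / eta)

for eta in [2.0, 1.0, 0.5, 0.01]:
    print(eta, ebn0_db_required(eta))     # 1.76 dB, 0 dB, -0.82 dB, ...
print(10.0 * np.log10(np.log(2.0)))       # eta -> 0 limit: about -1.59 dB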

Spectral Efficiency for AWGN, BAWGN, and Quaternary-input AWGN Channels
The maximum spectral efficiencies (η) for the AWGN, binary-input AWGN, and quaternary-input AWGN channels are shown above (linear-scale and log-scale panels).
For large Eb/N0, η goes to 1 (bit/s)/Hz for the BAWGNC and 2 (bit/s)/Hz for the QAWGNC.
–Next we work through the details of constructing these curves.

Procedure for Generating Spectral Efficiency vs SNR and Eb/N0 (dB) Curves for Band-limited AWGN Channels
1. Choose a range of (receiver) SNR, e.g., SNR = [.001:.01:10].
2. Find the capacity C = f(SNR) in bits/channel use for each SNR.
3. Determine the capacity bit rate C' = 2κWC in bit/s, where the channel is used 2W times per second, W is the channel bandwidth, and κ = 1 for the AWGNC, 1/2 for the BAWGNC or QAWGNC (so that, consistent with the previous slide, η saturates at 1 and 2 (bit/s)/Hz respectively).
4. Calculate the maximum spectral efficiency η = C'/W with units of (bit/s)/Hz.
5. For each SNR also determine the corresponding (received) Eb/N0 in dB: Eb/N0 = SNR/(2C), converted to dB.
6. Plot η vs SNR and η vs Eb/N0 in dB.
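A minimal sketch of the six steps for the AWGNC (my illustration; κ = 1, W cancels out of η, and the variable names are arbitrary):

import numpy as np

snr = np.arange(0.001, 10.0, 0.01)          # 1. range of (receiver) SNR
C = 0.5 * np.log2(1.0 + snr)                # 2. AWGNC capacity in bits/channel use
kappa, W = 1.0, 1.0                         # 3. kappa = 1 for the AWGNC
C_rate = 2.0 * kappa * W * C                #    capacity bit rate C' in bit/s
eta = C_rate / W                            # 4. max spectral efficiency, (bit/s)/Hz
ebn0_db = 10.0 * np.log10(snr / (2.0 * C))  # 5. corresponding Eb/N0 in dB
# 6. plot eta vs snr and eta vs ebn0_db (e.g., with matplotlib)
print(np.interp(0.0, ebn0_db, eta))         # eta is about 1 (bit/s)/Hz at Eb/N0 = 0 dB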

Generating Spectral Efficiency Curves, Example 2: Band-limited, Binary-input AWGN Channel
(Figure: plot of η(SNR) for 0 < SNR < 10, and plot of η(Eb/N0) with Eb/N0 in dB.)

Message Passing Formulas in the LLR Decoder Algorithm
The LLR decoder computes check LLRs from bit LLRs, and then bit LLRs from check LLRs.
–Assume that λ(cj | r) is approximately equal to λ(cj | r\n) for j ≠ n.
We can visualize these computations as passing LLRs along edges of the parity check graph.
(Figure: parity check tree rooted at bit cn for a (3,6) code, showing the root checks zm, m in Mn, the tier-1 bits, the tier-1 checks, and the tier-2 bits.)
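A compact sketch of the two message-passing rules in LLR form (my illustration of the standard sum-product updates; it is not the code used for the reported results): the check-to-bit update applies the tanh rule over the other bits on the check, and the bit-to-check update adds the channel LLR to the other incoming check LLRs.

import numpy as np

def check_update(bit_llrs_excl):
    # tanh rule: check-to-bit LLR computed from the other bits' LLRs on that check
    t = np.prod(np.tanh(np.asarray(bit_llrs_excl) / 2.0))
    t = np.clip(t, -0.999999, 0.999999)      # avoid infinities at +/- 1
    return 2.0 * np.arctanh(t)

def bit_update(channel_llr, check_llrs_excl):
    # bit-to-check LLR: channel LLR plus the other checks' incoming LLRs
    return channel_llr + np.sum(check_llrs_excl)

# toy usage on one edge of a (3,6) code
print(check_update([1.2, -0.4, 2.0, 0.7, 3.1]))   # 5 other bits on a weight-6 check
print(bit_update(0.9, [1.5, -0.2]))               # 2 other checks on a weight-3 bit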

LLR LDPC Decoder Algorithm [2]
(Figure: the decoder algorithm listing, with an annotation marking the adjustment that removes intrinsic information from each passed message.)

Ground Rules and Assumptions for Density Evolution
Density evolution tracks the iteration-to-iteration PDFs calculated in the log-likelihood ratio (LLR) LDPC decoder algorithm (using BPSK over AWGN).
–The analysis presented here makes several simplifying assumptions:
1. The code is regular with wc = column weight, wr = row weight, and the code length N is very large.
2. The Tanner graph is a tree, i.e., no cycles exist in the graph.
3. The all-zero codeword is sent, so the received vector is Gaussian.
4. The bit and check LLRs - the λn for 1 ≤ n ≤ N and the Λm,n for 1 ≤ m ≤ M, 1 ≤ n ≤ N - are consistent random variables and are identically distributed over n and m.
–The means of the check LLRs - denoted by μ[l] for the mean at iteration l - satisfy a recurrence relation, which is described next.

Density Evolution Analysis (1 of 6)
Suppose we map 0 to a and 1 to -a, with a denoting the signal amplitude for a (baseband BPSK) transmitted waveform (over an assumed 1-ohm load).
Vector r is assumed to equal the mapped codeword plus random noise from the channel (i.e., ideal synchronization and detection are assumed).
–Here we assume the channel is AWGN, so each component of the noise is an (uncorrelated) Gaussian random variable with zero mean and known variance σ^2 (found from a, the code rate R, and the ratio Eb/N0).
(Figure: system block diagram - message m → Encoder (R = K/N; A, G) → codeword c → Signal Mapper (e.g., BPSK, amplitude a) → t → + channel noise → r → De-Mapper and Decoder (A, L); Rd = 1 bit/s, Tb = 1/Rd = 1 s bit time.)
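One standard way to obtain σ^2 from a, R, and Eb/N0, written as a sketch (my addition; it assumes each BPSK symbol carries energy a^2 and R information bits, so Eb = a^2/R and σ^2 = N0/2):

import numpy as np

def noise_variance(a, R, ebn0_db):
    # sigma^2 = a^2 / (2 * R * Eb/N0), with Eb/N0 given in dB
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return a ** 2 / (2.0 * R * ebn0)

print(noise_variance(a=1.0, R=0.5, ebn0_db=1.75))   # noise variance for the "5k" run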

Density Evolution Analysis (2 of 6)
Suppose (without loss of generality) that the all-zero codeword is transmitted, which implies for our current definitions that tn = a for n = 1, 2, …, N.
The PDF for rn with this all-zero codeword assumption is Gaussian, with the same mean and variance for each n: rn ~ N(a, σ^2).
(Figure: the same block diagram with m = 0 and c = 0.)

Density Evolution Analysis (3 of 6)
Recall that the LLR decoder algorithm (Algorithm 15.2) initializes the bit LLRs or bit "messages" - the λ(cn | r) or λn - to a constant (Lc = 2a/σ^2) times rn.
Hence we see that the initial bit LLRs are all Gaussian, with mean 2a^2/σ^2 and variance 4a^2/σ^2 - each variance equal to twice its mean. We call such random variables consistent.
Although the initial PDFs of the bit LLRs are consistent, those of subsequent iterations are not in general; however, we assume that all bit LLR PDFs are consistent.
Also assume that the λn are identically distributed over n: i.e., the means of the bit LLRs, or m[l], are the same for each n, but do vary with iteration l.
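A quick empirical check of this consistency property (my sketch, with an assumed Lc = 2a/σ^2): simulate received samples under the all-zero codeword, form the initial LLRs, and compare the sample variance to twice the sample mean.

import numpy as np

rng = np.random.default_rng(0)
a, R, ebn0_db = 1.0, 0.5, 1.75
sigma2 = a ** 2 / (2.0 * R * 10 ** (ebn0_db / 10.0))        # noise variance, as above
r = a + rng.normal(0.0, np.sqrt(sigma2), size=1_000_000)    # all-zero codeword sent
llr = (2.0 * a / sigma2) * r                                 # initial bit LLRs, Lc * r
print(llr.mean(), llr.var())                  # the variance is about twice the mean: consistent
print(2 * a**2 / sigma2, 4 * a**2 / sigma2)   # theoretical mean and variance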

Aside: Consistent Random Variables
Define a random variable to be consistent if:
–1. it is Gaussian, and
–2. its variance is equal to twice its mean in absolute value: σ^2 = 2|μ|.
For density evolution we assume that the bit and check LLRs are consistent random variables. Furthermore, their statistics depend only on the iteration number (l), not on the indices m or n.
If the mean of the LLR increases towards infinity, the corresponding bit (or check) estimate becomes more certain.

Density Evolution Analysis (4 of 6)
Furthermore, assume the check LLRs are consistent and identically distributed.
Assume the LDPC code is (wc, wr)-regular and the Tanner graph is cycle-free.
The bits (cj) in check m besides n will be distinct - by the cycle-free assumption - and assuming they are also conditionally independent given r\n allows us to use the tanh rule to relate the bit LLRs and check LLRs:
tanh(Λm,n/2) = Π over j in Nm\n of tanh(λj/2).    (8)

Density Evolution Analysis (5 of 6)
Take the expectation of both sides of (8); with the independence assumption this gives E[tanh(Λm,n/2)] = (E[tanh(λj/2)])^(wr - 1).    (9)
Define a function Ψ(x) as the expected value of tanh(u/2) when u is a consistent Gaussian with mean x (i.e., u ~ N(x, 2x)); Ψ(x) is plotted on the right along with tanh(x/2).
Recast (9) in terms of Ψ(x) to write down Ψ(μ[l]) = [Ψ(m[l])]^(wr - 1).    (10)

Aside: Function Ψ(x) and Inverse
We will need to evaluate the bounded function y = Ψ(x), where x is any real number and y ranges between -1 and 1.
The inverse function x = Ψ^-1(y), -1 < y < 1, also needs to be evaluated, and its evaluation (near y = 1 or -1) leads to numerical instabilities.
(Figure: y = Ψ(x), defined for any x but only -10 < x < 10 shown, and x = Ψ^-1(y) for -1 < y < 1.)
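One way to evaluate Ψ and its inverse numerically (my sketch, assuming the definition Ψ(x) = E[tanh(u/2)] with u ~ N(x, 2x) used above, restricted to x ≥ 0 since that is all density evolution needs): brute-force quadrature for Ψ and bisection for Ψ^-1, with the argument capped so the instability near y = 1 is avoided.

import numpy as np

def psi(x, npts=4001):
    # Psi(x) = E[tanh(u/2)] with u ~ N(x, 2x), for x >= 0 (Psi(0) = 0)
    if x <= 0:
        return 0.0
    s = np.sqrt(2.0 * x)
    u = np.linspace(x - 10 * s, x + 10 * s, npts)
    du = u[1] - u[0]
    pdf = np.exp(-(u - x) ** 2 / (4.0 * x)) / np.sqrt(4.0 * np.pi * x)
    return float(np.sum(np.tanh(u / 2.0) * pdf) * du)

def psi_inv(y, hi=200.0, tol=1e-10):
    # invert Psi by bisection on [0, hi]; cap y so values too close to 1 map to hi
    y = min(y, psi(hi))
    lo_x, hi_x = 0.0, hi
    while hi_x - lo_x > tol * (1.0 + hi_x):
        mid = 0.5 * (lo_x + hi_x)
        lo_x, hi_x = (mid, hi_x) if psi(mid) < y else (lo_x, mid)
    return 0.5 * (lo_x + hi_x)

print(psi(4.0), psi_inv(psi(4.0)))      # the round trip recovers x = 4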

Density Evolution Analysis (6 of 6)
From the bit LLR update equation in the LLR decoding algorithm (with some re-shuffling of operations), the message from bit n to a check is the channel LLR plus the incoming LLRs from the other wc - 1 checks: λn = Lc·rn + Σ over m' in Mn\m of Λm',n.    (11)
Take the expected value of both sides of (11): m[l] = 2a^2/σ^2 + (wc - 1)·μ[l-1].    (12)
Plug (12) into (10) to develop (13), a recurrence relation for the sequence μ[l]:
μ[l] = Ψ^-1( [Ψ(2a^2/σ^2 + (wc - 1)·μ[l-1])]^(wr - 1) ).    (13)
Initialize the calculations with μ[0] = 0.

Density Evolution Algorithm
(Figure: listing of the density evolution algorithm.)
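A minimal sketch of how such an algorithm can be implemented (my illustration, reusing the psi, psi_inv, and noise_variance helpers sketched earlier; it is not the listing from the slide): iterate recurrence (13) for a given Eb/N0, call it a success if μ[l] reaches μmax within an iteration budget, and bisect on Eb/N0 to estimate the threshold.

import numpy as np
# assumes psi(), psi_inv(), and noise_variance() from the earlier sketches

def de_converges(ebn0_db, wc=3, wr=6, a=1.0, mu_max=100.0, max_iters=2000):
    # run recurrence (13); True if the check-LLR mean reaches mu_max
    R = 1.0 - wc / wr                       # design rate of a regular (wc, wr) code
    sigma2 = noise_variance(a, R, ebn0_db)
    m0 = 2.0 * a ** 2 / sigma2              # mean of the initial (channel) LLRs
    mu = 0.0
    for _ in range(max_iters):
        mu = psi_inv(psi(m0 + (wc - 1) * mu) ** (wr - 1))
        if mu >= mu_max:
            return True
    return False

lo, hi = 0.5, 2.5                           # bracket the threshold in dB
for _ in range(20):                         # bisection on Eb/N0 (slow but simple)
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if de_converges(mid) else (mid, hi)
print("threshold estimate:", 0.5 * (lo + hi), "dB")   # for a (3,6)-regular code: roughly 1.1-1.2 dB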

Density Evolution: Example 1 (Example 15.8 in [1])
(Figure: three runs of recurrence (13) with μmax = 100 - at Eb/N0 = 1.76 dB; at an intermediate Eb/N0, where μ reaches μmax after ~550 iterations; and at Eb/N0 = 1.8 dB, where μ reaches μmax after ~55 iterations.)
The check LLR mean value μ approaches a constant if Eb/N0 is less than the threshold {Eb/N0}t, or grows without bound if Eb/N0 > {Eb/N0}t. Here {Eb/N0}t lies between 1.76 and 1.8 dB.
Note: an LLR of 30 means P(zm,n = 0) is essentially 1, i.e., cn = 0 for all n.

Density Evolution: Example 2
(Figure: three runs of recurrence (13) with μmax = 100, at Eb/N0 = 1.16 dB, 1.19 dB, and 1.2 dB; at 1.2 dB, μ reaches μmax after ~128 iterations.)
Here {Eb/N0}t ≈ 1.2 dB.

Comparing Density Evolution Results (from [4]): Comparisons to [1]
Several Density Evolution cases were attempted; the thresholds produced are listed in red in the table.
Apparently, there is a slight (< 0.05 dB) discrepancy between Moon's results in Table 15.1 (which are taken from [4]) and mine.
–However, his Example 15.8 and the accompanying figure suggest a threshold of 1.76, not 1.73, for the R = 1/3 rate case.
[4] Chung, Richardson, and Urbanke, "Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation", IEEE Transactions on Information Theory, vol. 47, no. 2 (2001)