Reduced Complexity LDPC Decoder: Min Sum Algorithm (L. Schirber 7/11/13) - Presentation transcript

1 Reduced Complexity LDPC Decoder: Min Sum Algorithm (L. Schirber 7/11/13)
The Min Sum and standard (Belief Propagation) LDPC decoding algorithms are compared, giving results for both Matlab and MEX-file implementations. In addition, the ideas from information theory and probability leading to these algorithms - the standard form and the variations below - are explained.

TOPICS
Results: Min Sum and Belief Propagation Decoder Algorithm Simulations
Implementation Comparison: "MEX-file" versus "Matlab-only"
LDPC Code Definition, Signal Model, and Decoding Problem
LLR Decoder Algorithms: Log-Likelihood Ratio Representation
Belief Propagation (BP) Algorithm: Standard Tanh Form [1],[3]
Sign-Magnitude Form [2],[3]
Min Sum Approximation
DVB-S2 Form

References
[1] Moon, "Error Correction Coding: Mathematical Methods and Algorithms" (2005)
[2] Gallager, "Low-Density Parity-Check Codes", MIT Press (1963)
[3] Guilloud, "Low Density Parity Check Codes", thesis at Telecom ParisTech (2004)

2 Result: Min Sum vs Belief Propagation BER Performance: DVB-S2 Example (MEX-file Implementations)
Min Sum shows a 0.25 dB performance hit. The Min Sum algorithm has worse BER performance and, unexpectedly, comparable (or even slightly slower) run times.

3 Result Tabular Summary: Min Sum and MEX-file BER Performance: DVB-S2 Matrix (N = 16200, M = 9000, K = 7200) Both MEX-file implementations run a factor of ten faster than the original Matlab-only algorithms. The Min Sum decoder is 0.2 to 0.3 dB worse than the original, but runs at about the same realizations-per-second rate.

4 Result: MEX-file versus Original Matlab Implementation: Belief Propagation Algorithm
MEX (or mex) stands for "Matlab executable": compiled C (or other language) code that can be called from Matlab scripts or functions. The MEX-file (red triangle) decoder results here align well with the Matlab-only (cyan circle) results, and the runs take 15 times less time.
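As a quick illustration of the workflow (a minimal sketch - the file name, function name, and argument list here are hypothetical, not taken from the actual decoder used for these runs):

% Compile a C decoder source into a MEX-file, then call it from Matlab
% like any ordinary function. Names and arguments are illustrative only.
mex ldpc_decode_bp.c                            % builds the MEX binary
[c_hat, n_iters] = ldpc_decode_bp(r, A, 50);    % received vector, parity check matrix, max iterations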

5 Definition of an LDPC Code
Definition 15.1 A low density parity check code is a linear block code that has a very sparse parity check matrix A. The column weight wc of a column of A is the number of 1s in that column; the row weight wr of a row is the number of 1s in that row. An LDPC code is regular if the column weights wc are the same for every column, and the row weights wr are the same for every row.
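A minimal Matlab sketch of the definition (variable names are illustrative assumptions; A is a given M-by-N {0,1} parity check matrix):

% Column and row weights of an M-by-N {0,1} parity check matrix A
wc = full(sum(A, 1));          % column weights (ones per column)
wr = full(sum(A, 2));          % row weights (ones per row)
is_regular = all(wc == wc(1)) && all(wr == wr(1));
density = nnz(A) / numel(A);   % "low density" means this is small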

6 Idealized Tanner Graph for a (3,4)-Regular LDPC Code (wc = 3, wr = 4)
A Tanner graph depicts all connections between bits and checks in the parity check matrix. Here we draw a conceptual or idealized Tanner graph (a tree) for a (3,4)-regular LDPC code of arbitrary (and large) size N. We think of the decoding algorithm beginning by assuming bit probabilities at the top tier of the tree, and working down the tree - tier by tier - to determine the probability of the root bit.
[Figure: (3,4) Code, idealized Tanner graph from bit cn. The root bit cn connects to root checks zm, m in Mn; tier 1 bits cn', n' in Nm,n; tier 1 checks zm', m' in Mn',m; tier 2 bits cn'', n'' in Nm',n'.]

7 Tanner Graph Portion Drawing: Gallager's (3,4)-Regular Code (N = 20, K = 7)
Gallager's (3,4)-regular parity check matrix A is shown below. To the right is a drawing of a portion of the Tanner graph - as suggested by the ellipses - with bit 1 (or c1) as the root bit.
[Figure: (3,4) parity check graph from bit cn with n = 1. Root bit c1; root checks zm, m in Mn (z1, z6, z11); tier 1 bits cn', n' in Nm,n; tier 1 checks zm', m' in Mn',m.]
6-cycles in the graph above: (c1, z1, c3, z13, c13, z6), (c1, z6, c5, z2, c6, z11), (c1, z1, c4, z14, c9, z6), (c1, z1, c3, z8, c18, z11), (c1, z1, c2, z7, c6, z11), (c1, z6, c9, z3, c12, z11)

8 Tanner Graph Drawing: Gallager's (3,4)-Regular Code (N = 20, K = 7)
A Tanner graph depicts all connections between checks and bits in a linear block code. The Tanner graph for Gallager's ("small-size") (3,4)-regular code, with M = 15 parity checks and N = 20 bits, is drawn to the right.
[Figure: (3,4) Code, Tanner graph from bit cn with n = 1. Root bit c1; root checks zm, m in Mn; tier 1 bits cn', n' in Nm,n; tier 1 checks zm', m' in Mn',m; tier 2 bits cn'', n'' in Nm',n'.]
The decoding algorithm starts by assigning probabilities (or LLRs) to each bit (circle). Then check probabilities are found from the bit probabilities. Bit probabilities are then updated from the check probabilities. This process can be repeated.

9 Signal Model for LDPC Decoder: AWGN Channel
[Flow diagram: message m enters the Encoder (R = K/N, A, G), producing codeword c; the Signal Mapper (e.g., BPSK, amplitude a) produces t; the AWGN channel adds noise to give r; the De-Map/Decode block (A, pn(x), L) recovers the message. Tb = bit time [s]; Rd = data rate = 1/Tb.]
The encoded message c = Gm is mapped from binary numbers into a "signal space" with, for example, 2 antipodal states as in BPSK. The received vector r is assumed to be equal to the mapped codeword plus random noise from the channel (i.e., ideal demodulation/synchronization is assumed). Here we assume the channel is AWGN, so each component of the noise is an (uncorrelated) Gaussian random variable with zero mean and known noise variance σ2 (found from a, the code rate R, and the ratio Eb/N0).
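A minimal Matlab sketch of this signal model, assuming the BPSK mapping 0 -> +a, 1 -> -a and the usual relation sigma^2 = a^2 / (2 R Eb/N0) for rate-R coded BPSK (the mapping, variable names, and exact sigma^2 convention are assumptions, not taken from the original code):

% BPSK over AWGN: map coded bits to +/- a and add white Gaussian noise
EbN0_dB = 1.0;                               % operating point in dB
c      = randi([0 1], 1, 16200);             % stand-in coded bits (a real run uses c = mod(m*G,2))
a      = 1.0;                                % signal amplitude
R      = 7200/16200;                         % code rate K/N (DVB-S2 example above)
EbN0   = 10^(EbN0_dB/10);                    % Eb/N0 as a linear ratio
sigma2 = a^2 / (2 * R * EbN0);               % noise variance per received sample
t      = a * (1 - 2*c);                      % mapped codeword
r      = t + sqrt(sigma2) * randn(size(t));  % received vector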

10 LDPC Decoder Problem (1 of 2)
[Flow diagram as on slide 9: Encoder (R = K/N, A, G) → Signal Mapper (e.g., BPSK, amplitude a) → AWGN channel → De-Map/Decode (A, pn(x), L); P(cn = x | rn) = pn(x); Tb = bit time.]
Given: A, the parity check matrix; r, the received vector. Assume antipodal signaling with amplitude a, an AWGN channel model with noise variance σ2, and a maximum number of decoder iterations (L) of 50. Determine the best estimate of the message in the flow diagram above - using the code constraints and the measurement r. In addition, determine the estimate of the codeword. If a codeword is found at some iteration of the algorithm - always between 1 and the maximum number of iterations (L) - the simulation is stopped, and the message associated with the codeword is compared with the message m.

11 LDPC Decoder Problem: Possible Outcomes (2 of 2)
[Flow diagram as on slide 9; the decoder is given r; A, L, pn(x).]
There are four distinct LDPC decoder outcomes:
1. success: codeword found, message correct
2. anomaly type 1: codeword found, message incorrect
3. failure: no codeword found, message incorrect
4. anomaly type 2: no codeword found, message correct
[Flowchart: the decoder branches on "codeword found?" and "message correct?" into the four outcomes above.]

12 LLR LDPC Decoder Algorithm [1]
[Slide shows the LLR LDPC decoder algorithm listing from [1], with an annotation marking the adjustment to remove redundant information.]

13 Measuring Information: Preliminary Notions
We want a quantitative measure of the information in a transmitted or received message signal (in each case a sequence of symbols). The rationale is that we need this quantity - defined for both transmitted and received signals - to decide whether a communication system is capable of a successful "transfer" of this quantity from transmitter to receiver. Consider generally a symbol alphabet - which could be as simple as just {0, 1} - comprising the set of individual symbols in the message. An information source has memory if a current symbol depends in any way on a previous symbol; otherwise it is a memoryless source. The information in a message - or even in a single symbol - can be posited to be inversely related to the probability of its occurrence. A very likely event - if it indeed occurs - doesn't tell us much; low uncertainty corresponds to little or no information. We expect then that
1. Information is related to probability
2. Information from independent outcomes should add

14 Information, Entropy, and Log Likelihood Ratio
Information in an individual symbol xi in alphabet Ax is defined [1] in terms of the logarithm of the reciprocal of the probability of its occurrence. Definition: The information in symbol xi is measured in bits and is equal to the log base 2 of the reciprocal of the probability of the production of that symbol. The entropy of a source of symbols is the average information over the symbols. Definition: The likelihood ratio of a {0,1} random variable X is the ratio (or the reciprocal ratio) of the probability of X = 1 to the probability of X = 0. The log-likelihood ratio is the log (base e) of the likelihood ratio of X.
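Written out (a reconstruction of the slide's definitions in standard notation):

$$ I(x_i) = \log_2 \frac{1}{P(x_i)} \ \text{bits}, \qquad H(X) = \sum_i P(x_i) \log_2 \frac{1}{P(x_i)}, \qquad \mathrm{LLR}(X) = \ln \frac{P(X=1)}{P(X=0)} $$

(or the reciprocal ratio inside the logarithm, depending on the sign convention in use).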

15 Guilloud's Belief Propagation (BP) LLR Relations (1 of 7)
Guilloud in Chapter 1 of [3] derives the Belief Propagation LLR relationships in a similar fashion to Moon [1]; unfortunately, however, there are differences in the expressions due to a (0,1) inversion in the log-likelihood ratio (LLR) definition. For example, Guilloud describes the "decomposition" of the information in bit n into 2 separate (and independent) information parts - intrinsic plus extrinsic - as below. Moon's expressions are identical apart from some changes in notation and an inversion in the LLR definition (1 on top, not 0, in the likelihood ratio). Hence the LLRs differ in sign; their magnitudes are (of course) exactly the same.

16 Guilloud's BP LLR Relations (2 of 7)
Extrinsic and intrinsic information for bit n, as described by Guilloud (and Moon):
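Reconstructed in Guilloud's notation (with Λ denoting the LLR under his sign convention, P(· = 0) in the numerator):

$$ T_n = \Lambda(c_n \mid \mathbf{r}) = \underbrace{\Lambda(c_n \mid r_n)}_{I_n \ (\text{intrinsic})} + \underbrace{\Lambda(c_n \mid \mathbf{r}_{\setminus n})}_{E_n \ (\text{extrinsic})} $$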

17 Guilloud's BP LLR Relations (3 of 7)
Guilloud defines a quantity of information provided to bit n (with n between 1 and N) by check m (with bit n a "participant" in that check), denoted by En,m. The extrinsic information En can be seen as a sum over m of the En,m, assuming that the code's Tanner graph is a tree (Guilloud's "cycle-free graph" hypothesis).
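That is (a reconstruction):

$$ E_n = \sum_{m \in M_n} E_{n,m} $$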

18 Guilloud's BP LLR Relations (4 of 7)
The expression for En,m can be simplified via the "tanh rule" (expressed below in accordance with Guilloud's sign convention), resulting in Equation (4).
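In Guilloud's sign convention the tanh rule takes the form (a reconstruction; with the substitution Λ(cn' | r\n) = Tn',m discussed on slide 21, the right-hand side is written in terms of the bit-to-check messages):

$$ \tanh\!\left(\frac{E_{n,m}}{2}\right) = \prod_{n' \in N_m \setminus n} \tanh\!\left(\frac{T_{n',m}}{2}\right) $$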

19 Guilloud's BP LLR Relations (5 of 7)
En,m represents the information "exuded" from check m in the direction of bit n. Example: E1,1, E2,1, E3,1, and E4,1 are represented as green arrows to the right. Recall that the En,m are summed over m to form En, and the total information for bit cn is Tn.
[Figure: (3,4) Code, Tanner graph from bit cn with n = 1, annotated with the messages E1,1, E1,6, E1,11, E2,1, E3,1, E4,1 and T18,5, T18,8, T18,11; T1 = I1 + E1,1 + E1,6 + E1,11.]

20 Guilloud's BP LLR Relations (6 of 7)
Tn represents the "total" information of bit n. Tn,m represents the information of bit n, excluding check m. Example: T18,11, T18,5, and T18,8 are the purple arrows to the right. Tn,m is found by subtracting En,m from Tn.
[Figure: the same annotated Tanner graph; T1 = I1 + E1,1 + E1,6 + E1,11 and T1,11 = I1 + E1,1 + E1,6.]

21 Guilloud's BP LLR Relations (7 of 7)
Tj,m = Λ(cj | r\n) is assumed by Guilloud, leading to (6). (See Appendix C.)
[Figure: the same annotated Tanner graph, showing the check node update at z6 with incoming messages T5,6, T9,6, T13,6: tanh(E1,6/2) = tanh(T13,6/2) tanh(T9,6/2) tanh(T5,6/2).]

22 Guilloud's BP LLR Relations Lead to an Iterative Algorithm
These last expressions are used to build an iterative algorithm. The figure and equations below illustrate some example transfers of information along edges.
[Figure: the same annotated Tanner graph, showing the messages T7,12, T11,12, T16,12 into check z12 and E2,12, E2,7, E2,1 into bit c2, with tanh(E2,12/2) = tanh(T7,12/2) tanh(T11,12/2) tanh(T16,12/2), T2 = I2 + E2,12 + E2,7 + E2,1, and T2,1 = I2 + E2,12 + E2,7.]
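A Matlab-only sketch of the resulting flooding iteration (all variable names are assumptions; the hard decision below assumes Guilloud's convention, where a positive LLR favors bit 0):

% One run of LLR belief propagation on an M-by-N {0,1} parity matrix A,
% given intrinsic LLRs Lin (1-by-N) and a maximum iteration count maxIter.
[M, N] = size(A);
T = repmat(Lin, M, 1) .* A;                 % bit-to-check messages, initialized to I_n
for iter = 1:maxIter
    E = zeros(M, N);                        % check-to-bit messages
    for m = 1:M                             % check node update (tanh rule)
        idx = find(A(m, :));
        t = tanh(T(m, idx) / 2);
        for k = 1:numel(idx)
            p = prod(t([1:k-1, k+1:end]));  % product over all participants but one
            E(m, idx(k)) = 2 * atanh(p);
        end
    end
    Tn = Lin + sum(E, 1);                   % bit node update: total information
    T  = (repmat(Tn, M, 1) - E) .* A;       % extrinsic split: T(m,n) = Tn - E(m,n)
    c_hat = double(Tn < 0);                 % hard decision (sign convention assumed)
    if all(mod(A * c_hat.', 2) == 0), break; end   % stop once a codeword is found
end

In practice the atanh call can overflow when messages saturate, which is one motivation for the sign-magnitude form derived next.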

23 Sign-Magnitude Form (1 of 5)
Gallager [2] suggested expressing the check node probability update in terms of signs and magnitudes of the log-likelihood ratios, to simplify calculations. Start from Guilloud's expressions for the LLR relations between check node LLRs (En,m ) and bit node LLRs (Tn,m ): Decompose both LLRs into "sign" and "magnitude":
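The decomposition (a reconstruction):

$$ E_{n,m} = \mathrm{sign}(E_{n,m})\,|E_{n,m}|, \qquad T_{n',m} = \alpha_{n',m}\,\beta_{n',m}, \quad \alpha_{n',m} = \mathrm{sign}(T_{n',m}), \ \beta_{n',m} = |T_{n',m}| $$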

24 Sign-Magnitude Form (2 of 5)
Substitute (7) into (6). Relate signs of LLRs in (8) and magnitudes in (9).
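Since tanh is odd, the signs and magnitudes separate (a reconstruction of (8) and (9)):

$$ \mathrm{sign}(E_{n,m}) = \prod_{n' \in N_m \setminus n} \alpha_{n',m}, \qquad \tanh\!\left(\frac{|E_{n,m}|}{2}\right) = \prod_{n' \in N_m \setminus n} \tanh\!\left(\frac{\beta_{n',m}}{2}\right) $$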

25 Sign-Magnitude Form (3 of 5) Aside: Gallager's f function
Gallager [2] defined a real-valued function f of a positive real variable x for use in LDPC decoding.
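The definition (a reconstruction):

$$ f(x) = \ln\frac{e^x + 1}{e^x - 1} = \ln\coth\frac{x}{2} = -\ln\tanh\frac{x}{2}, \qquad x > 0 $$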

26 Sign-Magnitude Form (4 of 5) Aside: Gallager's f function
Gallager's function approaches zero as x goes to infinity, and infinity as x goes to 0. Gallager's function is monotonically decreasing, and its graph is symmetric about the line y = x, implying that f is its own inverse.
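A short Matlab check of the self-inverse property (a sketch):

% Numerically verify that Gallager's f is its own inverse: f(f(x)) = x
f = @(x) log((exp(x) + 1) ./ (exp(x) - 1));
x = linspace(0.1, 10, 50);
max(abs(f(f(x)) - x))    % small, up to floating-point error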

27 Sign-Magnitude Form (5 of 5)
Work on (9). Take the log (base e) of both sides. Multiply both sides by -1, and distribute the sign over the terms in the sum. Use Gallager's f function to re-express (11).
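Carrying those steps out (a reconstruction): taking logs of (9), negating, and recognizing -ln tanh(x/2) = f(x) gives

$$ |E_{n,m}| = f\!\left(\sum_{n' \in N_m \setminus n} f(\beta_{n',m})\right) $$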

28 Guilloud's Sign-Magnitude Form of the BP Iterative Algorithm
These expressions are used by Guilloud to build a second iterative algorithm, where the check node update rule is modified to an equivalent expression.
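A Matlab sketch of the modified check node update, replacing the tanh-rule loop in the earlier sketch (all names are assumptions carried over from it):

% Check node update, sign-magnitude form
f = @(x) log((exp(x) + 1) ./ (exp(x) - 1));   % Gallager's f (self-inverse)
for m = 1:M
    idx = find(A(m, :));                      % bits participating in check m
    s = sign(T(m, idx));  b = abs(T(m, idx)); % signs and magnitudes
    for k = 1:numel(idx)
        others = [1:k-1, k+1:numel(idx)];
        E(m, idx(k)) = prod(s(others)) * f(sum(f(b(others))));
    end
end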

29 Sign-Magnitude Form: LLR Decoder Algorithm [1]: Moon's Notation
[Slide shows the sign-magnitude LLR decoder algorithm listing in Moon's notation.]

30 Min Sum Approximation: Sign-Magnitude Check Node Update
The check node update can be simplified by approximating f of the sum of the f(β) terms (as in equation (I) below) with the minimum LLR magnitude (as in (II)). Explanation: as f(u) is monotonically decreasing, the minimum LLR magnitude βmin produces the largest contribution to the sum of the f(β). A lower bound for the sum on the right-hand side of (I) is f(βmin). As f is its own inverse, f(f(βmin)) = βmin.
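In code, the Min Sum approximation changes only the innermost line of the sign-magnitude update (a sketch, same assumed names as above):

% Min Sum check node update: f(sum of f(beta)) is replaced by min(beta)
for m = 1:M
    idx = find(A(m, :));
    s = sign(T(m, idx));  b = abs(T(m, idx));
    for k = 1:numel(idx)
        others = [1:k-1, k+1:numel(idx)];
        E(m, idx(k)) = prod(s(others)) * min(b(others));   % approximation (II)
    end
end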

31 Min Sum LLR Decoder Algorithm [1]: Moon's Notation
[Slide shows the Min Sum LLR decoder algorithm listing in Moon's notation.]

32 Check Node Update for DVB-S2 (1 of 2)
The check node updates for the LLR decoders use expressions for the LLR of an exclusive-or sum of {0,1} random variables. (See appendix A.)

33 Check Node Update for DVB-S2 (2 of 2)
A fourth form for the LLR of a parity check function can be developed from the third form, using a different but equivalent expression for the function h(a,b). This last form (IIIb) is used in the DVB-S2 documentation (see annex G.2).
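The chain of equivalent pairwise forms, the last member being the expression used in DVB-S2 (a reconstruction from standard identities):

$$ h(a,b) = \ln\frac{1 + e^{a+b}}{e^a + e^b} = 2\,\mathrm{atanh}\!\left(\tanh\frac{a}{2}\,\tanh\frac{b}{2}\right) = \mathrm{sign}(a)\,\mathrm{sign}(b)\,\min(|a|,|b|) + \ln\frac{1 + e^{-|a+b|}}{1 + e^{-|a-b|}} $$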

34 Appendices
A: Log Likelihood Ratio Formulas for Exclusive-Or Sums [3]
B: LLR Decoder: Belief Propagation Algorithm [1]
C: Guilloud's LLR Expression: Justification of Tj,m = Tj - Ej,m [3]

35 LLR of an Exclusive-Or Sum (from [3]) (1 of 5)
Define the log-likelihood ratio (LLR) of a {0,1} random variable x as below. Define the LLR of the exclusive-or of two {0,1} random variables as below. If the two {0,1} random variables are independent, then we can write equivalent expressions for (A.2), as in (A.3). In (A.3), h is a real-valued function of 2 real variables.
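Reconstructed in [3]'s convention (P(x = 0) in the numerator of the likelihood ratio; the (A.1)-(A.3) labels follow the slides):

$$ \Lambda(x) = \ln\frac{P(x=0)}{P(x=1)} \quad (A.1), \qquad \Lambda(x_1 \oplus x_2) = \ln\frac{P(x_1 \oplus x_2 = 0)}{P(x_1 \oplus x_2 = 1)} \quad (A.2) $$

$$ \Lambda(x_1 \oplus x_2) = \ln\frac{1 + e^{\Lambda(x_1) + \Lambda(x_2)}}{e^{\Lambda(x_1)} + e^{\Lambda(x_2)}} = h\big(\Lambda(x_1), \Lambda(x_2)\big) \quad (A.3) $$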

36 LLR of an Exclusive-Or Sum (2 of 5)
Equation (A.3) can be put into a more efficient form for computation [3]. These transformations lead to the form of the LDPC decoder described in the DVB-S2 documentation. Denote, for notational convenience, the LLRs of the individual bits by a and b. Using these identifications we can rewrite (A.3) in terms of the new variables. Next we state a key result giving another form for h(a,b) - h a real function of 2 real variables - in terms of the signs and magnitudes of a and b, and an "error" term.
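The claimed result (a standard identity, reconstructed here):

$$ h(a,b) = \mathrm{sign}(a)\,\mathrm{sign}(b)\,\min(|a|, |b|) + \epsilon(a,b), \qquad \epsilon(a,b) = \ln\!\left(1 + e^{-|a+b|}\right) - \ln\!\left(1 + e^{-|a-b|}\right) $$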

37 LLR of an Exclusive-Or Sum (3 of 5)
We omit the proof of the claim for brevity. The next slide shows a plot of the error term (ε) versus a and b, for a and b between -8 and +8. Next we extend from 2 to n terms in the exclusive-or sum. The case n = 3 shows the general procedure: h is evaluated again with new arguments. We work out the expressions for n = 3 and then the general expressions, after plotting the epsilon function on the next slide.

38 Aside: Min Sum Correction Term (4 of 5)
We graph the two-dimensional "eps" function ε(a,b), the error term in the Min Sum approximation analysis.
[Figure: contour plot and color/grayscale plot of ε(a,b).]

39 LLR of an Exclusive-Or Sum [3] (5 of 5)
For n = 3 we use the result for h(a1, a2) together with a3 in a second evaluation of h. For the general result we use repeated applications of the h function; the LLR of the exclusive-or of k bits requires k - 1 evaluations of h.
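That is (a reconstruction):

$$ \Lambda(x_1 \oplus x_2 \oplus x_3) = h\big(h(a_1, a_2),\, a_3\big), \qquad \Lambda\!\left(\textstyle\bigoplus_{i=1}^{k} x_i\right) = h\big(\cdots h(h(a_1, a_2), a_3)\cdots,\, a_k\big) $$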

40 LLR Decoder (1 of 7) Consider the log likelihood ratio of the codeword bit cn. Denote the N-dimensional received vector as r = [r1 r2 ... rN]. Use conditional probability identities to re-express the numerator in (1):

41 LLR Decoder (2 of 7) A similar expression to (2) can be found for the denominator, leading to (3).

42 LLR Decoder (3 of 7) We've assumed an AWGN channel (with noise variance σ2), which allows us to express the first term as a constant times rn. We find that the LLR of bit cn can be expressed as the sum of an intrinsic term - determined only by the corresponding measurement rn - and an extrinsic term - dependent on the other measurements (r\n) and the code structure. Tn = total information, In = intrinsic, En = extrinsic.
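Under the AWGN assumption the intrinsic term is (a reconstruction; the overall sign depends on the bit-to-symbol mapping and LLR convention):

$$ T_n = I_n + E_n, \qquad I_n = \pm\,\frac{2\,a\,r_n}{\sigma^2} $$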

43 LLR Decoder (4 of 7) Now we express the probabilities in the extrinsic term En in terms of the parity checks. Let zm,n represent the sum of the bits in check zm except cn. We recognize that zm,n will equal 1 if and only if cn equals 1, since zm = 0 for each check m that contains cn. Hence, we can rewrite (5) in terms of zm,n: Assume that the Tanner graph associated with the code is cycle-free. Then the bits associated with zm,n are distinct for different values of m. An illustration is shown on the next slide.

44 LLR Decoder (5 of 7)
[Figure: parity check tree from bit cn for a (3,6) code; root bit cn, root checks zm with m in Mn, tier 1 bits cn' with n' in Nm,n. The bits under one root check are (assumed to be) conditionally independent of the bits under another.]
If the Tanner graph is cycle-free, then the bits associated with zm,n are distinct for different values of m. Assuming that they are also conditionally independent allows the joint probabilities in the numerator and denominator of (6) to be factored as products of individual probabilities:

45 LLR Decoder (6 of 7) Next define the log likelihood ratio for the zm,n. (Recall zm,n represents the sum of the bits in check zm except cn.) With this definition, we can rewrite (7) in terms of this second LLR function: Next, again assuming the bits cj in zm,n are conditionally independent (conditioned on r\n), we can apply the LLR theorem to re-express the sum in (8) using the tanh rule.

46 LLR Decoder (7 of 7) Now apply the theorem to re-express (8) - the LLR of a sum of (assumed) independent random variables - as (9), a product over the terms of the sum (the tanh rule).

47 Guilloud's LLR Expression (1 of 4)
Tj,m = Λ(cj | r\n) is explained, which allows us to go from (4) to (6). Assume the graph has no cycles, and that cj is an arbitrary bit with j in the index set Nm,n.
[Figure: (3,4) Code, idealized Tanner graph from bit cn; root bit cn, root checks zm (m in Mn), tier 1 bits cn' (n' in Nm,n) including cj, tier 1 checks zm' (m' in Mn',m), tier 2 bits cn'' (n'' in Nm',n').]

48 Guilloud's LLR Expression (2 of 4)
Work on Λ(cj | r\n) - which is independent of rn - splitting it up further into information available from rj and the remaining ri. (1) can be transformed as in Appendix B to lead to (2).
[Figure: the same idealized Tanner graph, with cj highlighted among the tier 1 bits.]

49 Guilloud's LLR Expression (3 of 4)
Now imagine that cj is the root bit of a tree itself. We have that zm,j = 0 if and only if cj = 0; hence the extrinsic information for cj can be accumulated from its own checks, giving (3). Note we have omitted check zm from the product in (3), as we are omitting information from bit n. Eliminating cn means discounting the dashed-line tree segments.
[Figure: the same idealized Tanner graph redrawn with cj as the root; the segments through check zm and bit cn are dashed.]

50 Guilloud's LLR Expression (4 of 4)
Combine (1), (2), and (3) and rewrite in terms of Guilloud's total and extrinsic information terms, this time for bit j and with the information from bit n excluded.
[Figure: the idealized Tanner graph with cj as root, labeled with Ij and the check messages Ej,m1 and Ej,m2; Tj,m = Ij + Ej,m1 + Ej,m2.]

