Presentation transcript: "Generalized Communication System: Error Control Coding Occurs in the Right Column"

6 Generalized Communication System: Error Control Coding Occurs in the Right Column.

18 Message bits and codewords: a codeword consists of the message bits plus parity bits.
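A minimal Matlab/Octave sketch of this, assuming a hypothetical (6,3) systematic code with generator matrix G = [I P] (P is chosen here purely for illustration; it is not from the slides):

    % Systematic encoding: codeword = [message bits, parity bits].
    P = [1 1 0;
         0 1 1;
         1 0 1];                  % hypothetical parity part
    G = [eye(3), P];              % generator matrix G = [I | P]
    m = [1 0 1];                  % message bits
    c = mod(m * G, 2);            % codeword: c = [1 0 1 0 1 1]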

19 For example, H here imposes three parity checks on the 6 codeword bits:
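The slide's H is not reproduced in the transcript; continuing the hypothetical example above, the matching parity-check matrix is H = [P' I], and a vector is a codeword exactly when its syndrome is zero:

    % Hypothetical 3x6 parity-check matrix matching G above: H = [P' | I].
    % Each of the 3 rows is one parity check on the 6 codeword bits.
    H = [1 0 1 1 0 0;
         1 1 0 0 1 0;
         0 1 1 0 0 1];
    c = [1 0 1 0 1 1];            % the codeword encoded above
    syndrome = mod(H * c', 2)     % all-zero: c satisfies all 3 checks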

23 Robert Gallager, 1963: Low-Density Parity-Check Codes. Introduced all the key concepts: 1.) low density of H, 2.) tree-based "hard" bit decoding, 3.) probabilistic decoding from soft demodulation, more precisely, from the a posteriori probabilities. The amount of computation required led to these codes being forgotten, until: David MacKay: "Good Error-Correcting Codes Based on Very Sparse Matrices." IEEE Trans. Info. Theory, 1999. D. MacKay and R. M. Neal: "Near Shannon Limit Performance of Low Density Parity Check Codes." Electronics Letters, 1996.

24 S.-Y. Chung, G. D. Forney, Jr., T. Richardson, and R. Urbanke: "On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit." IEEE Comm. Letters, Feb. 2001. But(!) the block length (N) of the code used was 10^7. The research on LDPC codes since then has worked to get results close to this, but with reasonable block lengths and more efficient decoding.

27 Construction Methods for LDPC Codes 1.) Finite geometry methods: see Lin and Costello's text. 2.) Consult the recent literature! 3.) Gallager's original random process. Start with L many rows, staggered as shown:

28 Then "stack" column permutations of this base matrix on top of itself:
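A minimal Matlab/Octave sketch of this construction, assuming illustrative parameters not given in the transcript: row weight k, L staggered base rows, and j stacked copies (the first copy left unpermuted):

    % Gallager-style random construction (a sketch under assumed parameters).
    k = 4;  L = 5;  j = 3;            % row weight, base rows, stacked copies
    n = L * k;                        % codeword length

    % Base matrix: L staggered rows of k consecutive ones,
    % so each column has weight exactly 1.
    H1 = zeros(L, n);
    for r = 1:L
        H1(r, (r-1)*k + 1 : r*k) = 1;
    end

    % Stack column permutations of the base matrix on top of itself.
    H = H1;
    for p = 2:j
        H = [H; H1(:, randperm(n))];  % random column permutation per copy
    end
    % H is now (j*L) x n with row weight k and column weight j.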

37 Other methods of modifying H to obtain higher girths exist, such as deleting a row or column. See Lin and Costello's text for further information.

40 Decoding methods: 1.) hard bit flip decoding, 2.) soft bit flip decoding, 3.) a modification of the sum-product algorithm for SBG8 (the author's hypothesized 8-level soft-output channel). 1.') Hard bit flip decoding can be summarized as "voting on which bit(s) are bad, by those checks which indicate an error," after which one flips the bits on the principle "worst first." Consider the following part of an H matrix:

41 If checks C1, C2 and C3 indicate an error (i.e., incorrect parity in the bits they check), then C1 "votes" that bits b1, b2, and b3 are in error, C2 votes that b1, b4, and b5 are in error, etc. After the voting is done, bit b1 will have received 3 votes, the other bits only 1 vote each, so bit b1 is flipped to its opposite. If the estimate is still not a codeword after flipping, the process is iterated up to some fixed number of times.
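A minimal Matlab/Octave sketch of this voting procedure, assuming a parity-check matrix H and a hard-decision received vector r (the function name and the max_iter cap are illustrative, not from the slides):

    % Hard bit-flip decoding by voting: flip the "worst" bit(s) first.
    function c = hard_bit_flip(H, r, max_iter)
        c = r;                            % current hard estimate (row vector)
        for it = 1:max_iter
            s = mod(H * c', 2);           % syndrome: 1 marks a failed check
            if all(s == 0)
                return;                   % all parity checks satisfied
            end
            % Each failed check votes for every bit it involves:
            votes = s' * H;               % votes(i) = # failed checks on bit i
            worst = (votes == max(votes));
            c(worst) = 1 - c(worst);      % flip the most-voted bit(s)
        end
    end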

42 The results gave surprisingly good performance. The sparsity of H matters because it concentrates the votes: if H were dense, most rows (checks) of H would check a large proportion of the bits, so many rows would indicate an error, and most bits would receive roughly the same number of votes. 2.') Soft bit flip decoding was an attempt by the author to extend the previous idea to the case of soft demodulation with 8 levels of output. The problems in adapting hard bit flip are: (a) how to assign "votes to be in error" based on a bit's current estimate, and (b) to what level one should "flip" the worst bits.

43 The author used the following scheme, "linear weighting": the voting was averaged over the number of checks (rows of H) used.

44 The question now is how to flip the worst bit, i.e. the one(s) with the most votes-to-be-in-error. The following scheme was tried: the total number of votes was averaged by the column weight. Then:

45 Unfortunately, using the extra information of soft demodulation didn't significantly improve the performance. 3.') Finally, the sum-product algorithm gives the best performance, but has the highest complexity. It is based on estimating the likelihood ratio: the (ratio of) probabilities that a transmitted bit is a 1, or a 0, given the received estimates of the bits in the codeword.
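In symbols (a LaTeX reconstruction from the description above; the slide's own formula is not in the transcript), with y the received soft-demodulated word:

    LR(x_i) = \frac{P(x_i = 1 \mid \mathbf{y})}{P(x_i = 0 \mid \mathbf{y})},
    \qquad
    L(x_i) = \ln LR(x_i)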

46 There are numerous ways to estimate this ratio. Often, the log of the ratio is used. The sum-product algorithm is an iterative update algorithm: the ratio, for each bit, is recalculated many times, with the probabilistic estimate (that each transmitted bit was a 1) being updated each time. The author eschewed a common way of calculating this ratio that uses the arctanh and tanh functions, since it was felt that these transcendental functions would not be easily calculated in hardware. However, the author's way of calculating these ratios (even with adaptations for the SBG8 channel) requires more multiplications. The author's implementation gave exceptional results, considering that the codes were short and the channel was not the ideal real-valued-output AWGN channel of the literature, but the author's hypothesized SBG8 channel.
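For reference, the common tanh-based calculation the author avoided is the check-node update of the sum-product algorithm; a minimal Matlab/Octave sketch, with illustrative LLR values:

    % Common tanh-rule check-node update (the approach the author avoided,
    % since atanh/tanh are costly in hardware). L_in holds the incoming
    % log-likelihood ratios from the other bits on one parity check.
    L_in  = [0.8; -1.2; 2.3];                 % illustrative LLRs
    L_out = 2 * atanh(prod(tanh(L_in / 2)));  % extrinsic LLR to the target bit

A common hardware-friendly alternative is the min-sum approximation, which replaces the product of tanh terms with a sign product and the minimum incoming magnitude.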

47 The first two examples are taken from Lin and Costello.

52 The results for the classical block codes were obtained using Matlab/Simulink models; Simulink already has the encoding/decoding blocks available. The simulation of the SBG8 channel was coded as an m-file in Matlab. Each codeword transmission required estimating the channel's noise variance from the counts of received levels. The iterative decoding methods (hard and soft bit flip, sum-product algorithm) were then separately coded and used.

53 The major research topics in LDPC coding theory concern 1.) finding codes with good distance properties, 2.) finding codes with lower-complexity encoding processes, 3.) finding codes whose graphs have higher girths, 4.) reducing the complexity of the sum-product algorithm.

54 References

