
1 Exercise in the previous class
Consider the following code C:
- determine the weight distribution of C
- compute the "three" probabilities (p. 7), and draw a graph
We want to construct a cyclic code with n = 7, k = 4, and m = 3:
- confirm that G(x) = x^3 + x^2 + 1 can be a generator polynomial
- encode 0110
- decide whether 0001100 is a correct codeword or not
Answers: http://apal.naist.jp/~kaji/lecture/

2 Today's class
- soft-decision decoding: make use of "more information" from the channel
- convolutional codes: good for soft-decision decoding
- Shannon's channel coding theorem
- error-correcting codes of the next generation

3 The channel and modulation
We have considered "digital channels", but at the physical layer almost all channels are continuous (analogue).
digital channel = modulator + continuous channel + demodulator
A naive demodulator translates the waveform to 0 or 1, with the risk of possible errors.

4 A "more informative" demodulator
From the viewpoint of error correction, the waveform contains more information than the binary output of the demodulator. Demodulators with multi-level output (e.g. definitely 0, maybe 0, maybe 1, definitely 1) can help error correction.
To make use of such a multi-level demodulator, the decoding algorithm must be able to handle multi-level inputs.

5 Hard-decision vs. soft-decision
hard-decision decoding:
- the input to the decoder is binary (0 or 1)
- the decoding algorithms discussed so far are of the hard-decision type
soft-decision decoding:
- the input to the decoder can have three or more levels
- the "check matrix and syndrome" approach does not work
- more powerful, BUT more complicated

6 Formalization of soft-decision decoding
Outputs of the demodulator: 0+ (definitely 0), 0- (maybe 0), 1- (maybe 1), 1+ (definitely 1).
Code C = {00000, 01011, 10101, 11110}.
For a received vector 0- 0+ 1+ 0- 1-, find the codeword which minimizes the penalty.

received symbol:   0+   0-   1-   1+
penalty of "0":     0    1    2    3
penalty of "1":     3    2    1    0
(hard-decision ... penalty = Hamming distance)

For r = 0- 0+ 1+ 0- 1-:
c0 = 00000: penalty 1 + 0 + 3 + 1 + 2 = 7
c2 = 10101: penalty 2 + 0 + 0 + 1 + 1 = 4
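
The penalty minimization above can be checked directly for this small code. The following is a minimal Python sketch: the penalty table and the code C are taken from the slide, and the decoder simply scores every codeword and keeps the cheapest one (exhaustive search, which is fine here because |C| = 4).

```python
# Soft-decision decoding by exhaustive penalty minimization (tiny example).
# Penalty table from the slide: received symbol -> {transmitted bit: penalty}.
PENALTY = {
    "0+": {"0": 0, "1": 3},
    "0-": {"0": 1, "1": 2},
    "1-": {"0": 2, "1": 1},
    "1+": {"0": 3, "1": 0},
}

CODE = ["00000", "01011", "10101", "11110"]

def penalty(codeword, received):
    """Total penalty of a codeword against a multi-level received vector."""
    return sum(PENALTY[r][c] for c, r in zip(codeword, received))

received = ["0-", "0+", "1+", "0-", "1-"]
for c in CODE:
    print(c, penalty(c, received))
# 00000 has penalty 7 and 10101 has penalty 4 (as on the slide);
# 10101 attains the minimum, so the soft-decision decoder outputs 10101.
```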

7 Algorithms for soft-decision decoding
We have just formalized the problem... how can we solve it?
- by exhaustive search? ... not practical for codes with many codewords
- by matrix operations? ... yet another formalization which is difficult to solve
- by approximation? ... yes, this is one practical approach
Anyway... design special codes whose soft-decision decoding is not "too difficult" → convolutional codes

8 Convolutional codes
The codes we have studied so far are block codes:
- a block of k data bits is encoded to a codeword of length n
- the encoding is done independently from block to block
convolutional codes:
- encoding is done in a bit-by-bit manner
- previous inputs are stored in shift registers in the encoder and affect future encoding
(encoder diagram: input data → shift registers + combinatorial logic → encoder outputs)

9 Encoding of a convolutional code
- at the beginning, the contents of the registers are all 0
- when a data bit is given, the encoder outputs several bits, and the contents of the registers are shifted by one bit
- after encoding, give 0's until all registers hold 0
Encoder example with registers r3, r2, r1: constraint length = 3 (= # of registers); the output is constrained by the three previous input bits.

10 Encoding example (1)
To encode 1101...
registers 000, input 1 → output 11
registers 001, input 1 → output 10
registers 011, input 0 → output 01
registers 110, input 1 → output 10
Give additional 0's to push out the 1's remaining in the registers...

11 Encoding example (2)
registers 101, input 0 → output 00
registers 010, input 0 → output 00
registers 100, input 0 → output 01
The output is 11 10 01 10 00 00 01.
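
The register trace above can be reproduced with a few lines of Python. This is only a sketch: the slides do not spell out the tap connections, so the wiring below (first output bit = the input itself, second output bit = input XOR r1 XOR r3) is an assumption that happens to reproduce the output 11 10 01 10 00 00 01 for the data 1101; the wiring in the lecture's figure may differ.

```python
def conv_encode(data_bits):
    """Rate-1/2 convolutional encoder with three registers (constraint length 3).

    Assumed wiring (reproduces the slides' example, but is only a guess):
      output bit 1 = u
      output bit 2 = u XOR r1 XOR r3
    After each input the registers shift: (r3, r2, r1) <- (r2, r1, u).
    """
    r3 = r2 = r1 = 0
    out = []
    for u in data_bits + [0, 0, 0]:          # append 0's to flush the registers
        out.append((u, u ^ r1 ^ r3))
        r3, r2, r1 = r2, r1, u
    return out

print(conv_encode([1, 1, 0, 1]))
# [(1, 1), (1, 0), (0, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
#  -> 11 10 01 10 00 00 01, matching the example on the slide
```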

12 Encoder as a finite-state machine
Constraint length k → the encoder has 2^k internal states.
(state-transition diagram: states s0, s1, s2, s3 with edges labelled input / output)
The encoder is a finite-state machine which
- starts in the initial state s0
- makes one transition for each input bit
- returns to s0 after all data bits are provided
Internal state = (r2, r1): s0 = (0, 0), s1 = (0, 1), s2 = (1, 0), s3 = (1, 1).

13 At the receiver's end
The receiver knows...
- the definition of the encoder (finite-state machine)
- that the encoder starts and ends at the state s0
- the transmitted sequence, which may have been corrupted by errors
(figure: the encoder sends 01001..., the receiver sees 01100... because of errors)
To correct errors = to estimate the "real" transition of the encoder... estimation on a hidden Markov model (HMM).

14 Trellis diagram
A trellis diagram is obtained by expanding the transitions of the encoder along the time axis.
- possible encoding sequences (of length 5) = the set of paths connecting s_{0,0} and s_{5,0}
- the transmitted sequence = the path which is most likely given the received sequence = the path with the minimum penalty
(figure: the state diagram with states s0, s1, s2 expanded into a trellis over time 0, 1, ..., 5)

15 Viterbi algorithm
Given a received sequence...
- the demodulator defines penalties for the symbols at each position
- the penalties are assigned to the edges of the trellis diagram
- find the path with the minimum penalty using a good algorithm
Viterbi algorithm: a Dijkstra-like algorithm for HMMs; a recursive breadth-first search. (Andrew Viterbi, 1935-)
If a state can be reached from two predecessors with accumulated penalties p_A and p_B over edges with penalties q_A and q_B, the minimum penalty of this state is min(p_A + q_A, p_B + q_B).
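
The minimum-penalty search can be written as dynamic programming over (time, register contents). The sketch below reuses the assumed wiring from the encoder sketch on p. 11 and the four-level penalty table from p. 6; it illustrates the survivor-per-state idea of the Viterbi algorithm, not the exact example in the lecture's figure.

```python
# (cost of sending 0, cost of sending 1) for each received symbol, as on p. 6
PENALTY = {"0+": (0, 3), "0-": (1, 2), "1-": (2, 1), "1+": (3, 0)}

def step(state, u):
    """One encoder transition: returns (output bits, next state); assumed wiring as before."""
    r3, r2, r1 = state
    return (u, u ^ r1 ^ r3), (r2, r1, u)

def viterbi(received, n_data):
    """Minimum-penalty path search.  received: two multi-level symbols per trellis section;
    n_data: number of data bits (three flushing 0's follow them)."""
    best = {(0, 0, 0): (0, [])}                  # state -> (accumulated penalty, input bits)
    for t in range(n_data + 3):
        sym = received[2 * t: 2 * t + 2]
        survivors = {}
        for state, (pen, bits) in best.items():
            for u in ((0, 1) if t < n_data else (0,)):   # only 0's during the flush
                out, nxt = step(state, u)
                p = pen + sum(PENALTY[s][b] for s, b in zip(sym, out))
                if nxt not in survivors or p < survivors[nxt][0]:
                    survivors[nxt] = (p, bits + [u])      # keep one survivor per state
        best = survivors
    pen, bits = best[(0, 0, 0)]                  # the path must end in the all-zero state
    return pen, bits[:n_data]                    # drop the three flushing bits

# The codeword 11 10 01 10 00 00 01 received with a few unreliable symbols:
rx = ["1+", "1-", "1+", "0-", "0+", "1-", "1+", "0+", "0-", "0+", "0+", "0+", "0-", "1-"]
print(viterbi(rx, 4))                            # (6, [1, 1, 0, 1]) -- the data is recovered
```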

16 Soft-decision decoding of convolutional codes
The complexity of the Viterbi algorithm ≈ the size of the trellis diagram.
- for convolutional codes (with constraint length k): trellis size ≈ 2^k × data length ... manageable; we can extract 100% of the code's performance
- for block codes: trellis size ≈ 2^(data length) ... too large; it is difficult to extract the full performance
block codes: good performance, but difficult to use fully
convolutional codes: moderate performance, but the full power is available

17 Summary of convolutional codes
advantages:
- the encoder is realized by a shift register (as with cyclic codes)
- soft-decision decoding is practically realizable
disadvantages:
- no good algorithm for constructing good codes
- code design = the wiring of the register outputs, designed in a trial-and-error manner with computer search

18 Channel capacity
Mutual information I(X; Y):
- X and Y: the input and the output of the channel
- the average information about X obtained from Y
- depends on the statistical behavior of the input X
channel capacity C = max I(X; Y), maximized over input distributions ... the maximum performance achievable by the channel

19 Computation of the channel capacity
The channel capacity of the BSC with bit error probability p:
if the input is X = {0, 1} with P(0) = q, P(1) = 1 - q, then
I(X; Y) = H(q(1 - p) + (1 - q)p) - H(p), where H(x) = -x log2 x - (1 - x) log2(1 - x).
The maximum of I(X; Y) is attained at q = 0.5, giving
C = 1 - H(p) = 1 + p log2 p + (1 - p) log2(1 - p).
(figure: C as a function of p, falling from C = 1 at p = 0 to C = 0 at p = 0.5)
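
The claim can be checked numerically. A small sketch using only the standard BSC formulas above (nothing beyond the slide is assumed):

```python
import math

def h2(x):
    """Binary entropy function H(x) in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mutual_information(q, p):
    """I(X; Y) for a BSC with error probability p and input distribution P(X=0) = q."""
    y0 = q * (1 - p) + (1 - q) * p       # P(Y = 0)
    return h2(y0) - h2(p)                # H(Y) - H(Y|X)

p = 0.1
best_q = max((i / 1000 for i in range(1001)), key=lambda q: mutual_information(q, p))
print(best_q)                            # 0.5: the capacity is attained by a uniform input
print(mutual_information(0.5, p))        # about 0.531 bits per channel use
print(1 - h2(p))                         # C = 1 - H(p), the same value
```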

20 Shannon's channel coding theorem
code rate R = log2(# of codewords) / (code length) (= k/n for an (n, k) binary linear code)
Shannon's channel coding theorem: consider a communication channel with capacity C;
- there exists a code with rate R ≤ C with which the error probability at the receiver goes to 0
- no such code exists if the code rate R > C
The theorem says that such a code exists "somewhere"... but how can we get there?

21 Investigation towards the Shannon limit, up to the mid 1990s
The "combine several codes" approach: concatenated codes
- apply multiple error-correcting codes sequentially
- typically Reed-Solomon + convolutional codes
(figure: outer code (RS code) encoder 1 → inner code (conv. code) encoder 2 → channel → decoder 2 → decoder 1)

22 Product code
Product code: a 2D code; component codes are applied to the rows and to the columns, giving a more powerful combined code.
(figure: data bits arranged in a 2D array, with parities of code 1 attached along one direction and parities of code 2 along the other; decoding of code 2 is followed by decoding of code 1)
Possible problem: in the decoding of code 1, the received information is not considered.
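
To make the 2D structure concrete, here is a tiny sketch of a product code that uses single parity-check codes for the rows and columns. The choice of single parity bits, the example data array, and which component code goes on rows versus columns are all assumptions made only to keep the sketch short; the lecture's example uses stronger component codes.

```python
def product_encode(data_rows):
    """Product-code encoding with single parity-check component codes.

    data_rows: a k1 x k2 array of data bits.  Each row gets a parity bit, and
    each column of the extended array gets a parity bit, so the codeword is a
    (k1 + 1) x (k2 + 1) array in which every row and column has even parity.
    """
    with_row_parity = [row + [sum(row) % 2] for row in data_rows]
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [col_parity]

code_array = product_encode([[0, 1, 0],
                             [1, 1, 0],
                             [1, 0, 0]])
for row in code_array:
    print(row)
# Every row and every column of the printed array has even parity.
# Decoding applies the two component decoders one after the other -- and,
# as the next slides discuss, it can be repeated iteratively.
```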

23 Idea for the breakthrough
Let the decoder see two inputs: the result of the previous stage + the received information.
Feed back the decoding result of code 1, and try to decode code 2 again.
Exchange the decoding results between the two decoders iteratively.
(same product-code figure as on the previous slide)

24 Iterative decoding
Idea: product code + iterative decoding.
(figure: received sequence → decoder for code 1 ⇄ decoder for code 2 → decoding result)
Each decoder is modified to have two soft-value inputs and one soft-value output (trellis-based maximum a-posteriori decoding).
The result of one decoder helps the other decoder → the more iterations, the more reliable the result.

25 Turbo code: encoding
Turbo code:
- add two sets of parities for one set of data bits
- use convolutional codes for simplicity of decoding
(figure: the data bits are encoded by code 1 directly, and by code 2 after passing through an interleaver (bit reordering); the coded sequence is [data bits | parity of code 1 | parity of code 2])
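
A sketch of the encoder structure described above. The component encoder below is a toy memory-2 recursive systematic convolutional encoder, and the interleaver is a fixed pseudo-random permutation; both are illustrative assumptions, not the generators or interleaver of an actual turbo code (and trellis termination is ignored).

```python
import random

def rsc_parity(bits):
    """Parity sequence of a toy memory-2 recursive systematic convolutional encoder
    (feedback 1 + D + D^2, feedforward 1 + D^2); for illustration only."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2           # feedback
        parity.append(a ^ s2)     # feedforward tap
        s1, s2 = a, s1
    return parity

def turbo_encode(data, interleaver):
    """Codeword = [data | parity of code 1 | parity of code 2 on the interleaved data]."""
    permuted = [data[i] for i in interleaver]
    return data + rsc_parity(data) + rsc_parity(permuted)

data = [1, 0, 1, 1, 0, 0, 1, 0]
interleaver = list(range(len(data)))
random.Random(0).shuffle(interleaver)     # a fixed pseudo-random bit reordering
print(turbo_encode(data, interleaver))    # rate about 1/3: n = 3k bits
```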

26 Turbo code: decoding
Each decoder sends its estimate to the other decoder.
Experiments show that...
- the code length must be sufficiently long (thousands of bits or more)
- the number of iterations can be small (≈ 10 to 20 iterations)
(figure: received sequence → decoder for code 1 ⇄ decoder for code 2 → decoding result, with an interleaver I and a de-interleaver I⁻¹ between the decoders)

27 Performance of Turbo codes
The performance is much better than that of other known codes.
There is some "saturation" of the performance improvement (the error floor).
C. Berrou et al.: Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes, ICC '93, pp. 1064-1070, 1993.

28 LDPC codes
The study of Turbo codes revealed that "length is power":
- don't bother too much about mathematical properties
- pursue long and easily decodable codes
LDPC code (Low Density Parity Check code):
- a linear block code with a very sparse check matrix: almost all components are 0, with only a small number of 1s
- discovered by Gallager in 1962, but forgotten for many years
- rediscovered by MacKay in 1999
(Robert Gallager, 1931-; David MacKay, 1967-)

29 Decoding of LDPC codes
An LDPC code is an ordinary linear block code, but its sparse check matrix allows the belief-propagation algorithm to work efficiently and effectively.

H = [ 1 1 1 0 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

In the Tanner graph of H, let p_i be the probability that bit i is 0 and q_i = 1 - p_i the probability that it is 1. For the check involving bits 1, 3, 4 and 6, for example,
p_3 = p_1 p_4 p_6 + q_1 q_4 p_6 + q_1 p_4 q_6 + p_1 q_4 q_6.
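
The formula above is the check-node computation of belief propagation: the probability that bit 3 is 0, given that bits 1, 3, 4 and 6 must have even parity, is the probability that an even number of the other bits are 1. A small sketch of this update (a generic even-parity computation, not tied to any particular H; the probability values are made up for the example):

```python
def prob_even_parity(p_others):
    """Probability that an even number of the given bits are 1,
    where p_others[i] is the probability that bit i is 0.

    Uses the identity  P(even) = 1/2 + 1/2 * prod(2*p_i - 1),
    which expands to the sum-of-products formula on the slide.
    """
    prod = 1.0
    for p in p_others:
        prod *= 2 * p - 1          # = p_i - q_i
    return 0.5 + 0.5 * prod

# Check-node update for p_3 in the check over bits 1, 3, 4, 6:
p1, p4, p6 = 0.9, 0.8, 0.6
q1, q4, q6 = 1 - p1, 1 - p4, 1 - p6
direct = p1 * p4 * p6 + q1 * q4 * p6 + q1 * p4 * q6 + p1 * q4 * q6
print(direct, prob_even_parity([p1, p4, p6]))   # both print 0.548
```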

30 LDPC vs. Turbo codes
performance:
- both codes show excellent performance
- the decoding complexities are "almost linear"
- LDPC codes show a milder error-floor phenomenon
realization: O(n) encoder for Turbo codes, O(n^2) encoder for LDPC codes
code design: LDPC codes offer more variety and more design strategies

31 Summary
- soft-decision decoding: retrieve more information from the channel
- convolutional codes: good for soft-decision decoding
- Shannon's channel coding theorem: what we can and cannot do
- error-correcting codes of the next generation

32 Exercise
Give a proof of the discussion on p. 19 (the channel capacity of the BSC).

