Slide 1 of 15 (ECED4504): Channel Coding (III) - Channel Decoding

Slide 2 of 15: Topics today
- Viterbi decoding: trellis diagram, surviving path, ending the decoding
- Soft and hard decoding

Slide 3 of 15: State diagram of a convolutional code
- Each new block of k bits causes a transition into a new state.
- Hence there are 2^k branches leaving each state.
- Assuming the encoder starts in the all-zero state, the encoded word for any k input bits can be read from the state diagram. For instance, for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced. Verify that you get the same result! (An encoder sketch follows below.)
(Figure: encoder state diagram for an (n,k,L) = (2,1,2) coder, with input and output states labeled.)
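The encoder diagram itself did not survive extraction, so the sketch below assumes the classic rate-1/2, memory-2 convolutional encoder with generator polynomials (7, 5) in octal; if the course encoder uses different generators, the output pairs will differ from the v given on the slide.

```python
# Minimal sketch of a rate-1/2 convolutional encoder.
# ASSUMPTION: generators g1 = 111 (7 octal) and g2 = 101 (5 octal);
# the slide's encoder diagram was lost in extraction.

def conv_encode(bits, flush=True):
    s1 = s2 = 0                      # shift-register contents (the state)
    out = []
    if flush:
        bits = list(bits) + [0, 0]   # two zeros drive the encoder back to state 00
    for u in bits:
        v1 = u ^ s1 ^ s2             # generator g1 = (1 1 1)
        v2 = u ^ s2                  # generator g2 = (1 0 1)
        out.append((v1, v2))
        s1, s2 = u, s1               # shift the register
    return out

print(conv_encode([1, 1, 1, 0, 1]))  # one output pair per input bit, flush included
```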

Slide 4 of 15: Decoding convolutional codes
- Maximum likelihood decoding of convolutional codes means finding the branch sequence in the code trellis that was most likely transmitted.
- Maximum likelihood decoding is therefore based on calculating the Hamming distance of each branch forming the encoded word.
- Assume that the information symbols applied to the AWGN channel are equally likely and independent.
- Denote by x the transmitted message bits (error-free, known only at the sender) and by y the bits decided at the decoder.
- The probability of receiving the sequence y given that x was transmitted, and the equivalent logarithmic metric that alleviates the computations (each factor is a probability < 1), are reconstructed below.
- The most likely path through the trellis maximizes this metric.
(Figure: transmission chain with the decoder's received bits, the error-free bits, and the bit decisions.)
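The probability expressions on this slide were lost in extraction. For a memoryless channel, the metric the slide almost certainly showed is the product form and its additive logarithm:

$$
P(\mathbf{y} \mid \mathbf{x}) = \prod_{j} P(y_j \mid x_j),
\qquad
\ln P(\mathbf{y} \mid \mathbf{x}) = \sum_{j} \ln P(y_j \mid x_j)
$$

Since every factor is a probability smaller than one, taking logarithms turns the product into a sum without changing which path wins. On a binary symmetric channel with crossover probability p, a path at Hamming distance d from the received word of n bits has likelihood $p^{d}(1-p)^{n-d}$, so maximizing the likelihood is the same as minimizing the Hamming distance.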

Slide 5 of 15: Example of exhaustive maximum likelihood detection
Assume a 3-bit message is to be transmitted. To clear the encoder, 2 zero bits are appended after the message. Thus 5 bits are fed into the encoder and 10 bits are produced (each input bit causes a state transition and emits 2 output bits). Assume the channel error probability is p = 0.1 and that 10, 01, 10, 11, 00 is received. What is the decoder's decision, i.e. what was most likely the transmitted sequence? (A sketch of the search appears after slide 7.)

Slide 6 of 15: [Table lost in extraction: each candidate codeword is weighted by p^(bits in error) * (1-p)^(bits correct), with channel error probability p = 0.1.]

Slide 7 of 15: Note also the Hamming distances! The path with the largest metric wins; verify that you get the same result!
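As a concrete illustration, here is a minimal sketch of the exhaustive search. It reuses the assumed (7, 5) generators from the encoder sketch above; because those generators are an assumption, the winning message may differ from the one tabulated in the original deck.

```python
# Exhaustive ML detection sketch for the slide-5 example: 3 message bits,
# 2 flush zeros, received word 10 01 10 11 00, BSC with p = 0.1.
# ASSUMPTION: generators (7, 5), as in the encoder sketch.
from itertools import product

p = 0.1
received = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]

def encode(msg):
    """Rate-1/2 encoding with assumed generators (7, 5) and two flush zeros."""
    s1 = s2 = 0
    out = []
    for u in list(msg) + [0, 0]:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def likelihood(codeword):
    d = sum(r != c for r, c in zip(received, codeword))  # Hamming distance
    return p ** d * (1 - p) ** (len(received) - d)        # weight from slide 6

# Score all 2^3 candidate messages and keep the most likely one.
best = max(product([0, 1], repeat=3), key=lambda m: likelihood(encode(m)))
print("most likely message:", best)
```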

Slide 8 of 15: Soft and hard decoding
- Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability P(y|x).
- However, in soft decoding the decision-region energies must be accounted for, and hence the Euclidean metric d_E rather than the Hamming metric is used.
(Figure: the transition for Pr[3|0] is indicated by the arrow.)

Slide 9 of 15: Decision regions
- Decoding can be realized on the soft-decision or the hard-decision principle.
- For soft decoding, the reliability (measured by bit energy) of each decision region must be known.
- Example: decoding a BPSK signal. The matched-filter output is a continuous number, and in AWGN it is Gaussian.
- For soft decoding, several decision-region partitions are used; a sketch of the region probabilities follows below.
(Figure: transition probability Pr[3|0], i.e. the probability that a transmitted '0' falls into region no. 3.)
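The slide's partition figure is gone, so the 8-level quantizer thresholds and noise parameters below are illustrative assumptions, not the course's values; the sketch only shows how transition probabilities such as Pr[3|0] follow from the Gaussian matched-filter output.

```python
# Sketch of soft-decision region probabilities for BPSK in AWGN.
# ASSUMPTION: the thresholds, mean, and sigma below are hypothetical.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

mean, sigma = -1.0, 0.5          # matched-filter output statistics for a sent '0'
thresholds = [-math.inf, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, math.inf]

# Pr[region j | '0' sent] = Q((t_j - mean)/sigma) - Q((t_{j+1} - mean)/sigma)
for j in range(len(thresholds) - 1):
    lo = q_func((thresholds[j] - mean) / sigma)
    hi = q_func((thresholds[j + 1] - mean) / sigma)
    print(f"Pr[{j}|0] = {lo - hi:.4f}")
```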

Slide 10 of 15: The Viterbi algorithm
- An exhaustive maximum likelihood search for an (n,k,L) code must examine every path through the trellis, and the number of paths grows exponentially with the message length.
- The Viterbi algorithm reduces the search to comparing survivor paths, where 2^L is the number of trellis nodes and 2^k is the number of branches entering each node (see the next slide!).
- The optimum decoding problem is to find the minimum-distance path from the initial state back to the initial state (below, from S_0 to S_0). The path metric is the sum of the branch metrics along the path; the correct path minimizes the accumulated distance (equivalently, maximizes the likelihood metric).
- The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis.
(Figure: channel output sequence at the RX; TX encoder output sequence for the m-th path.)

Slide 11 of 15: The survivor path
- Assume for simplicity a convolutional code with k = 1, so that up to 2^k = 2 branches can enter each node in the trellis diagram.
- Assume the optimal path passes through node S. The metric comparison is done by adding the metric of S to those of S1 and S2. On the survivor path the accumulated metric is necessarily smaller (otherwise it could not be the optimum path).
- For this reason the non-surviving path can be discarded, so not all path alternatives need to be considered.
- Note that in principle the whole transmitted sequence must be received before a decision can be made. In practice, however, storing the states for an input length of 5L is quite adequate.

Slide 12 of 15: Example of using the Viterbi algorithm
Assume the received sequence is [not recovered from the slide] and the (n,k,L) = (2,1,2) encoder shown below. Determine the Viterbi-decoded output sequence! (Note that for this encoder the code rate is 1/2 and the memory depth is L = 2. A decoder sketch follows slide 13.)

Slide 13 of 15: The maximum likelihood path
The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the corresponding decoded message is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path. (In the trellis: black circles denote the deleted branches; dashed lines indicate that a '1' was applied; the smaller accumulated metric is selected at the first depth where two branches enter a node; after the register length L + 1 = 3 the branch pattern begins to repeat; branch Hamming distances are shown in parentheses.)
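A minimal hard-decision Viterbi decoder sketch follows. It again assumes the (7, 5) generators used in the encoder sketch above, and since the example's received sequence was lost in extraction, the input below is only a stand-in (the slide's decoded ML codeword); substitute the real received word and generators to reproduce the slide's trellis.

```python
# Hard-decision Viterbi decoder sketch for a rate-1/2, memory-2 code.
# ASSUMPTION: generators (7, 5); the received word is a placeholder.

def viterbi_decode(received_pairs):
    n_states = 4                                  # 2^L states for L = 2
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)         # start in state 00
    paths = [[] for _ in range(n_states)]
    for r in received_pairs:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metric[state] == INF:              # state not yet reachable
                continue
            s1, s2 = state >> 1, state & 1        # shift-register contents
            for u in (0, 1):                      # the two branches leaving state
                v = (u ^ s1 ^ s2, u ^ s2)         # branch output, generators (7, 5)
                branch = (v[0] != r[0]) + (v[1] != r[1])  # branch Hamming distance
                nxt = (u << 1) | s1               # next state after shifting in u
                m = metric[state] + branch
                if m < new_metric[nxt]:           # add-compare-select: keep survivor
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [u]
        metric, paths = new_metric, new_paths
    # The encoder was flushed with zeros, so decoding ends in state 00;
    # the last L = 2 decoded bits are therefore the flush zeros.
    return paths[0], metric[0]

rx = [(1,1), (1,0), (1,0), (1,1), (0,0), (0,0), (0,0)]   # placeholder input
bits, dist = viterbi_decode(rx)
print("decoded bits:", bits, "accumulated Hamming distance:", dist)
```

In practice, as slide 14 explains, the decoder would not wait for the end of the sequence but truncate the path memory at a depth of about 5L.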

Slide 14 of 15: How to end the decoding?
- In the previous example it was assumed that the register was finally filled with zeros, which guarantees finding the minimum-distance path.
- In practice, with long code words, zeroing requires feeding a long sequence of zeros after the message bits, which wastes channel capacity and introduces delay.
- To avoid this, path memory truncation is applied:
  - Trace all the surviving paths back to the depth where they merge.
  - The figure on the right shows a common point at a memory depth J.
  - J is a random variable; the depth shown in the figure (5L) has been experimentally found to give a negligible error-rate increase.
  - Note that truncation also introduces a delay of 5L!

Slide 15 of 15: Error rate of convolutional codes: weight spectrum and error-event probability
- The error rate depends on:
  - the channel SNR,
  - the input sequence length (the number of errors is scaled to the sequence length),
  - the code trellis topology.
- These determine which path in the trellis is followed while decoding.
- An error event happens when the decoder follows an erroneous path.
- All the error-producing paths have a distance of at least d_free, so there exists an upper bound on the error-event probability obtained by summing over all the erroneous paths. The bound, involving the number of paths at Hamming distance d (the weight spectrum) and the probability of a path at distance d, is reconstructed below.
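The bound itself was lost in extraction; the labels that survived ("number of paths at the Hamming distance d" and "probability of the path at the Hamming distance d") describe the standard union bound, which is in all likelihood what the slide showed:

$$
P_E \le \sum_{d = d_{\text{free}}}^{\infty} a_d \, P_2(d)
$$

where $a_d$ is the weight spectrum, i.e. the number of paths at Hamming distance d from the correct path, and $P_2(d)$ is the pairwise probability that the decoder prefers a path at distance d over the correct one.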

