The Viterbi Decoding Algorithm


The start chart

Step 1

Step 2

Step 3

Step 4

Step 5

Step 6

Step 7

Step 8

Viterbi Algorithm Summary:
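The trellis search summarized above can be sketched in code. Below is a minimal hard-decision Viterbi decoder in Python, assuming the rate-1/2, constraint-length-3 encoder with generators g1 = (1 1 1) and g2 = (1 0 1) that serves as the running example in these slides; the function names and the two-bit tail flush are illustrative choices, not part of the original slides.

```python
def encode(bits):
    """Convolutionally encode a bit list; two zero tail bits flush the encoder."""
    s1 = s2 = 0                              # shift-register contents
    out = []
    for b in bits + [0, 0]:
        out += [b ^ s1 ^ s2, b ^ s2]         # g1 = 1+D+D^2, g2 = 1+D^2
        s1, s2 = b, s1
    return out

def viterbi_decode(received):
    """Minimum-Hamming-distance (maximum-likelihood) path through the trellis."""
    INF = float("inf")
    # state = (s1, s2) packed as 2 bits; one metric and one survivor per state
    metrics = {0: 0, 1: INF, 2: INF, 3: INF}
    paths = {s: [] for s in metrics}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = {s: INF for s in metrics}
        new_paths = {}
        for state in metrics:
            if metrics[state] == INF:
                continue
            s1, s2 = state >> 1, state & 1
            for b in (0, 1):
                o = [b ^ s1 ^ s2, b ^ s2]                    # branch output
                branch = (o[0] != r[0]) + (o[1] != r[1])     # Hamming distance
                nxt = (b << 1) | s1
                m = metrics[state] + branch
                if m < new_metrics[nxt]:                     # keep survivor only
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0][:-2]   # encoder was flushed to state 0; strip the tail bits
```

Since this code has free distance 5, the decoder recovers the message even when the channel flips up to two of the transmitted symbols.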

Distance Properties of Convolutional Codes

Fact: Convolutional codes are linear, so the all-zero codeword is a codeword of any convolutional code. There is therefore no loss of generality in finding the minimum distance between the all-zero codeword and every other codeword.

Hamming distance from the all-zero codeword

Def: The minimum free distance dfree is the minimum Hamming distance between the all-zero path and any other path through the trellis that diverges from it and later remerges with it.

From the theory of block codes, we know that a code with minimum distance dmin can correct all patterns of t or fewer errors, where t = ⌊(dmin − 1)/2⌋ and ⌊y⌋ means the largest integer no greater than y.

In the convolutional code, however, dmin is replaced by dfree, which is called the (minimum) free distance, so that the error-correcting capability of a convolutional code is t = ⌊(dfree − 1)/2⌋.
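The free distance can be found mechanically by a shortest-path search over trellis paths that leave the all-zero state and first return to it. A minimal sketch in Python, assuming the g1 = (1 1 1), g2 = (1 0 1) encoder used as the example in these slides (the helper names are illustrative):

```python
def branch(state, b):
    """Next state and output Hamming weight for input bit b."""
    s1, s2 = state >> 1, state & 1
    weight = (b ^ s1 ^ s2) + (b ^ s2)    # g1 = 1+D+D^2, g2 = 1+D^2
    return (b << 1) | s1, weight

def free_distance(max_steps=20):
    state, w = branch(0, 1)              # diverge from the all-zero path
    frontier = {state: w}
    best = float("inf")
    for _ in range(max_steps):
        nxt = {}
        for state, w in frontier.items():
            for b in (0, 1):
                ns, dw = branch(state, b)
                if ns == 0:
                    best = min(best, w + dw)     # remerged: candidate d_free
                elif w + dw < nxt.get(ns, float("inf")):
                    nxt[ns] = w + dw             # keep the lightest path per state
        frontier = nxt
    return best

dfree = free_distance()
t = (dfree - 1) // 2     # guaranteed error-correcting capability
print(dfree, t)          # → 5 2
```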

Transfer Functions of Convolutional Codes. To assess the error-correcting capability of a code in an average statistical sense, we need to know not only the minimum free distance but all of the distance properties of the convolutional code under consideration. The transfer function of a convolutional code captures all of these distance properties.

How do we construct the transfer function? Redraw the state diagram of the convolutional encoder with each branch word replaced by D^j, where j is the Hamming weight of that branch's output sequence.

The state diagram of the figure is split open at the node a = (00), so that every path of interest starts at a and ends at its copy e.

The state equation at each node (states a = 00, b = 10, c = 01, d = 11, with node a split into a start node a and an end node e):

Xb = D^2 Xa + Xc
Xc = D Xb + D Xd
Xd = D Xb + D Xd
Xe = D^2 Xc

The generating function, or transfer function, denoted T(D), is defined as T(D) = Xe/Xa. By solving the simultaneous equations, we obtain the result

T(D) = D^5 / (1 − 2D) = D^5 + 2D^6 + 4D^7 + …

There is one path of weight (distance) 5 from a to e, and there are two paths of weight (distance) 6 from a to e.

Result analysis: The first term (the term with the lowest exponent) gives the minimum free distance df = 5 of the convolutional code.

The modified state-transfer diagram: the state diagram with each branch labeled with its Hamming weight D^j, its length L, and a factor N on branches caused by an input data bit 1.

The modified state equations:

Xb = D^2 L N Xa + L N Xc
Xc = D L Xb + D L Xd
Xd = D L N Xb + D L N Xd
Xe = D^2 L Xc

The transfer function T(D, L, N) in this case is found to be

T(D, L, N) = D^5 L^3 N / (1 − D L (1 + L) N) = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + …

There is one path of free distance 5; it differs in one input bit from the all-zeros path and has length 3. There are two paths of distance 6, one of length 4 and the other of length 5; both differ in two input bits from the all-zeros path. If we are interested in the jth node level, we consider all terms up to and including the jth power (L^j) term.
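These path counts can be checked by enumerating trellis paths directly, tallying each remerging path by its (Hamming weight d, length L, number of input 1s N) exactly as the terms of T(D, L, N) do. A sketch in Python, again assuming the g1 = (1 1 1), g2 = (1 0 1) encoder of these slides:

```python
from collections import Counter

def branch(state, b):
    """Next state and output Hamming weight for input bit b."""
    s1, s2 = state >> 1, state & 1
    return (b << 1) | s1, (b ^ s1 ^ s2) + (b ^ s2)

def path_spectrum(max_len=8):
    """Count paths that diverge from state 00 and remerge, keyed by (d, L, N)."""
    spectrum = Counter()
    ns, w = branch(0, 1)                 # diverge with input bit 1
    frontier = [(ns, w, 1, 1)]           # (state, weight d, length L, input 1s N)
    for _ in range(max_len - 1):
        nxt = []
        for state, d, l, n in frontier:
            for b in (0, 1):
                ns, dw = branch(state, b)
                if ns == 0:
                    spectrum[(d + dw, l + 1, n + b)] += 1   # remerged path
                else:
                    nxt.append((ns, d + dw, l + 1, n + b))
        frontier = nxt
    return spectrum

spec = path_spectrum()
print(spec[(5, 3, 1)])                   # → 1: one path, d=5, length 3, one input 1
print(spec[(6, 4, 2)], spec[(6, 5, 2)])  # → 1 1: the two distance-6 paths
```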

Performance Bounds for Viterbi Decoding of Convolutional Codes

The transfer function T(D,L,N)

The First-Event Error Probability. Def: the first-event error probability is defined as the probability that the correct path is rejected (is not a survivor) for the first time at time t = tj.

The error probability for df = 5. The situation is equivalent to a binary communication system in which m0 and m1 are two equally likely messages, communicated using the codewords C0 = 0 0 0 0 0 0 and C1 = 1 1 1 0 1 1, respectively. The distance between C0 and C1 is 5, so the error-correcting capability of this two-codeword system is t = ⌊(5 − 1)/2⌋ = 2.

That is, as long as the channel induces one or two errors, the all-zeros sequence will be favored. If, however, three or more errors occur in the channel, the correct all-zeros sequence will be rejected and the sequence 1 1 1 0 1 1 will be favored. Let us consider a BSC with p as the symbol crossover probability. Then the probability of error against the path sequence at free distance 5, denoted P2(e; 5), is given by the binomial distribution

P2(e; 5) = Σ_{k=3}^{5} (5 choose k) p^k (1 − p)^(5−k)

where p is the symbol error probability of the BSC, which depends on the modulation used for transmission of the channel symbols. The subscript 2 indicates a two-codeword system that pertains to the convolutional decoder decision at state a for the first time.

Consider the general case: Suppose that the path being compared with the all-zeros path at some node j at time t = tj has distance d from the all-zeros path.

If d is odd, the all-zeros path will be correctly chosen if the number of errors in the received sequence is less than (d + 1)/2; otherwise, the incorrect path is chosen. Thus, the probability of selecting the incorrect path is

P2(e; d) = Σ_{k=(d+1)/2}^{d} (d choose k) p^k (1 − p)^(d−k)

If d is even, the incorrect path is selected when the number of errors exceeds d/2, that is, when k ≥ d/2 + 1. If the number of errors equals d/2, the two paths tie, and the decoder flips a fair coin to select one of them. Thus, in this situation, we have

P2(e; d) = Σ_{k=d/2+1}^{d} (d choose k) p^k (1 − p)^(d−k) + (1/2) (d choose d/2) p^(d/2) (1 − p)^(d/2)

In either case, this P2(e; d) is the probability of incorrectly selecting a path at Hamming distance d from the all-zeros path.
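The odd and even cases can be combined into one small function, since for any d more than d/2 errors always cause the wrong choice. A sketch in Python (the function name is illustrative):

```python
from math import comb

def pairwise_error_prob(d, p):
    """P2(e; d) on a BSC with crossover probability p, covering odd and even d."""
    # more than d/2 channel errors always select the incorrect path
    err = sum(comb(d, k) * p**k * (1 - p)**(d - k)
              for k in range(d // 2 + 1, d + 1))
    if d % 2 == 0:
        # a tie at exactly d/2 errors is broken by a fair coin flip
        err += 0.5 * comb(d, d // 2) * (p * (1 - p))**(d // 2)
    return err
```

At p = 1/2 the channel carries no information, and indeed P2(e; d) = 1/2 for every d, which is a quick sanity check on both branches.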

We have seen from the transfer function T(D) given in series form that there are many paths with different distances that merge with the all-zeros path at a given node, indicating thereby that there is no simple exact expression for the first-event error probability. Thus, we overbound the error probability using the union bound, which is the sum of the pairwise error probabilities P2(e; k) over all possible paths that merge with the all zeros path at the given node.

P(e) ≤ Σ_{k=df}^{∞} ak P2(e; k)

where ak is the number of nonzero paths at Hamming distance k from the all-zeros path that merge with the all-zeros path at node j, having diverged from it once at some previous node.

If there are ak paths with weight k, and if we again use P2(e; k) to denote the pairwise error probability of a path with weight k, the union bound on the probability of decoding error is given by

P(e) ≤ Σ_{k=df}^{∞} ak P2(e; k)

For the encoder characterized by the connection vectors g1 = (1 1 1) and g2 = (1 0 1), we had T(D) = D^5 / (1 − 2D), so ak = 2^(k−5) for k ≥ 5.

Viterbi has also shown that the expression given earlier (P60) for the probability of incorrectly selecting a path with Hamming distance d can be bounded by

P2(e; d) < [2 √(p(1 − p))]^d

where p is the symbol crossover probability (symbol error probability) of the BSC.
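This bound is easy to check numerically against the exact BSC pairwise error probability from the previous slides. A short Python sketch (function names are illustrative):

```python
from math import comb, sqrt

def p2_exact(d, p):
    """Exact P2(e; d) on a BSC, with the fair-coin tie term for even d."""
    err = sum(comb(d, k) * p**k * (1 - p)**(d - k)
              for k in range(d // 2 + 1, d + 1))
    if d % 2 == 0:
        err += 0.5 * comb(d, d // 2) * (p * (1 - p))**(d // 2)
    return err

def p2_bound(d, p):
    """Viterbi's bound [2*sqrt(p*(1-p))]**d."""
    return (2 * sqrt(p * (1 - p)))**d

p = 0.01
for d in (5, 6, 7):
    print(p2_exact(d, p) < p2_bound(d, p))   # the bound holds for each d
```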

Bit-Error Probability for the BSC Channel for Viterbi Decoding of Convolutional Codes

In the series expansion of the transfer function T(D, N), the exponents of N indicate the number of nonzero information bits that are in error when an incorrect path is selected over the all-zeros path. Consider the transfer function given earlier with L = 1: T(D, N) = D^5 N / (1 − 2DN).

By differentiating T(D, N) with respect to N and setting N = 1, the exponents of N become multiplication factors of the corresponding P2(e; d). From this expression we obtain a bound on the bit-error probability PB(e), in the same way as for the first-event error probability:

PB(e) ≤ Σ_{d=df}^{∞} βd P2(e; d)

where βd is the coefficient of D^d in dT(D, N)/dN evaluated at N = 1.
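For this encoder the differentiation has a closed form: dT(D, N)/dN at N = 1 equals D^5 / (1 − 2D)^2, so substituting Viterbi's bound D = 2√(p(1 − p)) gives closed-form first-event and bit-error bounds. A sketch in Python, assuming the g1 = (1 1 1), g2 = (1 0 1) code of these slides (valid only while 2D < 1, i.e. for small enough p):

```python
from math import sqrt

def first_event_bound(p):
    # P(e) < T(D) at D = 2*sqrt(p*(1-p)), with T(D) = D^5 / (1 - 2D)
    D = 2 * sqrt(p * (1 - p))
    assert 2 * D < 1, "bound diverges at this crossover probability"
    return D**5 / (1 - 2 * D)

def bit_error_bound(p):
    # PB(e) < dT(D, N)/dN |_{N=1} = D^5 / (1 - 2D)^2 at the same D
    D = 2 * sqrt(p * (1 - p))
    assert 2 * D < 1, "bound diverges at this crossover probability"
    return D**5 / (1 - 2 * D)**2
```

The bit-error bound always exceeds the first-event bound, since each erroneous path event contributes at least one information-bit error.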

Probability of Error for Convolutional Soft-Decision Decoding. a: the system model.

b: the branch metrics and path metrics. Let rjm denote the matched-filter output for the mth bit of the jth branch, cjm the transmitted bit in that position, njm the corresponding additive noise, and Ec the transmitted signal energy per code symbol. The branch metric for the jth branch of the ith path is formed by correlating the matched-filter outputs rjm with the code bits of that branch.

Here the code bits are those of the transmitted sequence for the jth branch of the ith path. For soft-decision decoding, terms common to all branches can be neglected, leaving a correlation metric.

c: the pairwise error probability at node B. The all-zero sequence (path) is assumed to be transmitted.

Defining the SNR per code symbol as γc = Ec/N0, the pairwise error probability for an incorrect path at distance d from the all-zero path evaluates, for antipodal signaling, to P2(d) = Q(√(2dγc)).

d: the first-event error probability.

Pe ≤ Σ_{d=dfree}^{∞} ad P2(d)

where ad denotes the number of paths at distance d from the all-zero path.

e: the bit-error probability. As with the distance properties obtained from the transfer function, the exponents of the factor N in T(D, N) indicate the number of information-bit errors (the number of 1s in the input) incurred in selecting an incorrect path. Thus

Pb ≤ Σ_{d=dfree}^{∞} βd P2(d)

where βd are the coefficients of dT(D, N)/dN evaluated at N = 1.
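For this encoder βd = (d − 4) 2^(d−5), so the soft-decision bit-error bound can be evaluated by truncating the series. A sketch in Python, assuming antipodal signaling on an AWGN channel with P2(d) = Q(√(2dγc)), where γc = Ec/N0 is the SNR per coded symbol (an assumed system model, and the function names are illustrative):

```python
from math import sqrt, erfc

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * erfc(x / sqrt(2))

def soft_bit_error_bound(gamma_c, terms=40):
    # beta_d = (d - 4) * 2**(d - 5): coefficients of dT(D, N)/dN at N = 1
    # for T(D, N) = D^5 N / (1 - 2DN); the series converges for moderate SNR.
    return sum((d - 4) * 2**(d - 5) * Q(sqrt(2 * d * gamma_c))
               for d in range(5, 5 + terms))
```

As expected, the bound decreases monotonically as the per-symbol SNR grows.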

Probability of Error for Hard-Decision Decoding. a: the system model. On a BSC, the maximum-likelihood criterion reduces to the minimum-Hamming-distance criterion.

b: the pairwise error probability. The all-zero path is assumed to be transmitted. If d is odd, the all-zero path will be correctly selected if the number of errors in the received sequence is less than (d + 1)/2; otherwise, the incorrect path is selected.

If d is even, the tie at exactly d/2 errors is again broken with a fair coin; here p is the probability of bit error on the BSC channel. e: the first-event error probability at node B, bounded by the union bound over all distances d, where ad denotes the number of paths in the set at distance d from the all-zero path.

f: an upper bound