
1 Channel coding

2 Why channel coding? The challenge in a digital communication system is to provide a cost-effective facility for transmitting information at a rate and a level of reliability and quality that are acceptable to the user at the receiver. The two key system parameters are the transmitted signal power and the channel bandwidth. Together with the power spectral density of the receiver noise, these parameters determine the signal energy per bit to noise power spectral density ratio Eb/No.

3 Why channel coding? In practice, there is a limit on the value that we can assign to Eb/No, so it may be impossible to provide acceptable data quality (i.e., low enough error performance). For a fixed Eb/No, the only practical option available for improving data quality is to use ERROR-CONTROL CODING. The two main methods of error control are: Forward Error Correction (FEC) and Automatic Repeat reQuest (ARQ).

4 CHANNEL CODING Block Diagram

5 Forward Error Correction (FEC)
The key idea of FEC is to transmit enough redundant data to allow the receiver to recover from errors all by itself; no sender retransmission is required. The major categories of FEC codes are block codes, cyclic codes, Reed-Solomon codes, convolutional codes, turbo codes, etc.

6 Forward Error Correction (FEC)
FEC requires only a one-way link between the transmitter and receiver. In the use of error-control coding there are trade-offs between: efficiency and reliability; encoding/decoding complexity and bandwidth.

7 Automatic Repeat request (ARQ)
Upon the detection of an error in a transmitted code word, the receiver requests a repeat transmission of the corrupted code word (over a feedback channel). As such, ARQ can be used only on half-duplex or full-duplex links. In a half-duplex link, data transmission over the link can be made in either direction but not simultaneously. In a full-duplex link, on the other hand, data transmission can proceed over the link in both directions simultaneously.

8 ARQ

9 Types of ARQ Stop-and-Wait ARQ (SAW ARQ), Go-Back-N ARQ (GBN ARQ, also called pullback), and Selective-Repeat ARQ (SR ARQ).

10 SAW ARQ

11 Go-Back-N ARQ

12 Selective-Repeat ARQ

13 FEC versus ARQ
FEC: 1. Requires only a one-way link. 2. Increased decoding complexity; but the decoder usually lends itself to microprocessor or VLSI implementation in a cost-effective manner. 3. Used in many more applications than ARQ.
ARQ: 1. Requires a half-duplex or full-duplex link. 2. Simple decoder design. 3. Requires a (noiseless) feedback channel. 4. Widely used in computer communication systems.

14 Discrete-memoryless channels
The simplest discrete memoryless channel results from the use of binary inputs and outputs. The decoder has only binary inputs if binary quantization of the demodulator output is used; that is, a hard decision is made on the demodulator output as to which symbol was actually transmitted. In this situation we have the binary symmetric channel (BSC); the channel noise is modeled as AWGN. The majority of digital communication systems employ binary coding with hard-decision decoding, due to the simplicity of implementation offered by such an approach.
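As a minimal illustration of the hard-decision model, the sketch below simulates a BSC by flipping each transmitted bit independently with crossover probability p (the parameters are illustrative, not from the slides):

```python
import random

def bsc(bits, p, rng=random.Random(0)):
    """Binary symmetric channel: flip each bit independently
    with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = bsc(tx, p=0.1)
print(rx, "bit errors:", sum(t != r for t, r in zip(tx, rx)))
```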

15 Discrete-memoryless channels
The use of hard decisions prior to decoding causes an irreversible loss of information in the receiver. To reduce this loss we use soft-decision decoding; this is achieved by including a multilevel quantizer at the demodulator output. The modulator has only the binary symbols 0 and 1 as inputs, but the demodulator output now has an alphabet of Q symbols; such a channel is called a binary-input Q-ary-output discrete memoryless channel. The performance depends on the locations of the representation levels of the quantizer, which in turn depend on the signal level and noise variance. The use of soft decisions complicates the implementation of the decoder, but soft-decision decoding offers significant improvement in performance over hard-decision decoding by taking a probabilistic rather than algebraic approach.

16 Discrete-memoryless channels

17 Discrete-memoryless channels

18 Channel coding theorem
The channel coding theorem states that if a discrete memoryless channel has capacity C and the source generates information at a rate less than C, then there exists a coding technique such that the output of the source may be transmitted over the channel with an arbitrarily low probability of symbol error. For the special case of the BSC, the theorem tells us that it is possible to find a code that achieves error-free transmission over the channel. What matters is not the signal-to-noise ratio alone but how the channel input is encoded. The theorem asserts the existence of good codes but does not tell us how to find them. By good codes we mean families of channel codes that are capable of providing reliable (error-free) transmission of information over a noisy channel of interest at bit rates up to a maximum value less than the capacity of the channel.
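For the BSC with crossover probability p, the capacity in the theorem is C = 1 - H(p) bits per channel use, where H is the binary entropy function; a small sketch of my own, not from the slides:

```python
from math import log2

def bsc_capacity(p):
    """C = 1 - H(p) for a BSC with crossover probability p,
    where H is the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * log2(p) + (1 - p) * log2(1 - p)

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"p = {p}: C = {bsc_capacity(p):.4f} bits per channel use")
```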

19 Linear block codes A code is said to be linear if any two code words can be added in modulo-2 arithmetic to produce a third code word in the code. Consider then an (n, k) linear block code in which k bits of the n code bits are always identical to the message sequence to be transmitted; the remaining n-k bits are referred to as generalized parity-check bits, or simply parity bits. For applications requiring both error detection and error correction, the use of such systematic block codes simplifies implementation of the decoder. Let m0, m1, ..., mk-1 constitute a block of k arbitrary message bits; thus we have 2^k distinct message blocks. Let this sequence of message bits be applied to a linear block encoder, producing an n-bit code word whose elements are denoted by c0, c1, ..., cn-1, and let b0, b1, ..., bn-k-1 denote the parity bits in the code word. Since the code word is divided into two parts, one occupied by the message bits and the other by the parity bits, we have the option of sending the message bits of the code word before the parity bits, or vice versa.

20 Linear block codes

21 Linear block codes This system of equations may be written in compact form using matrix notation.

22 Linear block codes

23 Linear block codes The full set of code words, referred to simply as the code, is generated in accordance with Eq. (10.13) by letting the message vector m range through the set of all 2^k 1-by-k vectors. The sum of any two code words is another code word; this basic property of linear block codes is called closure. Proof: ci + cj = mi G + mj G = (mi + mj) G.
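A small sketch of codeword generation c = mG over GF(2); the specific generator matrix below is my own systematic example, not the one from the slides:

```python
import numpy as np

# Illustrative generator matrix of a systematic (6,3) code: G = [P | I3].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(3, dtype=int)])

def encode(m, G):
    """c = mG in modulo-2 arithmetic."""
    return np.asarray(m) @ G % 2

# Let m range over all 2^k message vectors to generate the code.
k = G.shape[0]
code = {tuple(encode([(i >> j) & 1 for j in range(k)], G)) for i in range(2**k)}

# Closure: the mod-2 sum of any two code words is again a code word.
c1, c2 = np.array(sorted(code)[3]), np.array(sorted(code)[5])
print(len(code), "code words; closure holds:", tuple((c1 + c2) % 2) in code)
```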

24 Linear block codes

25 Linear block codes The matrix H is called the parity-check matrix of the code, and the set of equations specified by Eq. (10.16) are called the parity-check equations. The generator equation and the parity-check equations are basic to the description and operation of a linear block code.

26 Repetition codes Repetition codes are the simplest type of linear block codes. A single message bit is encoded into a block of n identical bits, producing an (n, 1) block code; there are two code words in the code: an all-zero code word and an all-one code word. Example: consider k = 1, n = 5. In this case we have four parity bits that are the same as the message bit; hence the identity matrix is Ik = 1, and the coefficient matrix P consists of a 1-by-4 vector that has 1 for all of its elements. The generator matrix is G = [1 1 1 1 : 1], and the parity-check matrix is H = [I4 : b], where b is a 4-by-1 column of 1s.

27 Syndrome Let r denote the 1-by-n received vector that results from sending the code vector c over a noisy channel; we express r as r = c + e. The vector e is called the error vector or error pattern. The ith element of e equals 1 if the corresponding element of r differs from that of c.

28 Syndrome The receiver has the task of decoding the code vector c from the received vector r. The algorithm commonly used to perform this decoding operation starts with the computation of a 1-by-(n-k) vector called the error-syndrome vector, or simply the syndrome. The importance of the syndrome lies in the fact that it depends only upon the error pattern: s = r H^T.

29 Syndrome Property 1: The syndrome depends only on the error pattern and not on the transmitted code word. Proof: s = (c + e) H^T = c H^T + e H^T = e H^T, since c H^T = 0. Property 2: All error patterns that differ by a code word have the same syndrome. For the k message bits there are 2^k distinct code vectors, denoted ci, i = 0, 1, ..., 2^k - 1. Correspondingly, for any error pattern e, we define the 2^k distinct vectors ei as ei = e + ci, i = 0, 1, ..., 2^k - 1. The set of vectors {ei, i = 0, 1, ..., 2^k - 1} is called a coset of the code; a coset has exactly 2^k elements that differ at most by a code vector. Thus, an (n, k) linear block code has 2^(n-k) possible cosets. Proof: ei H^T = e H^T + ci H^T = e H^T.

30 Syndrome This set of (n -k) linear equations clearly shows that the syndrome contains info about the error pattern and may be used for error detection. The set of equations is undetermined as we have more unknown than equations, there is no unique solution for error pattern. With 2n-k possible syndrome vectors ,the info contained in the syndrome s about the error pattern e is not enough for the detector to compute the exact value of the transmitted code vector. Knowledge of the syndrome s reduces the search for the true error pattern e from 2n to 2n-k possiblities.

31 Minimum Distance Consideration
The Hamming distance d(c1, c2) between a pair of code vectors c1 and c2 that have the same number of elements is defined as the number of locations in which their respective elements differ. The Hamming weight w(c) is defined as the number of nonzero elements in the code vector; equivalently, it is the distance between the code vector and the all-zero code vector. The minimum distance dmin of a linear block code is defined as the smallest Hamming distance between any pair of code vectors; by linearity, this is the same as the smallest Hamming weight of the nonzero code vectors. The minimum distance of a linear block code is very important: it determines the error-correcting capability of the code.

32 Minimum Distance Consideration
Suppose an (n, k) linear block code is required to detect and correct all error patterns over a BSC whose Hamming weight satisfies w(e) <= t; that is, if a code vector ci is transmitted and the received vector is r = ci + e, we want the decoder output to equal ci whenever w(e) <= t. We assume that the 2^k code vectors are transmitted with equal probability. The best strategy for the decoder is then to pick the code vector closest to the received vector r, i.e., the one with the smallest Hamming distance d(ci, r). With such a strategy, the decoder will be able to detect and correct all error patterns of Hamming weight w(e) <= t. In particular, if the 1-by-n code vectors and the received vector are represented as points in an n-dimensional space, we construct two spheres of radius t around the code vectors.

33 Minimum Distance Consideration
Figure 10.6a satisfies the condition d(ci, cj) >= 2t + 1, and the Hamming distance d(ci, r) <= t; it is clear that the decoder will pick ci, as it is the closest to the received vector r. Figure 10.6b does not satisfy the condition d(ci, cj) >= 2t + 1, since d(ci, cj) <= 2t, so there is a possibility of the decoder picking the vector cj, which is wrong. Accordingly, an (n, k) linear block code of minimum distance dmin can correct up to t errors if dmin >= 2t + 1, i.e., t is at most the integer part of (dmin - 1)/2.

34 Minimum Distance Consideration
Example: Consider the following set of code words: C1 = 0000, C2 = 0101, C3 = 1010, C4 = 1111. The distance d between C2 and C3 is 4; between C2 and C4 is 2; between C3 and C4 is 2. The minimum distance (the smallest distance between any pair) is dmin = 2.
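The sketch below recomputes the pairwise distances for this example:

```python
from itertools import combinations

codewords = {"C1": "0000", "C2": "0101", "C3": "1010", "C4": "1111"}

def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

dists = {(i, j): hamming(codewords[i], codewords[j])
         for i, j in combinations(codewords, 2)}
for pair, d in sorted(dists.items()):
    print(pair, d)
print("dmin =", min(dists.values()))  # 2
```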

35 Syndrome decoding Let c1, c2, ..., c(2^k) denote the 2^k code vectors of an (n, k) code, and let r denote the received vector, which has 2^n possible values. The receiver partitions the 2^n received vectors into 2^k disjoint subsets D1, D2, ..., D(2^k), where the subset Di corresponds to code vector ci. The received vector r is decoded into ci if it is in the ith subset. For the decoding to be correct, r must be in the subset that belongs to the code vector ci that was actually sent.

36 Syndrome decoding The 2^k columns represent the disjoint subsets D1, D2, ..., D(2^k). The 2^(n-k) rows represent the cosets of the code, and their first elements e2, ..., e(2^(n-k)) are called coset leaders. The probability of decoding error is minimized when the most likely error patterns are chosen as the coset leaders.

37 Syndrome decoding

38 Hamming code Consider (n, k) linear block codes that have the following parameters: block length n = 2^m - 1, number of message bits k = 2^m - m - 1, number of parity bits n - k = m, with the condition m >= 3. Hamming codes are single-error-correcting binary perfect codes. Example: the (7, 4) Hamming code has n = 7, k = 4, and so m = 3; the generator matrix is:

39 Hamming code With k = 4 there are 2^4 = 16 distinct message words.
The code word is obtained by c = mG. From the table, the minimum distance is dmin = 3, and hence t = 1. This means that Hamming codes are single-error-correcting binary codes.

40 Hamming code Assuming single-error patterns, we formulate the seven coset leaders as shown in the table. The 2^3 syndromes are obtained from the equation s = e H^T, as shown in the table. For example, if a code vector is sent and received with an error in the third bit, then applying the equation s = r H^T gives s = [0 0 1]; the table maps this syndrome to its error pattern, and applying c = r + e recovers the correct code word.
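A complete worked sketch of (7,4) Hamming encoding and syndrome-table decoding. The G and H below are one common systematic choice, assumed for illustration; the slides' own matrices (shown in the figures) may order the bits differently:

```python
import numpy as np

# One common systematic (7,4) Hamming code (illustrative choice).
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # G = [I4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # H = [P^T | I3]

# Syndrome table: each single-bit error pattern has a distinct syndrome.
table = {}
for i in range(7):
    e = np.zeros(7, dtype=int); e[i] = 1
    table[tuple(e @ H.T % 2)] = e

def decode(r):
    """Correct a single error via syndrome lookup."""
    s = tuple(np.asarray(r) @ H.T % 2)
    return np.asarray(r) if s == (0, 0, 0) else (r + table[s]) % 2

m = np.array([1, 0, 1, 1])
c = m @ G % 2
r = c.copy(); r[2] ^= 1                     # error in the third bit
print("corrected:", np.array_equal(decode(r), c))
```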

41 Dual code Given a linear block code (n, k), we may define its dual as follows: taking the transpose of the equation H G^T = 0, we get G H^T = 0, where 0 is a new zero matrix. This equation suggests that every (n, k) linear block code with generator matrix G and parity-check matrix H has a dual code with parameters (n, n-k), generator matrix H, and parity-check matrix G.

42 Cyclic codes They are a subclass of linear block codes.
The main advantage of cyclic codes is that they are easy to encode. Furthermore, they possess a well-defined mathematical structure (polynomial algebra), which leads to highly efficient decoding schemes.

43 Properties of cyclic codes
A code is cyclic if two properties hold: 1. Linearity property: c1 + c2 = m1 G + m2 G = (m1 + m2) G is a new code word in the code. 2. Cyclic property: if c0 c1 c2 ... cn-2 cn-1 is a code word, then cn-1 c0 c1 ... cn-3 cn-2 and c1 c2 c3 ... cn-1 c0 are also code words.

44 Example Given the code shown on the slide, it is not a cyclic code since, for example, the cyclic shift of [10111] is [11011], which is not in the code. Notation: c(n) = c, and a left cyclic shift by i positions, c(-i), is the same as c(n-i).

45 Properties cont.... The code word c = (c0, c1, ..., cn-1) can be expressed in polynomial form, where the power of X determines the element position: c(X) = c0 + c1 X + c2 X^2 + ... + cn-1 X^(n-1). The right cyclic shift can then be expressed as follows: X c(X) = c0 X + c1 X^2 + ... + cn-2 X^(n-1) + cn-1 X^n, so c(1)(X) = cn-1 + c0 X + c1 X^2 + ... + cn-2 X^(n-1), subject to the constraint X^n = 1. We can prove that X^i c(X) = q(X)(X^n + 1) + c(i)(X), where c(i)(X) is the remainder from dividing X^i c(X) by (X^n + 1): c(i)(X) = Rem[X^i c(X)/(X^n + 1)] = X^i c(X) mod (X^n + 1).

46 Properties cont.... Example: given c(X) = X + X^3 + X^4 + X^5 with n = 7, find c(3). After three cyclic shifts, X^3 c(X) = X^4 + X^6 + X^7 + X^8, and Rem[X^3 c(X)/(X^7 + 1)] = 1 + X + X^4 + X^6, which is c(3)(X). Each code word is represented by a polynomial of degree at most n-1. Note that X^i c(X)|X^n=1 = q(X)(X^n + 1)|X^n=1 + c(i)(X)|X^n=1; that is, c(i)(X) = X^i c(X)|X^n=1. This is a shortcut method that avoids long division.
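A sketch of the shift-as-remainder identity, representing GF(2) polynomials as coefficient lists with c0 first (my own helper, not from the slides):

```python
def cyclic_shift(c, i):
    """Compute c(i)(X) = X^i c(X) mod (X^n + 1) over GF(2).
    c is the coefficient list [c0, c1, ..., c_{n-1}]."""
    n = len(c)
    out = [0] * n
    for j, coef in enumerate([0] * i + list(c)):  # multiply by X^i
        out[j % n] ^= coef                        # reduce: X^(n+j) == X^j
    return out

c = [0, 1, 0, 1, 1, 1, 0]      # c(X) = X + X^3 + X^4 + X^5, n = 7
print(cyclic_shift(c, 3))      # [1, 1, 0, 0, 1, 0, 1] -> 1 + X + X^4 + X^6
```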

47 Generator polynomial Theorem: Let C be an (n, k) cyclic code. Then there exists exactly one code polynomial g(X) of minimum degree n-k.
Properties of g(X): 1) It is unique. (If it were not, the sum of two such polynomials would be a code polynomial of degree less than the minimum, a contradiction.)

48 Properties of g(X) cont…
2) A polynomial of degree n-1 or less is a code word if and only if it is a multiple of g(X). All code polynomials are generated by the multiplication c(X) = a(X) g(X); e.g., X g(X), X^2 g(X), ..., X^(k-1) g(X) are all code words. In particular, c(X) = m(X) g(X), where m(X) is a polynomial whose coefficients are the information bits.

49 Example:(7,4) Code Generated by 1+X+X3
Code polynomials (c(X) = m(X) g(X)):
0 = 0 . g(X)
1 + X + X^3 = 1 . g(X)
X + X^2 + X^4 = X . g(X)
1 + X^2 + X^3 + X^4 = (1 + X) g(X)
X^2 + X^3 + X^5 = X^2 . g(X)
1 + X + X^2 + X^5 = (1 + X^2) g(X)
X + X^3 + X^4 + X^5 = (X + X^2) g(X)
1 + X^4 + X^5 = (1 + X + X^2) g(X)
X^3 + X^4 + X^6 = X^3 . g(X)

50 Properties of g(X)cont…
3) It can be shown that g(X) is a factor of X^n + 1 of degree n-k. Example: what cyclic codes of length 7 can be constructed? X^7 + 1 = (1 + X)(1 + X + X^3)(1 + X^2 + X^3), so the possible generator polynomials and codes are:
g(X) = (1 + X): (7,6) code
g(X) = (1 + X)(1 + X + X^3): (7,3) code
g(X) = (1 + X + X^3): (7,4) code
g(X) = (1 + X)(1 + X^2 + X^3): (7,3) code
g(X) = (1 + X^2 + X^3): (7,4) code
g(X) = (1 + X + X^3)(1 + X^2 + X^3): (7,1) code
Hint: a code generated this way is guaranteed to be cyclic, but we know nothing yet about its minimum distance; the generated code may be good or bad.

51 Encoding Procedures 1. Multiply m(X) by X^(n-k). 2. Divide X^(n-k) m(X) by g(X), obtaining the remainder b(X). 3. Add b(X) to X^(n-k) m(X), obtaining c(X) in systematic form.

52 Encoder circuit

53 Example on encoding procedures
Consider the (7,4) cyclic code generated by g(X) = 1 + X + X^3. Find the systematic code word for the message 1001. Solution: m(X) = 1 + X^3; X^3 m(X) = X^3 + X^6; b(X) = Rem[X^3 m(X)/g(X)] = X + X^2; c(X) = b(X) + X^3 m(X) = X + X^2 + X^3 + X^6. Therefore, c = 0111001 (coefficients c0 ... c6).
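A sketch of this three-step procedure in code, with GF(2) polynomials as coefficient lists (lowest power first); the helpers are my own:

```python
def gf2_rem(num, den):
    """Remainder of GF(2) polynomial division (coefficients lowest-first)."""
    num, dn = list(num), len(den) - 1
    for i in range(len(num) - 1, dn - 1, -1):
        if num[i]:                       # cancel the current leading term
            for j, d in enumerate(den):
                num[i - dn + j] ^= d
    return num[:dn]

def cyclic_encode(m, g):
    """Systematic encoding: c(X) = b(X) + X^(n-k) m(X),
    with b(X) = Rem[X^(n-k) m(X) / g(X)]."""
    nk = len(g) - 1                      # n - k parity bits
    shifted = [0] * nk + list(m)         # X^(n-k) m(X)
    return gf2_rem(shifted, g) + shifted[nk:]

g = [1, 1, 0, 1]                         # g(X) = 1 + X + X^3
print(cyclic_encode([1, 0, 0, 1], g))    # [0, 1, 1, 1, 0, 0, 1]
```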

54 Example :(7,4) Encoder Based on 1 + X + X3
(Table in figure: shift-register contents after the initial state and the 1st through 4th shifts, together with the resulting code word.)

55 parity check polynomial
Consider a cyclic code (n, k). It is also specified by another polynomial h(X), of degree k, called the parity-check polynomial; it plays the role of the H matrix of linear block codes. We can prove that h(X) and g(X) are both factors of X^n + 1: X^n + 1 = g(X) h(X), with deg[g(X)] = n-k and deg[h(X)] = k.

56 The syndrome In cyclic codes :
Earlier, for block codes, we defined s = r H^T = e H^T, where e is the error pattern; s is independent of the transmitted code word and message. In cyclic codes, the received word is r(X) = r0 + r1 X + ... + rn-1 X^(n-1). If r(X) is a correct code word, it is divisible by g(X), i.e., r(X) = a(X) g(X). Otherwise, r(X) = q(X) g(X) + s(X), with deg[s(X)] <= n-k-1. Then s(X) = Rem[r(X)/g(X)] = Rem[(a(X) g(X) + e(X))/g(X)] = Rem[e(X)/g(X)]: the syndrome polynomial depends on the error pattern only. s(X) is obtained by shifting r(X) into a divide-by-g(X) circuit; the register contents are then the syndrome bits.

57 Circuit for Syndrome Computation

58 Circuit for Syndrome Computation Based on 1 + X + X3 of (7,4) cyclic code

59 cyclic redundancy check (CRC)
A CRC-enabled device calculates a short, fixed-length binary sequence, known as the CRC code or just CRC, for each block of data, and sends or stores them both together. When a block is read or received, the device repeats the calculation; if the new CRC does not match the one calculated earlier, the block contains a data error, and the device may take corrective action such as rereading the block or requesting that it be sent again. Otherwise, the data is assumed to be error-free (though, with some small probability, it may contain undetected errors; this is the fundamental nature of error checking).


61 Why CRC? CRCs can be designed to detect common errors, and the detecting codes and encoding circuits are practical to implement.

62 What can a CRC detect?
A CRC can detect all single-bit errors. It can detect all double-bit errors, provided the generator polynomial has at least three 1s. It can detect any odd number of errors, provided the generator polynomial has (X + 1) as a factor. It can detect all burst errors of length less than the degree of the polynomial, and it detects most longer burst errors with high probability.

63 Digital circuit for polynomial division
The CRC process can be implemented very easily in hardware using a linear feedback shift register (LFSR). The LFSR divides the message polynomial by a suitably chosen divisor (generator) polynomial; the remainder constitutes the FCS (frame check sequence). For a degree-N divisor polynomial, the number of flip-flops is exactly N and the number of XOR gates is at most N.
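A bitwise sketch of the division the LFSR performs, using an illustrative degree-3 generator x^3 + x + 1 (real standards such as CRC-32 apply the same idea with larger polynomials):

```python
def crc_remainder(data_bits, gen_bits):
    """FCS = remainder of (data(x) * x^deg) / gen(x) over GF(2).
    Bit lists are MSB-first; deg = len(gen_bits) - 1."""
    deg = len(gen_bits) - 1
    reg = list(data_bits) + [0] * deg        # append deg zero bits
    for i in range(len(data_bits)):
        if reg[i]:                           # cancel the current leading 1
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return reg[-deg:]

data = [1, 0, 1, 1, 0, 1]
gen = [1, 0, 1, 1]                           # x^3 + x + 1
fcs = crc_remainder(data, gen)               # [0, 1, 1]
# Receiver check: the data followed by its FCS leaves a zero remainder.
print(fcs, crc_remainder(data + fcs, gen) == [0, 0, 0])
```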

64 Bose-Chaudhuri-Hocquenghem (BCH) codes
One of the most important classes of linear codes is the class of BCH codes. They offer a wide variety of parameters: each BCH code is a t-error-correcting code, in that it can detect and correct up to t random errors per code word. The Hamming single-error-correcting codes can be described as BCH codes, and BCH codes offer flexibility in the choice of code parameters.

65 BCH How to determine the generator polynomial g(x)


67 Reading from the table, we have ( ) for the coefficients of the generator polynomial.

68 Decoding of BCH code Block Diagram:

69 BCH



72 Reed-Solomon codes The Reed-Solomon (RS) codes are an important subclass of nonbinary BCH codes. The encoder for an RS code differs from a binary encoder in that it operates on multiple bits rather than individual bits. Specifically, an RS(n, k) code is used to encode m-bit symbols into blocks consisting of n = 2^m - 1 symbols, that is, m(2^m - 1) bits, where m > 1. Thus the encoding algorithm expands a block of k symbols to n symbols by adding n - k redundant symbols. When m is an integer power of two, the m-bit symbols are called bytes.

73 Cont.. Before transmission, the encoder attaches parity symbols to the data using a predetermined algorithm. At the receiving side, the decoder detects and corrects a limited, predetermined number of errors that occurred during transmission. Transmitting the additional symbols introduced by FEC is better than retransmitting the whole package whenever at least one error has been detected by the receiver.

74 A Reed-Solomon code is a block code and can be specified as RS(n,k)
n is the size of the code word, k is the number of data symbols, and 2t = n - k is the number of parity symbols; each symbol contains s bits.

75 Cont.. The relationship between the symbol size s and the code word size n is n = 2^s - 1. An RS code can correct up to t symbol errors, where t = (n - k)/2.

76 Cont.. Example: RS(255,239) has a code word size of n = 255 symbols and k = 239 data symbols. The maximum number of symbol errors that the RS(255,239) decoder can correct is t = (255 - 239)/2 = 8, and the size of each symbol is s = log(n + 1)/log(2) = log2(256) = 8 bits.
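A quick check of these relations (plain arithmetic, no Galois-field math involved):

```python
from math import log2

def rs_params(n, k):
    """Symbol size s (from n = 2^s - 1) and correctable
    symbol errors t = (n - k) / 2 for an RS(n, k) code."""
    s = round(log2(n + 1))
    assert n == 2**s - 1, "n must equal 2^s - 1"
    return s, (n - k) // 2

s, t = rs_params(255, 239)
print(f"RS(255,239): s = {s}-bit symbols, corrects up to t = {t} symbol errors")
```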

77 Cont.. Every element of the field, except zero, can be expressed as a power of a primitive element of the field. Addition of two elements in the Galois field is simply the exclusive-OR (XOR) operation.

78 Hardware implementation of RS encoder
To get g(x) of RS encoder

79 Cont..

80 Decoding of RS code Block Diagram :

81 Cont.. After going through a noisy transmission channel, the encoded data can be represented as r(x) = c(x) + e(x), where e(x) represents the error polynomial, of the same degree as c(x) and r(x). Once the decoder evaluates e(x), the transmitted message c(x) is recovered by adding the received message r(x) to the error polynomial e(x): c(x) = r(x) + e(x), since c(x) + e(x) + e(x) = c(x). Note that e(x) + e(x) = 0 because addition in a Galois field is equivalent to an exclusive-OR, and e(x) XOR e(x) = 0.

82 Cont.. Coding Gain The advantage of using Reed-Solomon codes is that the probability of an error remaining in the decoded data is (usually) much lower than the probability of an error if Reed-Solomon is not used. This is often described as coding gain. Example: a digital communication system is designed to operate at a bit error ratio (BER) of 10^-9, i.e., no more than 1 in 10^9 bits are received in error. This can be achieved by boosting the power of the transmitter or by adding Reed-Solomon (or another type of forward error correction). Reed-Solomon allows the system to achieve this target BER with a lower transmitter output power; the power "saving" given by Reed-Solomon (in decibels) is the coding gain.

83 The block length of an RS code is one less than the size of the code-symbol alphabet, and the minimum distance is one greater than the number of parity-check symbols. RS codes make highly efficient use of redundancy, and block lengths and symbol sizes can be adjusted readily to accommodate a wide range of message sizes and to optimize performance. Finally, efficient decoding techniques are available for use with RS codes, which is one more reason for their wide application.

84 Interleaving Transmission without interleaving:
Original transmitted sentence: ThisIsAnExampleOfInterleaving
Received sentence with a burst error: ThisIs______pleOfInterleaving
The term "AnExample" ends up mostly unintelligible and difficult to correct.

85 With interleaving:
Transmitted sentence: ThisIsAnExampleOfInterleaving...
Error-free transmission: TIEpfeaghsxlIrv.iAaenli.snmOten.
Received sentence with a burst error: TIEpfe______Irv.iAaenli.snmOten.
Received sentence after deinterleaving: T_isI_AnE_amp_eOfInterle_vin...
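A sketch of the classic row-in/column-out block interleaver that produces this kind of letter spreading (the 5 x 6 dimensions are my choice for a 30-character example, not necessarily those used on the slide):

```python
def interleave(msg, rows, cols):
    """Write row by row into a rows x cols block; read out column by column."""
    assert len(msg) == rows * cols
    return "".join(msg[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(msg, rows, cols):
    # Reading columns of a rows x cols block = writing rows of a cols x rows block.
    return interleave(msg, cols, rows)

text = "ThisIsAnExampleOfInterleaving."      # 30 chars = 5 x 6
tx = interleave(text, 5, 6)
rx = tx[:8] + "______" + tx[14:]             # a 6-character burst error
print(deinterleave(rx, 5, 6))                # burst spread into isolated errors
```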

86 Disadvantages of interleaving
No word is completely lost, and the missing letters can be recovered with minimal guesswork. Disadvantage of interleaving: the use of interleaving techniques increases latency, because the entire interleaved block must be received before the packets can be decoded.

87 Convolutional Codes Introduction:
Convolutional codes were introduced by P. Elias in 1955. In block coding, the encoder accepts a k-bit message and generates an n-bit code word; thus, code words are produced on a block-by-block basis.

88 Convolutional Codes So the encoder will buffer an entire message block before generating the associated code word. There are applications ,where the message bits come in serially rather than in large blocks , in this case the use of a buffer may be undesirable. In such case the convolutional code is preferred than block code.

89 Convolutional Codes What is Convolutional code ?
A convolutional code generates redundant bits by using modulo-2 convolutions. The encoder may be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders. The input bits are stored in the shift register and combined with the help of the modulo-2 adders; this operation is equivalent to binary convolution.

90 Block Diagram General Block Diagram:

91 Block Diagram Special Case:

92 Convolutional Codes Properties: Encoding of an information stream rather than information blocks. The value of a given information symbol also affects the encoding of the next M information symbols. Easy implementation using shift registers. A convolutional code is represented by 3 parameters (n, k, m): n = number of output bits, k = number of input bits, m = number of memory registers.

93 Operation Let’s take that example

94 Operation Whenever the message bit is shifted to position 'm‘ ,The new values of x1 x2 are generated depending upon m,m1,m2 X1=m+m1+m2 [Path 1] X2=m+m [Path 2] The op switch then samples x1&x2and op stream bit X=x1x2 x1x2 x1x2……..

95 Definitions Code rate: a measure of the efficiency of the code; here r = k/n = 1/2.
A single message bit remains in m during the first shift, in m1 during the second, and in m2 during the third; i.e., it influences the outputs x1 and x2 for three successive shifts. At the fourth shift, the message bit is lost and has no further effect on the output.

96 Definitions Constraint length: defined as the number of shifts over which a single message bit can influence the encoder output; K = m + 1 shifts. In the example, K = 3 shifts. Some books define the code by (r, K).

97 Definitions The use of Nonsystematic codes in ordinarily preferred over systematic codes in convolutional coding. Non-systematic code : the output does not contain the input symbols. A systematic code : is any error-controlling code in which the input data is embedded in the encoded output

98 Example Find all parameters ?

99 impulse response Each path connecting the Output to the Input of C encoder may be characterized in term of its impulse response impulse response : response of the path to symbol 1 applied to its input. These impulse response are also called Generator Sequences of the code.

100 Generator Sequences Let the incoming message be {m0, m1, m2, ...}.
Output sequence:

101 Example on Generator Sequences
Block Diagram M = [m0 m1 m2 m3 m4] = [1 0 0 1 1]

102 Example on Generator Sequences
Definition of the code (2,1,2): code rate = k/n = 1/2; constraint length K = 3 bits. Generator sequences for path 1:

103 Example on Generator Sequences
Generating sequences for path 1 To obtain output sequences

104 Example on Generator Sequences
Determine X1 for each single Input bit

105 Example on Generator Sequences
Determine X1 for each single Input bit

106 Example on Generator Sequences
Finally the output of X1 will be Similarly for X2 Output sequences

107 Transform Domain As we can see from the previous example, using the generator sequences to get the convolutional code is lengthy. The transform-domain approach analyzes the convolutional encoder using generator polynomials for x1 and x2.

108 Transform Domain P is the unit-delay operator; it represents the time delay of the bits in the impulse response, and the same representation applies to the input message. L is the length of the message sequence. The convolution sums are converted to polynomial multiplications in the transform domain.

109 Transform Domain Repeat the previous example using transform domain calculations to obtain generating polynomial for adder1 to obtain generating polynomial for adder2

110 Transform Domain obtain message polynomial for  m = 10011
Output of X1

111 Transform Domain Output of X1 The output code

112 How polynomials are selected
There are many choices of polynomials for any m-order code, but they do not all result in output sequences that have good error-protection properties. Peterson and Weldon's book contains a complete list of these polynomials.

113 Concepts A code is linear if the modulo-2 sum of any two code words is also a code word; mathematically, addition is mod-2. Since ci + ci = 0, it follows that every linear code must contain the all-zeros code word.

114 initial and terminating states
The usual convention is to start in the all-zeros state and then force the encoder to terminate in the all-zeros state. Termination in the all-zeros state requires a tail of m zeros, and the tail results in a fractional rate loss. The solution is to use tailbiting convolutional codes, which operate such that the initial and terminating states are the same (but not necessarily all zeros). Tailbiting codes don't require a tail and have no fractional rate loss.

115 code tree , trellis , state diagram
States of the encoder: let the initial values of the bits stored in m1 and m2 be zero, i.e., m1 m2 = 00.

116 Code tree Each branch of the tree represents an input symbol, with the pair of output binary symbols indicated on the branch. An input 0 specifies the upper path, while an input 1 specifies the lower path. For the input message sequence M = 10011, Code = 11 10 11 11 01.


118 Trellis The tree becomes repetitive after the first three branches, so we may collapse the code tree into a new form called a trellis. The trellis is more instructive than the tree; it contains (L + K) levels. The left nodes represent the four possible current states of the encoder, whereas the right nodes represent the next states. A solid branch represents an input 0; a dashed branch represents an input 1.

119 Trellis Block diagram

120 State diagram We may coalesce the left and right nodes; by doing this, we obtain the state diagram of the encoder. The nodes represent the four possible states of the encoder, with each node having two incoming branches and two outgoing branches. A transition in response to input 0 is represented by a solid branch, whereas a transition in response to input 1 is represented by a dashed branch. The binary label on each branch represents the encoder's output as it moves from one state to another.

121 State diagram

122 The tailbiting concept
The idea behind a tailbiting convolutional code is to equate the initial and terminating states. The benefit is that there is no tail and no fractional rate loss.

123 The tailbiting concept
Example: m = [1101]. To encode, determine the final state from the last m bits, then set the initial state equal to it. Since the last two bits are 01, the final state is 10.

124 Encoding using a cyclic prefix
To encode, copy the last m data bits to the beginning of the message. These prefixed bits are used only to determine the initial state, so the associated code bits are not transmitted.

125 Decoding the tailbiting code
A tailbiting trellis can be visualized as a cylinder formed by connecting the starting and ending states, and the Viterbi algorithm can be run on the cylinder. In theory the algorithm would have to run forever, cycling around the cylinder; in practice it is sufficient to limit the number of cycles around the cylinder.

126 Comparison

127 Decoding For 3 bits input

128 Hard and soft decision decoding
We can compare the received sequence to all permissible sequences and pick the one with the smallest Hamming distance (hard-decision decoding), or we can do a correlation and pick the sequence with the best correlation (soft-decision decoding). But what if the number of bits increases?

129 Decoding the convolutional code
Main methods of decoding: maximum-likelihood decoding [Viterbi method], sequential decoding, and feedback decoding.

130 Log-likelihood function
m: the message vector. c: the corresponding code vector applied to the input of the discrete memoryless channel. r: the received vector, which may differ from c. The decoder produces an estimate m' of m; m' = m if and only if c' = c. The decoding rule is optimum when the probability of decoding error is minimized.

131 Log-likelihood function
The probability of decoding error is minimized if the estimate c' is chosen to maximize the log-likelihood function log p(r|c). For a binary symmetric channel, r and c are binary sequences of length N, and they may differ from each other in some locations because of errors due to channel noise.

132 Log-likelihood function
Let cj and rj denote the jth elements of c and r, and suppose the received vector r differs from the transmitted c in exactly d positions (d is the Hamming distance between r and c). Then, for a BSC with transition probability p, log p(r|c) = d log[p/(1 - p)] + N log(1 - p).

133 Log-likelihood function
Assume p < 0.5; then log[p/(1 - p)] < 0, and N log(1 - p) is constant for all c. The maximum-likelihood decoding rule for the BSC therefore becomes: choose the estimate c' that minimizes the Hamming distance between the received vector r and the code vector c.

134 Viterbi method The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on the metric. All paths are followed until two paths converge on one node; then only the better path is kept. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the 2^(K-1) surviving paths instead of checking all paths.

135 Viterbi method Metric: the discrepancy between the received signal y and the decoded signal at a particular node; metrics can be accumulated over nodes along a particular path. Surviving path: the path of the decoded signal with the minimum metric. A difficulty that may arise in the Viterbi algorithm is that when the paths entering a state are compared, their metrics may be identical; in that case we simply make a guess.
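A minimal hard-decision Viterbi sketch for the same (2,1,2) encoder used throughout these slides (my own illustrative implementation; ties are broken arbitrarily and a flushed, terminated trellis is assumed):

```python
def viterbi_decode(rx):
    """Hard-decision Viterbi decoding for the rate-1/2 encoder with
    x1 = m + m1 + m2 and x2 = m + m2. A state is (m1, m2); the path
    metric is the accumulated Hamming distance; the encoder is assumed
    to start and end in state 00 (two-zero tail)."""
    def branch(state, bit):
        m1, m2 = state
        return (bit ^ m1 ^ m2, bit ^ m2), (bit, m1)   # output pair, next state

    paths = {(0, 0): (0, [])}                # state -> (metric, decoded bits)
    for i in range(0, len(rx), 2):
        r = (rx[i], rx[i + 1])
        nxt = {}
        for state, (metric, bits) in paths.items():
            for bit in (0, 1):
                out, ns = branch(state, bit)
                m = metric + (out[0] != r[0]) + (out[1] != r[1])
                if ns not in nxt or m < nxt[ns][0]:   # keep the survivor
                    nxt[ns] = (m, bits + [bit])
        paths = nxt
    return paths[(0, 0)][1][:-2]             # end in state 00; drop tail bits

tx = [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]     # encoding of 10011 plus tail
rx = tx.copy(); rx[3] ^= 1                   # introduce one channel error
print(viterbi_decode(rx))                    # [1, 0, 0, 1, 1]
```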

136 Example Suppose that the input to the convolutional encoder is :
Then the output of the encoder is : C=[ ,1 0 ,1 0 , ,0 1 , 1 1] Suppose every fourth bit is received in error R=[ ,1 1 ,1 0 , ,0 1 , 1 1]

137 Trellis Diagram

138 Trellis Diagram

139 Trellis Diagram

140 Trellis Diagram

141 Trellis Diagram

142 Trellis Diagram

143 Trellis Diagram

144 Trellis Diagram

145 Trellis Diagram

146 Sequential decoding Dealing with just one path at a time.
The decoder can give up a path at any time and turn back to follow another path; it allows both forward and backward movements through the trellis. For example, the code (2,1,4): M = [ ], C = , R =

147 Sequential decoding The Trellis Diagram

148 Sequential decoding R =

149 Sequential decoding R =

150 Sequential decoding R =

151 Sequential decoding R =

152 Free distance concept The free distance is the most important single measure of a convolutional code's ability to combat channel noise. The free distance of a convolutional code is the minimum Hamming distance between any two code words in the code. If dfree > 2t, the convolutional code can correct t errors. The free distance can be obtained from the state diagram of the convolutional code.

153 Free distance State diagram to signal flow diagram

154 Free distance The exponent of D describes the Hamming weight of the encoder output corresponding to the branch, and the exponent of L is always 1 (each branch corresponds to one input bit). This defines the transfer function T(D, L).

155 The transfer function The transfer function will be
Using the binomial expansion, the distance transfer function is expressed in the form of a power series.

156 The transfer function The exponent of the first term in the expansion of T(D, 1) defines the free distance; here dfree = 5. This result means that up to 2 errors in the received sequence are correctable. It is assumed that the sum of the power series has a finite value; when T(D, 1) is nonconvergent, an infinite number of decoding errors can be caused by a finite number of transmission errors.

157 Asymptotic Coding Gain
Assuming the use of PSK with coherent detection. 1) Binary symmetric channel: the bit error rate for binary PSK without coding is dominated by exp(-Eb/No), but with convolutional coding it is dominated by exp(-dfree r Eb/2No). The asymptotic coding gain is therefore Ga = 10 log10(dfree r/2) dB.

158 Asymptotic Coding Gain
2) Binary-input AWGN channel: the bit error rate for binary PSK without coding is dominated by exp(-Eb/No), but with convolutional coding it is dominated by exp(-dfree r Eb/No). The asymptotic coding gain is therefore Ga = 10 log10(dfree r) dB.

