1 MSc Data Communications
By R. A. Carrasco, Professor in Mobile Communications, School of Electrical, Electronic and Computing Engineering, University of Newcastle-upon-Tyne, 2006

2 Recommended Text Books
“Essentials of Error-Control Coding”, Jorge Castiñeira Moreira and Patrick Guy Farrell. “Digital Communications”, John G. Proakis, Fourth Edition

3 Goals Of A Digital Communication System
Deliver data from the source to the user in a: FAST, INEXPENSIVE (EFFICIENT), RELIABLE WAY

4 Digital Modulation Schemes
Task: to compare different modulation schemes with different values of M. The choice of modulation scheme involves trading off bandwidth, power and complexity. Define: signal-to-noise ratio, error probability

5 Examples: Memoryless Modulation (waveforms are chosen independently: each waveform depends only on mi)
[Figure: three example signal sets for the same source symbol stream]
(a) M = 2, T = Ts: binary antipodal pulses, s1(t) = A and s2(t) = −A for 0 ≤ t < T
(b) M = 4, T = 2Ts: sinusoids with 4 different phases
(c) M = 8, T = 3Ts: 8 different amplitude levels

6 A crucial question is raised: what is the difference?
If Ts is kept constant, the waveforms of scheme (c) require less bandwidth than those of scheme (a), because the pulse duration is longer. In the presence of noise, and if the same average signal power is used, it is more difficult to distinguish among the waveforms of (c). AM/AM = Amplitude Modulation to Amplitude Modulation conversion; AM/PM = Amplitude Modulation to Phase Modulation conversion.

7 Notice: waveforms (b) have constant envelopes. This choice is good for nonlinear radio channels. [Figure: nonlinear channel characteristics versus input envelope; curve A: output envelope (AM/AM conversion); curve B: output phase shift (AM/PM conversion)]

8 TRADE-OFF BETWEEN BANDWIDTH AND POWER
In a power-limited environment, use low values of M. In a band-limited environment, use high values of M. What if both bandwidth and power are limited? Expand complexity. DEFINE: BANDWIDTH, SIGNAL-TO-NOISE RATIO, ERROR PROBABILITY

9 Performance of Different Modulation Schemes

10 DIGITAL MODULATION TRADE-OFFS
SHANNON CAPACITY LIMIT FOR AWGN: C = W log2(1 + S/N), where S = signal power = ε/T, N = noise power = 2N0W, and W = bandwidth.
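A quick numerical check of this limit (a minimal sketch in Python; the 1 MHz bandwidth and 15 dB SNR are arbitrary example values):

```python
import math

def shannon_capacity(w_hz: float, snr_linear: float) -> float:
    """AWGN Shannon capacity limit: C = W * log2(1 + S/N), in bit/s."""
    return w_hz * math.log2(1.0 + snr_linear)

snr = 10 ** (15 / 10)                 # 15 dB converted to a linear ratio
print(f"C = {shannon_capacity(1e6, snr):.3e} bit/s")  # about 5.03e6 bit/s
```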

11 Define Bandwidth W
[Figure: power density spectrum S(f) of (5.2) in dB, plotted against fT around the frequency f0, with the five bandwidths B1 … B5 marked]
Different bandwidth definitions of the power density spectrum: B1 is the half-power bandwidth; B2 is the equivalent noise bandwidth; B3 is the null-to-null bandwidth; B4 is the fractional power containment bandwidth at an arbitrary level; B5 is the bounded power spectral density bandwidth, at a level of about 18 dB. In general, W = α/T Hz, where α depends on the modulation scheme and on the bandwidth definition.

12 DEFINE SNR
Rb = rate at which the source outputs binary symbols (bit/s); εb = average energy per bit; average signal power P = εb·Rb; N0 = noise power spectral density; W = bandwidth (Hz). Signal-to-noise ratio: SNR = P/(N0·W) = (εb/N0)·(Rb/W), where Rb/W is the spectral efficiency in bit/s per Hz.

13 bit/s per Hz Comparison

14 Digital Modulation Trade – Offs (Comparison among different schemes)
[Figure: bandwidth-efficiency plane, bit/s per Hz versus SNR per bit (dB), with the SHANNON CAPACITY BOUND separating the BANDWIDTH LIMITED REGION (PAM (SSB), coherent PSK, AM-PM, DCPSK, with M = 2 … 64) from the POWER LIMITED REGION (coherent FSK and incoherent FSK, with M = 2 … 16)]

15 Encoder for the (3, 1) repetition code: x1 = u1, x2 = u1, x3 = u1 (n = 3, k = 1).
Encoder for the (3, 2) parity-check code: the whole word is defined by x1 = u1, x2 = u2, x3 = u1 + u2 (n = 3, k = 2). [1], pages 157 – 179
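Both encoders amount to one-line rules, as the sketch below shows (Python, with ^ as modulo-2 addition):

```python
def repetition_3_1(u1: int) -> tuple:
    """(3, 1) repetition code: x1 = u1, x2 = u1, x3 = u1."""
    return (u1, u1, u1)

def parity_check_3_2(u1: int, u2: int) -> tuple:
    """(3, 2) parity-check code: x1 = u1, x2 = u2, x3 = u1 + u2 (mod 2)."""
    return (u1, u2, u1 ^ u2)

print(repetition_3_1(1))        # (1, 1, 1)
print(parity_check_3_2(1, 1))   # (1, 1, 0)
```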

16 (7, 4) Hamming Code Block Encoder
The codeword is defined by xi = ui, i = 1, 2, 3, 4, together with the parity digits x5 = u1 + u2 + u3, x6 = u2 + u3 + u4, x7 = u1 + u2 + u4 (modulo 2), mapping source symbols to encoded symbols. Notice that only 16 of the 128 sequences of length 7 are used for transmission.
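The encoding equations translate directly into code; this sketch also confirms the 16-of-128 count:

```python
from itertools import product

def hamming_7_4_encode(u1, u2, u3, u4):
    """(7, 4) Hamming encoder following the slide's equations (mod 2)."""
    return (u1, u2, u3, u4,
            u1 ^ u2 ^ u3,     # x5
            u2 ^ u3 ^ u4,     # x6
            u1 ^ u2 ^ u4)     # x7

codewords = {hamming_7_4_encode(*u) for u in product((0, 1), repeat=4)}
print(len(codewords))          # 16 codewords out of 2^7 = 128 sequences
```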

17 Convolutionally encoding the sequence 101000...
Rate ½, K = 3. From B. Sklar, Digital Communications, Prentice-Hall, 1988.

18 Convolutionally encoding the sequence 101000...
Output sequence: 11 10 00 10 11 00 …

19 (7, 5) Convolutional Encoder
[Figure: input data feeding a three-stage shift register d1, d2, d3] Constraint length K = 3, code rate = 1/2. Output 1: c1 = d1 + d2 + d3; output 2: c2 = d1 + d3 (modulo-2 sums; generators 7 and 5 in octal).
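A minimal sketch of this encoder in Python (state kept in two memory stages; it reproduces the output sequence quoted on the previous slide):

```python
def conv_encode_7_5(bits):
    """Rate-1/2, K = 3 convolutional encoder with generators (7, 5)
    octal: c1 = u_k + u_{k-1} + u_{k-2}, c2 = u_k + u_{k-2} (mod 2)."""
    d1 = d2 = 0                      # the two memory stages, initially 0
    out = []
    for u in bits:
        out += [u ^ d1 ^ d2, u ^ d2]
        d1, d2 = u, d1               # shift the register
    return out

print(conv_encode_7_5([1, 0, 1, 0, 0, 0]))
# [1,1, 1,0, 0,0, 1,0, 1,1, 0,0]  ->  11 10 00 10 11 00
```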

20 The Finite State Machine
States: a = 00, b = 01, c = 10, d = 11; each branch is labelled input/output bits:
a = 00: 0/00 → a, 1/11 → b
b = 01: 0/10 → c, 1/01 → d
c = 10: 0/11 → a, 1/00 → b
d = 11: 0/01 → c, 1/10 → d
With the coder in state a = 00, a 1 appearing at the input produces 11 at the output and the system moves to state b = 01.

21 If in state b, a 1 at the input produces 01 as the output bits. The system then moves to state d (11). If a 0 appears at the input while the system is in state b, the bit sequence 10 will appear at the output, and the system will move to state c (10).

22 Tree Representation
[Figure: code tree for the K = 3, rate-1/2 encoder over four input data bits; an upward transition corresponds to input bit 0 and a downward transition to input bit 1; branches are labelled with the output pairs (00, 11, 10, 01) and nodes with the states a, b, c, d]

23 Signal-flow Graph [Figure: signal-flow graph of the encoder, with branches labelled by powers of D (e.g. D2), the exponent being the Hamming weight of the branch output]

24 Transfer Function T(D)
Therefore, solving the signal-flow graph gives T(D) = D^5/(1 − 2D).

25 Transfer Function T(D)
Performing long division gives T(D) = D^5 + 2D^6 + 4D^7 + 8D^8 + … This gives the number of paths in the state diagram with their corresponding distances: one path at distance 5, two at distance 6, four at distance 7, and so on. In this case, the minimum distance of the code is 5.
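The long division can be checked mechanically, here with sympy's series expansion (assuming, as the preceding slide states, T(D) = D^5/(1 − 2D)):

```python
from sympy import symbols

D = symbols('D')
T = D**5 / (1 - 2*D)
print(T.series(D, 0, 9))   # D**5 + 2*D**6 + 4*D**7 + 8*D**8 + O(D**9)
```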

26 "State Diagram" of Code, by Professor R. A. Carrasco
[Figure: state diagram with source symbol inputs ui and memory ui-1, ui-2; states s1 = (00), s2 = (01), s3 = (10), s4 = (11); each branch is labelled with the input (0 or 1) and a 3-bit output X from {000, 111, 001, 110, 011, 100, 010, 101}] The input u = ( ) corresponds to the path s1 s3 s4 s2 s3 s4 through the state diagram, and the output sequence is x = ( ).

27 Tree Diagram The path corresponding to the input sequence
11011 is shown as an example.

28 Signal Flow Graph Xa = S1, Xb = S3, Xc = S4, Xd = S2

29 Transfer Function T(D)

30 Transfer Function of the State Diagram
We have dfree = 6, for error events: S1 to S3 to S2 to S1 and S1 to S3 to S4 to S2 to S1

31 Trellis Diagram for Code (Periodic from time 2 on)
Legend: one branch type for input 0, another for input 1. The minimum distance of the convolutional code at depth l = N is dmin = dc(N), the column distance. The free distance dfree of the convolutional code is the limit of dc(l) for large l.

32 Trellis Diagram for the computation of dfree
Trellis labels are the Hamming distances between the encoder outputs and the all-zero sequence.

33 Viterbi Algorithm We want to compute the minimum of a sum of functions λl(τl) whose arguments τl can take on a finite number of values.
The simplest situation arises when τ0, τ1, … are "independent" (the value taken on by each one of them does not influence the other variables). [Figure: two-stage decision diagram with choices A, B at stage 0 and C, D at stage 1] [1], pages 181 – 185

34 Viterbi Decoding of Convolutional Codes
We choose the sequence x̂ that maximises P(y | x), where y is the received sequence and x the transmitted symbols. Observe (memoryless channel): P(y | x) = Πl P(yl | xl), with yl the received n0-tuple and xl the n0-tuple of coded digits. 2. For a binary symmetric channel with transition probability p: log P(yl | xl) = −dH(xl, yl)·log((1 − p)/p) + n0·log(1 − p), where dH(xl, yl) is the Hamming distance between xl and yl; the first log is an irrelevant multiplicative constant and the second an irrelevant additive constant.

35 Brute force approach: Compute all the values of the function and choose the smallest one. We want a sequential algorithm

36 Viterbi Algorithm What if τ0, τ1, … are not independent?
For example: τ0 = A ⇒ τ1 = C or τ1 = D; τ0 = B ⇒ τ1 = D. [Figure: constrained decision diagram] What is the shortest route from Los Angeles to Denver?

37 Viterbi Algorithm
[Figure: shortest-route example; a trellis with stages l = 0 … 5 and branch lengths marked; panels (a) through (e) show the survivor paths and accumulated path lengths after stages l = 1 through l = 5]

38 Conclusion We maximise P(y|x) by minimising dH(x, y), the Hamming distance between the received sequence and the coded sequence. Brute-force decoding: compute all the distances between y and all the possible x's, and choose the x that gives the minimum distance. Problems with brute-force decoding: complexity and delay. The Viterbi algorithm solves the complexity problem (complexity increases only linearly with sequence length); the truncated Viterbi algorithm also solves the delay problem.
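A hard-decision Viterbi decoder for the (7, 5) code above, sketched in Python (one survivor path kept per state; the Hamming branch metric follows the BSC result of slide 34):

```python
def viterbi_decode_7_5(received):
    """Hard-decision Viterbi decoding of the rate-1/2, K = 3 (7, 5)
    code; `received` is a flat bit list, two channel bits per data bit."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (u_{k-1}, u_{k-2})
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {(0, 0): []}
    for k in range(0, len(received), 2):
        y1, y2 = received[k], received[k + 1]
        new_metric = {s: INF for s in states}
        new_path = {}
        for (d1, d2), m in metric.items():
            if m == INF:
                continue
            for u in (0, 1):
                c1, c2 = u ^ d1 ^ d2, u ^ d2          # branch output
                cand = m + (c1 != y1) + (c2 != y2)    # Hamming metric
                nxt = (u, d1)
                if cand < new_metric[nxt]:            # keep the survivor
                    new_metric[nxt] = cand
                    new_path[nxt] = path[(d1, d2)] + [u]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)
    return path[best]

rx = [1,1, 1,0, 0,0, 1,0, 1,1, 0,0]   # codeword for input 101000
rx[2] ^= 1                            # introduce a single channel error
print(viterbi_decode_7_5(rx))         # [1, 0, 1, 0, 0, 0] recovered
```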

39 Trellis-Coded Modulation
A.K.A. Ungerboeck codes; amplitude-redundant modulation. How to increase transmission efficiency (reliability or speed)? Use higher-order modulation schemes (8-PSK instead of 4-PSK): same BW, more bit/s per Hz, more power; the band-limited environment (terrestrial radio communications). Or use coding: less power, less bit/s per Hz, BW expanded; the power-limited environment (satellite radio communications). [2], pages … G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inform. Theory, vol. IT-28, pp. 55–67, Jan. 1982.

40 Construction of TCM The constellation is divided into smaller constellations with larger Euclidean distances between constellation points. Construction of the trellis (Ungerboeck's rules): parallel transitions are assigned members of the same partition; adjacent transitions are assigned members of the next larger partition; all signals are used equally often.

41 Model for TCM
[Figure: TCM encoder model; a memory part (finite-state machine) selects a constellation (subset), and the remaining input bits select the signal an from that constellation]

42 Some examples of TCM schemes
Consider transmission of 2 bits/signal. We examine TCM schemes using 8-PSK; with uncoded 4-PSK we have the reference minimum distance d′. We use the octonary constellation. [Figure: 8-PSK constellation with points numbered 0 to 7]

43 The free distance of a convolutional code is the minimum Hamming distance between two encoded sequences
dfree is a measure of the separation among encoded sequences: the larger dfree, the better the code (at least for large enough SNR). Fact: To compute dfree for a linear convolutional code we may consider the distances with respect to the all-zero sequence.

44 An "error event" splits from the all-zero path at state (00) and remerges with it later. A simple algorithm to compute dfree: compute dc(l) for l = 1, 2, …; if the sequence giving dc(l) merges into the all-zero sequence, store its weight as dfree.
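Brute force over short input sequences reproduces dfree = 5 for the (7, 5) code, reusing the conv_encode_7_5 sketch from earlier:

```python
from itertools import product

def dfree_7_5(max_len=8):
    """Brute-force d_free: minimum output weight over nonzero inputs,
    flushed with K - 1 = 2 zeros so every path remerges with the
    all-zero state (an 'error event')."""
    best = None
    for n in range(1, max_len + 1):
        for bits in product((0, 1), repeat=n):
            if any(bits):
                w = sum(conv_encode_7_5(list(bits) + [0, 0]))
                best = w if best is None else min(best, w)
    return best

print(dfree_7_5())   # 5
```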

45 First Key Point The constellation size must be increased to get the same rate of information. We gain: the minimum distance between coded sequences exceeds the minimum distance for uncoded transmission. We lose: the energy with coding is spread over a denser constellation than the energy without coding. The gain is the ratio of the squared minimum distances, each normalised by the corresponding energy.

46 Second Key Point How to introduce the dependence among signals
The transmitted symbol at time nT depends on the source symbol at time nT and on the previous source symbols, summarised by the "state" σn. We write xn = f(un, σn) (describes the output as a function of the input symbol and the encoder state) and σn+1 = g(un, σn) (describes the transitions between states).

47 TCM Example 1 Consider the 8-PSK TCM scheme, which involves the transmission of 2 bits/symbol, using an uncoded 4-PSK constellation and the coded 8-PSK constellation for the TCM scheme as shown below. Show that … and …

48 TCM Example 1: Solution We have from the uncoded 4-PSK constellation
We have from the coded 8-PSK constellation Using

49 TCM Scheme Based on 2-State Trellis
[Figure: 2-state trellis with the input pairs mapped onto 4-PSK (uncoded, a0, a1) and 8-PSK (coded) signal points] Hence we get the coding gain of trellis-coded QPSK over uncoded 4-PSK. Can this performance gain be improved? The answer is yes, by going to more trellis states.

50 TCM Example 2 Draw the set partition of 8-PSK with maximum Euclidean distance between two points. By how much is the distance between adjacent signal points increased as a result of partitioning?

51 TCM Example 2: Solution
[Figure: set partitioning of 8-PSK with natural binary labels 000 … 111; the first split gives the two QPSK subsets (0, 2, 4, 6) and (1, 3, 5, 7), and the second split gives the antipodal pairs (0, 4), (2, 6), (1, 5), (3, 7); each split increases the minimum Euclidean distance between points within a subset]
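A numerical check of how the intra-subset distance grows at each level of the partition (a unit-energy 8-PSK constellation is assumed):

```python
import math

d0 = 2 * math.sin(math.pi / 8)   # full 8-PSK, adjacent points: ~0.765
d1 = 2 * math.sin(math.pi / 4)   # 4-point subsets (0,2,4,6): sqrt(2)
d2 = 2 * math.sin(math.pi / 2)   # 2-point subsets (0,4) etc.: 2.0
print(d0, d1, d2)                # each split enlarges the distance
print(d1 / d0, d2 / d1)          # growth factors ~1.85 and ~1.41
```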

52 TCM Scheme Based On A 4-State Trellis
Let us now use a TCM scheme with a more complex structure, in order to increase the coding gain. Take a trellis with four states as shown below, its branches labelled with the subset sequences 0426, 1537, 2640 and 3715. Calculate the Euclidean distance for each error event, i.e. each path leaving the reference state and returning to it a few branches later, and hence the free distance.

53 TCM Code Worked Example
[Figure: TCM encoder; input a1 drives a rate-1/2, 4-state convolutional code (registers S1, S2) producing c1, c2, while input a2 is left uncoded; outputs c1 … c3 select an 8-PSK point, or c1 … c4 a 16-QAM point]

54 State Table for TCM Code
Inputs Initial State Next State Outputs

55 Trellis Diagram of TCM Code
[Figure: four-state trellis (states 00, 10, 01, 11) with branches labelled by the subset sequences 0426, 2640, 1537, 3715; parallel transitions carry the uncoded bit]

56 Coding Gain over Uncoded QPSK Modulation
For the coded scheme dfree² = 4ε (the parallel-transition distance), while uncoded QPSK has dmin² = 2ε. Gain γ = 4ε/2ε = 2, or 3 dB, over uncoded QPSK.
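The 3 dB figure follows from the squared-distance ratio (a sketch assuming unit-energy constellations, dfree² = 4 for the coded scheme and dmin² = 2 for QPSK):

```python
import math

d_free_sq = 4.0   # parallel-transition distance in partitioned 8-PSK
d_min_sq = 2.0    # uncoded QPSK minimum squared distance (unit energy)
print(10 * math.log10(d_free_sq / d_min_sq))   # 3.01 dB coding gain
```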

57 TCM Problems
[Figure: problem encoders; inputs a1, a2 into an 8-PSK encoder producing outputs c1, c2, with uncoded bits c3, c4 mapped onto 16-QAM]

58 [Figure: the same TCM encoder drawn with a four-register (S1 … S4) convolutional code, outputs c1 … c4 mapped onto 16-QAM and 8-PSK]

59 The trellis-coded signal is formed as shown below, by encoding one bit using a rate
½ convolutional code, with three additional information bits left uncoded. Perform the set partitioning of a 32-QAM (cross) constellation and indicate the subsets in the partition. By how much is the distance between adjacent signal points increased as a result of partitioning? [Figure: encoder with inputs a1 … a4 and outputs c1 … c5]

60 TCM and Decoding The Viterbi algorithm is used, with soft decisions from the demodulator, for maximum-likelihood estimation of the transmitted sequence

61 Turbo Encoding / Decoding
By R. A. Carrasco, Professor in Mobile Communications, School of Electrical, Electronic and Computing Engineering, University of Newcastle-upon-Tyne. [1], pages 209 – 215

62 Overview: Introduction; The Turbo Encoder
Component encoders and their construction; tail bits; interleaving; puncturing. The Turbo Decoder; scaling

63 Introduction (cont'd): Results (AWGN results, performance) and conclusions

64 Concatenated Coding and Turbo Codes
[Figure: serially concatenated codes (input data, outer encoder, inner encoder) versus the parallel-concatenated (turbo) encoder: systematic bits plus parity bits #1 from encoder 1 and parity bits #2 from encoder 2 after the interleaver, multiplexed onto the channel; the serial decoder chains inner and outer decoders]
Convolutional codes: non-systematic convolutional (NSC) codes have no feedback paths; they act like a finite impulse response (FIR) digital filter; NSC codes do not lend themselves to parallel concatenation; at high SNR the BER performance of a classical NSC code is better than that of systematic convolutional codes of the same constraint length.

65 The Turbo Encoder
[Figure: component encoder #1 fed by dk produces the systematic bit sk = dk and parity pk1; the interleaver feeds component encoder #2, producing pk2]
Recursive systematic convolutional encoders in parallel concatenation, separated by a pseudo-random interleaver. The second systematic stream is an interleaved version of the first; since the interleaving process is known at the decoder, this stream is surplus to our needs and is not transmitted.

66 Component Encoders [7;5]8 RSC component encoder
[Figure: the [7;5]8 RSC component encoder (input dk, systematic output sk, parity pk, two D registers) with its 4-state trellis representation, and the [23;33]8 RSC component encoder with its 16-state trellis representation]

67 Systematic convolutional codes
Recursive Systematic Convolutional (RSC) codes can be generated from NSC codes by connecting one output of the encoder back to the input; at low SNR the BER performance of an RSC code is better than that of the NSC. The operation of the turbo encoder is as follows: the input data sequence is applied directly to encoder 1 and the interleaved version of the same input data sequence is applied to encoder 2. The systematic bits (i.e. the original message bits) and the two parity-check bit streams (generated by the two encoders) are multiplexed together to form the output of the encoder, as sketched below.
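A minimal sketch of this operation (one common realization of the [7;5]8 RSC component is assumed: feedback 1 + D + D², feedforward 1 + D²; the random permutation stands in for a designed interleaver):

```python
import random

def rsc_7_5(bits):
    """One common realization of the [7;5]_8 RSC component encoder.
    Returns the parity stream (the systematic stream is the input)."""
    d1 = d2 = 0
    parity = []
    for u in bits:
        a = u ^ d1 ^ d2            # feedback sum enters the register
        parity.append(a ^ d2)      # feedforward taps 1 and D^2
        d1, d2 = a, d1
    return parity

def turbo_encode(data, perm):
    """Rate-1/3 output: systematic bits, parity #1, and parity #2
    computed from the interleaved data (perm = interleaver pattern)."""
    return data, rsc_7_5(data), rsc_7_5([data[i] for i in perm])

data = [random.randint(0, 1) for _ in range(8)]
perm = random.sample(range(8), 8)
print(turbo_encode(data, perm))
```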

68 Turbo code interleavers
The novelty of the parallel-concatenated turbo encoder lies in the use of RSC codes and the introduction of an interleaver between the two encoders. The interleaver ensures that two permutations of the same input data are encoded, producing two different parity sequences. The effect of the interleaver is to tie errors that are easily made in one half of the turbo encoder to errors that are exceptionally unlikely to occur in the other half. This ensures robust performance in the event that the channel characteristics are not known, and is the reason why turbo codes perform better than traditional codes.

69 Turbo code interleavers (Cont’d)
The choice of interleaver is therefore key to the performance of a turbo coding system. Turbo code performance can be analysed in terms of the Hamming distance between the codewords. If the applied input sequence happens to terminate one of the encoders, it is unlikely that, once interleaved, the sequence will terminate the other, leading to a large Hamming distance from at least one of the two encoders. A pseudo-random interleaver is a good choice.

70 Interleaving Shannon showed that random codes with large frame lengths can achieve channel capacity. By their very nature, random codes are impossible to decode. Pseudo-random interleavers make turbo codes appear random while maintaining a decodable structure.

71 Interleaving cont'd Primary use: to increase the average codeword weight. Altering bit positions does not alter the data-word weight but can increase the codeword weight. Thus a low-weight convolutional output from encoder #1 does not imply a low-weight turbo output.
DATAWORD | CODEWORD | CODEWORD WEIGHT
01100 | … | 4
01010 | … | 5
10010 | … | 6

72 Interleavers An interleaver takes a given sequence of symbols and permutes their positions, arranging them in a different temporal order. The basic goal of an interleaver is to randomise the data sequence; it is used against burst errors. In general, data interleavers can be classified into block, convolutional, random and linear interleavers. Block interleaver: data are first written in row format into a permutation matrix, and then read in column format. A pseudo-random interleaver is a variation of a block interleaver in which data are stored in a register at positions that are selected, and later de-interleaved, pseudo-randomly. Convolutional interleavers are characterised by a shift of the data, usually applied in a fixed and cumulative way.

73 Example: Block interleaver
Data sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16. Read in (interleave) row-wise into a 4 × 4 matrix and read out column-wise for transmission: 1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16. At the receiver, read in (de-interleave) column-wise and read out row-wise, recovering 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16.
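The same example in code (write row-wise, read column-wise; for a square matrix a second pass de-interleaves):

```python
def block_interleave(data, rows, cols):
    """Write `data` row-wise into a rows x cols matrix, read column-wise."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

tx = block_interleave(list(range(1, 17)), 4, 4)
print(tx)                          # 1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16
print(block_interleave(tx, 4, 4))  # a second pass restores 1 .. 16
```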

74 Example: Pseudo - random interleaver
Data sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16. Read in (interleave) and read out by a random position pattern for transmission: 1 6 11 15 2 5 9 13 4 7 12 14 3 8 16 10. At the receiver, read in (de-interleave) and read out by the inverse of the random position pattern, recovering 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16.

75 Convolutional interleaver
[Figure: convolutional interleaver/de-interleaver pair; branch delay lines of L, 2L, …, (N−1)L symbols on the transmit side of the channel, mirrored in reverse order on the receive side]

76 Continued…
Interleave: input …, x0, x1, x2, x3, x4, x5, x6, …; the branch delay lines (0, 1, 2, 3 registers, marked D) produce the output …, x0, x−3, x−6, x−9, x4, x1, x−2, x−5, x8, x5, x2, x−1, …; each register corresponds to a delay of 4 symbols in this example. De-interleave: the mirrored delay lines (3, 2, 1, 0 registers) realign the streams across the channel, restoring the output …, x0, x1, x2, x3, x4, x5, x6, …
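A sketch of the commutated delay-line structure (4 branches assumed, branch b holding b one-symbol registers, so the time delay grows by 4 symbols per branch as in the example; None marks the initial register fill):

```python
from collections import deque

def convolutional_interleave(data, branches=4):
    """Symbol k passes through branch k mod `branches`; branch b delays
    it by b visits, i.e. b * branches symbol periods in time."""
    regs = [deque([None] * b) for b in range(branches)]
    out = []
    for k, x in enumerate(data):
        b = k % branches
        regs[b].append(x)
        out.append(regs[b].popleft())
    return out

print(convolutional_interleave(list(range(12))))
# [0, None, None, None, 4, 1, None, None, 8, 5, 2, None]
# matching the pattern x0, x-3, x-6, x-9, x4, x1, x-2, x-5, ...
```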

77 Puncturing High-rate codes are usually generated by a procedure known as puncturing. A change in the code rate to ½ can be achieved by puncturing the two parity sequences prior to the multiplexer: one bit is deleted from each parity output in turn, such that one parity bit remains for each data bit.

78 Puncturing Used to reduce code rate
Omits certain output bits according to a pre-arranged method. The standard method reduces the turbo codeword from rate k/n = 1/3 to rate k/n = 1/2. Puncturer input: systematic s1,1 s1,2 s1,3 s1,4, parity p1,1 p1,2 p1,3 p1,4 and p2,1 p2,2 p2,3 p2,4; puncturer output: s1,1 p1,1 s1,2 p2,2 s1,3 p1,3 s1,4 p2,4.
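The standard pattern in code (keep every systematic bit, alternate the parity bits between the two encoders):

```python
def puncture_to_rate_half(sys_bits, par1, par2):
    """Rate 1/3 -> 1/2: every systematic bit survives; parity bits are
    taken alternately from encoder 1 (even k) and encoder 2 (odd k)."""
    out = []
    for k in range(len(sys_bits)):
        out += [sys_bits[k], par1[k] if k % 2 == 0 else par2[k]]
    return out

print(puncture_to_rate_half(["s11", "s12", "s13", "s14"],
                            ["p11", "p12", "p13", "p14"],
                            ["p21", "p22", "p23", "p24"]))
# ['s11', 'p11', 's12', 'p22', 's13', 'p13', 's14', 'p24']
```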

79 Tail Bits Tail bits are added to the dataword so that the first component encoder's codeword terminates in the all-zero state. A look-up table is the most common method. [Figure: encoder trellis with states S0 … S3, showing the data bits followed by the tail bits]

80 Turbo decoding A key component of iterative (turbo) decoding is the soft-in, soft-out (SISO) decoder. [Figure: matched filter and sample-and-hold followed either by binary quantization (hard decisions into an error-control decoder) or by 8-level, 3-bit quantization (soft decisions into a combined soft-decision error-control decoder)] [1], pages

81 Soft decision [Figure: received-signal amplitude axis between digital 0 and digital 1, divided into multiple quantization levels]

82 Soft decision Hard decision vs. Soft decision
Hard (two-level) decisions versus soft (multi-level) decisions. Each soft decision contains not only information about the most likely transmitted symbol (000 to 011 indicating a likely 0; 100 to 111 indicating a likely 1) but also information about the confidence, or likelihood, which can be placed on this decision.

83 The log-likelihood ratio
It is based on the binary random variable u, which is −1 for logic 0 and +1 for logic 1. L(u) is the log-likelihood ratio for the binary random variable and is defined as L(u) = ln[P(u = +1)/P(u = −1)]. This is described as the 'soft' value of the binary random variable u: the sign of the value is the hard decision, while the magnitude represents the reliability of this decision.

84 The log-likelihood ratio
As L(u) increases towards +∞, the probability that u = +1 also increases; as L(u) decreases towards −∞, the probability that u = −1 increases. The conditional log-likelihood ratio L(u|y) is defined as L(u|y) = ln[P(u = +1 | y)/P(u = −1 | y)].

85 The log-likelihood ratio
The information u is mapped to the encoded bits x, which are received by the decoder as y, all with time index k. From this, the log-likelihood ratio for the system is L(xk | yk) = ln[P(xk = +1 | yk)/P(xk = −1 | yk)]. From Bayes' theorem, this is equivalent to L(xk | yk) = ln[p(yk | xk = +1)/p(yk | xk = −1)] + ln[P(xk = +1)/P(xk = −1)].

86 The log-likelihood ratio
Assuming the 'channel' to be flat fading with Gaussian noise, the Gaussian pdf is G(x) = (1/√(2πσ²))·exp(−(x − q)²/(2σ²)), with q representing the mean and σ² the variance.

87 The log-likelihood ratio
where Eb/N0 represents the signal-to-noise ratio per bit and a is the fading amplitude (a = 1 for a non-fading Gaussian channel).

88 The log-likelihood ratio
The log-likelihood ratio of xk given yk is L(xk | yk) = Lc·yk + L(xk), where Lc = 4a·Eb/N0 is the channel reliability. Therefore L(xk | yk) is the weighted received value.

89 Turbo Decoding: Scaling by Channel Reliability
Channel reliability Lc = 4·(Eb/N0)·(channel amplitude). The channel amplitude for AWGN is 1; for fading it varies, giving Lc = 4·(Eb/N0)·A. The corrupted, received codeword is scaled by Lc to give the scaled, corrupted, received codeword.
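As a small sketch of the scaling step (Eb/N0 given in dB is an assumed input format):

```python
def scale_by_channel_reliability(soft_values, ebno_db, amplitude=1.0):
    """Multiply each received soft value by Lc = 4 * a * Eb/N0
    (a = 1 for AWGN; a varies for a fading channel)."""
    lc = 4 * amplitude * 10 ** (ebno_db / 10)
    return [lc * y for y in soft_values]

print(scale_by_channel_reliability([0.8, -0.1, 0.3], ebno_db=2.0))
```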

90 Performance of Turbo Codes
[Figure: BER versus Eb/N0, from 10^0 down to 10^-6; curves for uncoded transmission and for turbo codes, with the Shannon limit marked] At a bit error rate of 10^-5, the turbo code is less than 0.5 dB from Shannon's theoretical limit.

91 Block diagram of Turbo Decoder
[Figure: block diagram of the turbo decoder; the noisy systematic bits and the noisy parity-check bits ε1 feed decoder stage 1; its output passes through the interleaver to decoder stage 2, which is also fed by the noisy parity-check bits ε2; the de-interleaved output is fed back to stage 1 and, after the hard-limiter, gives the decoded bits]

92 Turbo Decoder The figure shows the basic structure of the turbo decoder. It operates on noisy versions of the systematic bits and the two noisy versions of the parity bits, in two decoding stages, to produce an estimate of the original message bits. [Figure: stage 1 (BCJR) and stage 2 (BCJR) linked by the interleaver (I) and de-interleaver (D), with the 'input' subtracted from each stage's 'output' and a final hard limiter]

93 Turbo Decoding The BCJR algorithm is a soft-input, soft-output decoding algorithm with two recursions, one forward and the other backward. At stage 1, the BCJR algorithm uses the extrinsic information I2(x) added to the input (u). At the output of the decoder, the 'input' is subtracted from the 'output', and only the information generated by the 1st decoder is passed on as I1(x). For the first 'run', I2(x) is set to zero as there is no 'prior' information.

94 Turbo Decoding At stage 2, the BCJR algorithm uses the extrinsic information I1(x) added to the input (u). The input is first interleaved so that the data sequence matches the previously interleaved parity (ε2). The decoder output is then de-interleaved, and the decoder's 'input' is subtracted so that only decoder 2's information is passed on as I2(x). After this loop has repeated many times, the output of the 2nd decoder is hard-limited to form the output data, as the skeleton below shows.
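An assumption-heavy skeleton of the loop: bcjr_decode is a hypothetical SISO component (not implemented here) returning one total LLR per bit, and u is assumed already scaled by the channel reliability:

```python
def turbo_decode(u, eps1, eps2, perm, bcjr_decode, iters=8):
    """Iterative loop of slides 93-94. `bcjr_decode(sys, par, prior)`
    is an assumed SISO decoder; `perm` is the interleaver pattern."""
    n = len(u)
    inv = [0] * n
    for j, p in enumerate(perm):
        inv[p] = j                           # inverse interleaver
    I1, I2 = [0.0] * n, [0.0] * n            # no prior on the first run
    for _ in range(iters):
        L1 = bcjr_decode(u, eps1, prior=I2)
        I1 = [L1[i] - u[i] - I2[i] for i in range(n)]      # extrinsic only
        L2 = bcjr_decode([u[p] for p in perm], eps2,
                         prior=[I1[p] for p in perm])
        I2 = [L2[inv[i]] - u[i] - I1[i] for i in range(n)]  # de-interleaved
    return [1 if u[i] + I1[i] + I2[i] > 0 else 0 for i in range(n)]
```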

95 Turbo Decoding The first decoding stage uses the BCJR algorithm to produce a soft estimate of the systematic bit xj, expressed as the log-likelihood ratio L1(xj) = ln[P(xj = 1 | u, ε1)/P(xj = 0 | u, ε1)], where u is the set of noisy systematic bits and ε1 is the set of noisy parity-check bits generated by encoder 1.

96 Turbo Decoding I2(x) is the extrinsic information about the set of message bits x derived from the second decoding stage and fed back to the first stage. The total log-likelihood ratio at the output of the first decoding stage is therefore the sum of the channel (intrinsic) term, the prior I2(x) and the extrinsic information generated by the first stage.

97 Turbo Decoding The extrinsic information about the message bits derived from the first decoding stage is I1(x) = L1(x) − I2(x) − (channel term), where the channel term is as defined before. [Figure: SISO decoder; the soft input combines raw data (intrinsic information) with other (prior) information, and the soft output splits into intrinsic and extrinsic information] At the output of the SISO decoder, the 'input' is subtracted from the 'output', and only the reliability information generated by the decoder is passed on as extrinsic information to the next decoder.

98 Turbo Decoding The extrinsic information fed back to the first
decoding stage is therefore I2(x) = L2(x) − I1(x) − (channel term), where the channel term is as defined before and L2(x) is the log-likelihood ratio computed by the second stage.

99 Turbo Decoding An estimate of the message bits x is computed by
hard-limiting the log-likelihood ratio at the output of the second stage, as shown by x̂ = sign[L2(x)]. We set I2(x) = 0 on the first iteration of the algorithm.

100 Turbo Decoding: Serial to Parallel Conversion and Erasure Insertion
The received, corrupted codeword is returned to the original three bit streams; erasures (punctured positions) are replaced with a 'null' value. The serial input s1,1 p1,1 s1,2 p2,2 s1,3 p1,3 s1,4 p2,4 becomes: systematic s1,1 s1,2 s1,3 s1,4; parity 1: p1,1 null p1,3 null; parity 2: null p2,2 null p2,4.

101 Results over AWGN

102 Questions What is the MAP algorithm first of all? Who found it?
I have heard about the Viterbi algorithm and ML sequence estimation for decoding coded sequences. What is the essential difference between these two methods? But I hadn't heard about the MAP algorithm until recently (even though it was discovered in 1974). Why? What are SISO (Soft-Input Soft-Output) algorithms, first of all? Well! I am quite comfortable with the basics of SISO algorithms, but tell me one thing: why should a decoder output soft values? I presume there is no need for it to do that. How does the MAP algorithm work? Well then! Explain MAP as an algorithm (some flow charts or steps will do). Are there any simplified versions of the MAP algorithm? (The standard one involves a lot of multiplication and log business and requires a number of clock cycles to execute.) Is there any demo source code available for the MAP algorithm? References

103 Problem 1 Let rc1 = p/q1 and rc2 = p/q2 be the code rates of RSC encoders 1 and 2 in the turbo encoder of Figure 1. Determine the code rate of the turbo code. The turbo encoder of Figure 1 involves the use of two RSC encoders. (i) Generalise this encoder to encompass a total of M interleavers. (ii) Construct the block diagram of the turbo decoder that exploits the M sets of parity-check bits generated by such a generalisation. [Figure 1: input p passed through ENC 1 (parity q1) and, via an interleaver, through ENC 2 (parity q2)]

104 Problem 2 Consider the following generator matrices for rate-½ turbo codes: 4-state encoder, g(D) = …; 8-state encoder, g(D) = …. Construct the block diagram for each one of these RSC encoders. Construct the parity-check equation associated with each encoder.

105 Problem 3 Explain the principle of Non-systematic convolutional codes (NSC) and Recursive systematic convolutional codes (RSC) and make comparisons between the two Describe the operation of the turbo encoder Explain how important the interleaver process is to the performance of a turbo coding system

106 Problem 4 Describe the meaning of Hard decision and soft decision for the turbo decoder process Discuss the log-likelihood ratio principle for turbo decoding system Describe the iterative turbo decoding process Explain the operation of the soft-input-soft-output (SISO) decoder

107 Low Density Parity Check Codes: An Overview
By R. A. Carrasco, Professor in Mobile Communications, School of Electrical, Electronics and Computer Engineering, University of Newcastle-upon-Tyne. [1], pages 277 – 287

108 Outline What are LDPC Codes? Introduction and Background
Parity-check codes; message-passing algorithm; LDPC decoding process; sum-product algorithm; example of a rate-1/3 LDPC (2,3) code; construction of LDPC codes: protograph method, finite geometries, combinatorial design; results of LDPC codes constructed using the BIBD design

109 Parity Check Code A binary parity check code is a block code, i.e. a collection of binary vectors of fixed length n. The symbols in the code satisfy m parity-check equations of the form xa ⊕ xb ⊕ xc ⊕ … ⊕ xz = 0, where ⊕ means modulo-2 addition and xa, xb, xc, …, xz are the code symbols in the equation. Each codeword of length n contains k = n − m information digits and m check digits.

110 Example: Hamming Code with n=7, k=4, and m=3
For a codeword of the form c1, c2, c3, c4, c5, c6, c7, the equations are: c1 ⊕ c2 ⊕ c3 ⊕ c5 = 0; c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0; c1 ⊕ c3 ⊕ c4 ⊕ c7 = 0. The parity-check matrix for this code is then
H = [ 1 1 1 0 1 0 0 ;
      1 1 0 1 0 1 0 ;
      1 0 1 1 0 0 1 ]
Note that c1 is contained in all three equations while c2 is contained in only the first two equations.
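A syndrome check against this H (an all-zero syndrome confirms a valid codeword):

```python
H = [[1, 1, 1, 0, 1, 0, 0],   # c1+c2+c3+c5
     [1, 1, 0, 1, 0, 1, 0],   # c1+c2+c4+c6
     [1, 0, 1, 1, 0, 0, 1]]   # c1+c3+c4+c7

def syndrome(c):
    """Evaluate every parity-check equation modulo 2."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

print(syndrome([1, 0, 0, 1, 1, 0, 0]))   # [0, 0, 0]: all checks satisfied
```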

111 What are Low Density Parity Check Codes?
The percentage of 1's in the parity-check matrix for an LDPC code is low. A regular LDPC code has the property that every code digit is contained in the same number of equations and each equation contains the same number of code symbols. An irregular LDPC code relaxes these conditions.

112 Equations for Simple LDPC Code with n=12 and m=9
c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0; c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0; c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0; c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0; c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0; c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0; c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0; c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0; c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0

113 The Parity Check Matrix for the LDPC Code
With columns ordered c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12, each equation above gives one row of H:
H = [ 0 0 1 0 0 1 1 1 0 0 0 0 ;
      1 1 0 0 1 0 0 0 0 0 0 1 ;
      0 0 0 1 0 0 0 0 1 1 1 0 ;
      0 1 0 0 0 1 1 0 0 1 0 0 ;
      1 0 1 0 0 0 0 1 0 0 1 0 ;
      0 0 0 1 1 0 0 0 1 0 0 1 ;
      1 0 0 1 1 0 1 0 0 0 0 0 ;
      0 0 0 0 0 1 0 1 0 0 1 1 ;
      0 1 1 0 0 0 0 0 1 1 0 0 ]

114 Introduction LDPC codes were originally invented by Robert Gallager in the early 1960s but were largely ignored until they were rediscovered in the mid-1990s by MacKay. They are defined in terms of a parity-check matrix that has a small number of non-zero entries in each column, with randomly distributed non-zero entries; regular and irregular LDPC codes. The sum-product algorithm is used for decoding. A linear block code with a sparse (small fraction of ones) parity-check matrix; it has a natural representation in terms of a bipartite graph, with simple and efficient iterative decoding.

115 Introduction Review of parity check matrices:
Low Density Parity Check (LDPC) codes are a class of linear block codes characterised by sparse parity-check matrices H. Review of parity-check matrices: For an (n, k) code, H is an (n−k, n) matrix of ones and zeros. A codeword c is valid if cHT = s = 0. Each row of H specifies a parity-check equation: the code bits in positions where the row is one must sum (modulo 2) to zero. In an LDPC code, only a few bits (~4 to 6) participate in each parity-check equation. From the parity-check matrix we obtain the generator matrix G, which is used to generate the LDPC codeword: G·HT = 0. With the parity-check matrix arranged in systematic form as H = [Im | P], the generator matrix is G = [PT | Ik]. A codeword can be expressed as c = x·G.
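A toy numeric check of these relations (the 3 × 3 parity block P is an arbitrary example; H = [Im | P], G = [PT | Ik]):

```python
from itertools import product

P = [[1, 0, 1],                  # arbitrary m x k parity block
     [1, 1, 0],
     [0, 1, 1]]
m = k = 3
H = [[int(c == r) for c in range(m)] + P[r] for r in range(m)]
G = [[P[r][c] for r in range(m)] + [int(j == c) for j in range(k)]
     for c in range(k)]

def vec_mat_mod2(x, M):          # row vector times matrix, modulo 2
    return [sum(a * b for a, b in zip(x, col)) % 2 for col in zip(*M)]

HT = [list(row) for row in zip(*H)]
for x in product((0, 1), repeat=k):
    c = vec_mat_mod2(list(x), G)            # c = x G
    assert vec_mat_mod2(c, HT) == [0] * m   # c H^T = 0
print("every codeword satisfies cH^T = 0")
```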

116 Low Density Parity Check Codes
Representations of LDPC codes. (Soft) message passing: variable nodes communicate their reliabilities (log-likelihoods) to check nodes; check nodes decide which variables are not reliable and "suppress" their inputs. The number of edges in the graph equals the density of H; a sparse parity-check matrix means small complexity.

117 Parity Check Matrix to Tanner Graph

118 LDPC Codes Bipartite graph with connections defined by matrix H
Bipartite graph with connections defined by the matrix H. r′: variable nodes (the corrupted codeword); s: check nodes (the syndrome, which must be all zero for the decoder to claim no error). Given the syndrome and the statistics of r′, the LDPC decoder solves the equation r′HT = s in an iterative manner.

119 Construction of LDPC codes
Random LDPC codes: MacKay construction; computer-generated random construction. Structured LDPC codes: well-defined and structured codes, via algebraic and combinatoric construction; encoding advantage over random LDPC codes; perform equally as well as random codes. Examples: Vandermonde matrix (array codes), finite geometry, Balanced Incomplete Block Design, other methods (e.g. Ramanujan graphs).

120 Protograph Construction of LDPC codes by J. Thorpe
A protograph can be any Tanner graph with a relatively small number of nodes. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analysing the protograph. J. Thorpe, "Low-density parity-check (LDPC) codes constructed from protographs," IPN Progress Report 42-154, 2003.

121 Protograph Construction (Continued)

122 LDPC Decoding (Message passing Algorithm)
Decoding is accomplished by passing messages along the edges of the graph. The messages on the edges that connect to the i-th variable node ri are estimates of Pr[ri = 1] (or some equivalent information). Each variable node is furnished an initial estimate of this probability from the soft output of the channel. The variable node broadcasts this initial estimate to the check nodes on the edges connected to it. Each check node then makes new estimates for the bits involved in its parity equation and sends these new estimates (on the edges) back to the variable nodes.

123 Message passing Algorithm or Sum-Product Algorithm
while (stopping criterion not met) {
  all variable nodes pass messages to their corresponding check nodes;
  all check nodes pass messages to their corresponding variable nodes;
}
Stopping criteria: the equation r′HT = 0 is satisfied, or the maximum number of iterations is reached.

124 LDPC Decoding Process
[Figure: check-node processing (check node m gathers messages zmn from the variable nodes in N(m) and returns messages Lmn) and variable-node processing (variable node n combines the channel value Fn with the messages Lmn from the check nodes in M(n) to form zmn), where Fn is the channel reliability value used for the hard decision]

125 Sum-Product Algorithm
Step 1: Initialisation. Set Lij = Ri, the LLR of the (soft) received signal yi; for AWGN, Ri = 4yi·(Eb/N0), where j represents a check node and i a variable node. Step 2: Check-to-variable node messages. The extrinsic message from check node j to bit node i is Eji = ln[(1 + ∏ tanh(Lji′/2)) / (1 − ∏ tanh(Lji′/2))], the products taken over i′ ∈ Bj, i′ ≠ i, where Bj represents the set of column locations of the bits in the j-th parity-check equation and ln is the natural logarithm.

126 Sum-Product Algorithm (Continued)
Step 3: Codeword test (parity check). The combined LLR is the sum of the extrinsic LLRs and the original LLR calculated in Step 1: Li = Ri + Σj∈Ai Eji, where Ai is the set of row locations of the parity-check equations which check on the i-th bit of the code. A hard decision is made for each bit from the sign of Li.

127 Sum-Product Algorithm (Continued)
Conditions to stop further iterations: if r = [r1, r2, …, rn] is a valid codeword, it satisfies H·rT = 0; otherwise stop once the maximum number of iterations is reached. Step 4: Variable-to-check node messages. Variable node i sends an LLR message to check node j formed without using the information from check node j itself: Lij = Ri + Σj′∈Ai, j′≠j Ej′i. Return to Step 2.
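A compact decoder following Steps 1 to 4 (a sketch; the sign convention assumes positive LLR means bit 0, and the tanh clipping guards against infinite logarithms):

```python
import math

def sum_product_decode(H, llr, max_iters=20):
    """Sum-product decoding over parity-check matrix H (list of 0/1
    rows) given channel LLRs `llr` (positive = bit 0 more likely)."""
    m, n = len(H), len(llr)
    M = [[llr[i] if H[j][i] else 0.0 for i in range(n)] for j in range(m)]
    r = [1 if l < 0 else 0 for l in llr]
    for _ in range(max_iters):
        # Step 2: check-to-variable extrinsic messages
        E = [[0.0] * n for _ in range(m)]
        for j in range(m):
            cols = [i for i in range(n) if H[j][i]]
            for i in cols:
                prod = 1.0
                for i2 in cols:
                    if i2 != i:
                        prod *= math.tanh(M[j][i2] / 2.0)
                prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)
                E[j][i] = math.log((1 + prod) / (1 - prod))
        # Step 3: combined LLR, hard decision, parity test
        L = [llr[i] + sum(E[j][i] for j in range(m) if H[j][i])
             for i in range(n)]
        r = [1 if l < 0 else 0 for l in L]
        if all(sum(H[j][i] * r[i] for i in range(n)) % 2 == 0
               for j in range(m)):
            break                          # valid codeword: H r^T = 0
        # Step 4: variable-to-check messages exclude the target check
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    M[j][i] = L[i] - E[j][i]
    return r
```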

128 Example: A codeword [ ] is sent through an AWGN channel with Eb/N0 = 1.25, and the vector [ … ] is received. With the parity-check matrix given, R is computed as in Step 1. After a hard decision we find that 2 bits are in error, so we apply LDPC decoding to correct the errors.

129 Example (Continued) [Figure: message passing on the Tanner graph between the variable nodes and check nodes]

130 Initialisation

131 Questions 1. Suppose we have codeword c as follows:
where each symbol is either 0 or 1 and the codeword satisfies three parity-check equations. Determine the parity-check matrix H from these equations. Show the systematic form of H by applying Gauss-Jordan elimination. Determine the generator matrix G from H and prove G·HT = 0. Find the dimensions of H and G. State whether the matrix is regular or irregular.

132 Questions 2. The parity check matrix H of LDPC code is given below:
Determine the degrees of the rows and columns. State whether the LDPC code is regular or irregular. Determine the rate of the LDPC code. Draw the Tanner graph representation of this LDPC code. What would the code rate be if the number of rows equalled the number of columns? Write down the parity-check equations of the LDPC code.

133 Questions 3. Consider parity check matrix H generated in question 1,
Determine the message length k, the number of parity bits m, and the codeword length n. Use the generator matrix G obtained in Question 1 to generate all possible codewords c. 4. What is the difference between regular and irregular LDPC codes? What is the importance of cycles in the parity-check matrix? Identify the cycles of length 4 in the following Tanner graph. [Figure: Tanner graph with check nodes and variable nodes]

134 Solutions Question 1 a) The parity-check matrix H, call it matrix (A), follows directly from the three equations. Question 1 b) To obtain the desired diagonal, apply Gaussian elimination first: swap C3 with C4 to obtain the diagonal of 1's in the first 3 columns; then modulo-2 addition of R1 to R2 in matrix (A), and modulo-2 addition of R1 to R3 in matrix (A). The remaining 1 in position (row 1, column 2) still needs to be eliminated to achieve the identity matrix I.

135 Solutions Now we need an identity matrix of 3 × 3 dimensions. As you can see, the first 3 columns and rows become an identity matrix if we eliminate the 1 in position (row 1, column 2). To do that we apply Jordan elimination to find the parity-check matrix in systematic form: modulo-2 addition of R2 to R1 gives H in systematic form, represented as H = [I | P].

136 Solutions The generator matrix can be obtained by using H in the systematic form obtained above: G = [PT | I]. It can then be verified that G·HT = 0.

137 Solutions d) The dimension of H is (3 × 6) and G is also (3 × 6). e) The matrix is irregular because the numbers of 1's in the rows and columns are not constant: e.g. the number of 1's in the 1st row is 3 while the 3rd row has 4; similarly, the 1st column has 3 ones while the 2nd column has 2.
Question 2 a) The parity-check matrix H contains 6 ones in each row and 3 ones in each column. The degree of a row is the number of 1's in that row, here 6; similarly, the degree of a column is the number of 1's in that column, here 3. b) Regular LDPC, because the number of ones in each row and in each column is constant. c) Rate = 1 − m/n = 1 − 6/12 = ½.

138 Solutions d) Tanner graph is obtained by connecting the check and variable nodes as follows

139 Solutions e) If the number of rows were made equal to the number of columns (m = n), the code rate 1 − m/n would be 0: every bit would be a check bit and the code would carry no information bits. f) The parity-check equations of the LDPC code are

140 Solutions Question 3 a) message bits length k = 6-3 =3
parity bits length m = 3; codeword length n = 6. b) Since the information (message) is 3 bits long, the information bits have 8 possibilities, as shown in the table below.

141 Solutions The codeword is generated by multiplying information bits with the generator matrix as follows c = x G The Table below shows the code words generated by using G in question 1.

142 Solutions Question 4 a) A regular LDPC code has a constant number of 1's in the rows and columns of the parity-check matrix; an irregular LDPC code has a variable number of 1's in the rows and columns of the parity-check matrix. b) A cycle in a Tanner graph is a sequence of connected vertices that starts and ends at the same vertex, with every other vertex participating only once in the cycle. The length of the cycle is the number of edges it contains, and the girth of a graph is the size of its smallest cycle. A good LDPC code should have a large girth, so as to avoid short cycles in the Tanner graph, since these introduce an error floor. Avoiding short cycles has been shown to be effective in combating the error floor, so the design criteria of an LDPC code should remove most of the short cycles in the code.

143 Solutions c) Cycles of length 4 are identified as follows

144 Security in Mobile Systems
By Prof R A Carrasco School of Electrical, Electronic and Computing Engineering University of Newcastle-upon-Tyne

145 Security in Mobile Systems
Air interface encryption: provides security on the air interface (Mobile Station to Base Station). End-to-end encryption: provides security over the whole communication path (Mobile Station to Mobile Station).

146

147

148 Air Interface Encryption Protocols
Symmetric key: uses challenge-response protocols for authentication and key agreement. Asymmetric key: uses exchange and verification of 'certificates' for authentication and key agreement. Where each is used is described in the following slides.

149 Challenge Response Protocol
GSM: only authenticates the Mobile Station; the A3 and A8 algorithms are used. TETRA: authenticates both the Mobile Station and the network; the TA11, TA12, TA21 and TA22 algorithms are used. 3G: Authentication and Key Agreement (AKA).

150 3G Advantages: simpler than public-key techniques; less processing power required in the handset.
Disadvantages: the network has to maintain a database of the secret keys of all the mobile stations it supports; the secret key is never changed in normal operation; the network has to share secret keys with the MS.

151 Challenge-Response Protocol
1. MS sends its identity to the BS. 2. BS sends the received MS identity to the AC. 3. AC gets the corresponding key K from the database. 4. AC generates a random number called a challenge. 5. By hashing K and the challenge, the AC computes a signed response. 6. It also generates a session authentication key by hashing K and the challenge (with a different hashing function).

152 Challenge-response Protocol
7. AC sends the challenge, response and session key to the BS. 8. BS sends the challenge to the MS. 9. MS computes the response and the session authentication key. 10. MS sends the response to the BS. 11. If the two responses received by the BS from the AC and the MS are equal, the MS is authentic. 12. Now the MS and BS use the session key to encrypt the communication data between them, as sketched below.
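A hypothetical sketch of the exchange in Python, with HMAC-SHA256 standing in for the unspecified hash functions:

```python
import hashlib, hmac, os

def signed_response(k: bytes, challenge: bytes) -> bytes:
    return hmac.new(k, challenge, hashlib.sha256).digest()

def session_key(k: bytes, challenge: bytes) -> bytes:
    # step 6: a different hashing of K and the challenge
    return hmac.new(k, b"session|" + challenge, hashlib.sha256).digest()

K = os.urandom(16)                       # shared secret, from the AC database
challenge = os.urandom(16)               # step 4: AC generates a challenge
resp_ac = signed_response(K, challenge)  # step 5: computed at the AC
resp_ms = signed_response(K, challenge)  # step 9: computed at the MS
print(hmac.compare_digest(resp_ac, resp_ms))  # step 11: True -> authentic
ks = session_key(K, challenge)           # step 12: both sides derive KS
```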

153 Challenge-Response Protocol
[Figure: message flow between MS, BS and AC; the identity (i) is forwarded to the AC, which looks up key K in its database, generates the challenge, and returns the response and session key KS to the BS; the BS challenges the MS and compares the responses]

154

155 Challenge response protocol in GSM
1) MS sends its IMSI (International Mobile Subscriber Identity) to the VLR. 2) VLR sends the IMSI to the AC via the HLR. 3) AC looks up its database and gets the authentication key Ki using the IMSI. 4) The authentication centre generates RAND. 5) It combines Ki with RAND to produce SRES using the A3 algorithm. 6) It combines Ki with RAND to produce Kc using the A8 algorithm.

156 Challenge response protocol in GSM
7) The AC provides the HLR with a set of (RAND, SRES, Kc) triplets. 8) HLR sends one set to the VLR to authenticate the MS. 9) VLR sends RAND to the MS. 10) MS computes SRES and Kc using Ki and RAND. 11) MS sends SRES to the VLR. 12) VLR compares the two SRESs received from the HLR and the MS; if they are equal, the MS is authenticated. SRES = A3(Ki, RAND) and Kc = A8(Ki, RAND).

157

158 TETRA Protocol Protocol flow
1) MS sends its TMSI (TETRA Mobile Subscriber Identity, normally a temporary identity) to the BS. 2) BS sends the TMSI to the AC. 3) AC looks up its database and gets the authentication key K using the TMSI. 4) AC generates an 80-bit random seed (RS). 5) AC computes KS (the 128-bit session authentication key) using K and RS. 6) AC sends KS and RS to the BS. 7) BS generates an 80-bit random challenge called RAND1 and computes a 32-bit expected response called XRES1.

159 TETRA Protocol 8. BS sends RAND1 and RS to the MS.
9. MS computes KS using K and RS. 10. Then MS computes RES1 using KS and RAND1. 11. MS sends RES1 to the BS. 12. BS compares RES1 and XRES1; if they are equal, the MS is authenticated. 13. BS sends the result R1 of the comparison to the MS.

160

161 Authentication of the user
Protocol flow: 1) A random number called RS is chosen by the AC. 1a) AC uses K and RS to generate the session key KS, using the TA11 algorithm. 2) AC sends KS and RS to the base station. 3) BS generates a random number called RAND1. 4) BS computes the expected response XRES1 and the derived cipher key DCK1 using the TA12 algorithm.

162 Authentication of the user
5) BS sends RS and RAND1 to the MS. 6) MS, using its own key K and RS, computes KS (the session key) using TA11, and also uses TA12 to compute RES1 and DCK1. 7) MS sends RES1 to the BS. 8) BS compares XRES1 with RES1.

163

164

165

166 Comparison Challenge response Protocol Advantages
Simpler than public-key techniques; less processing power required in the handset. Disadvantages: the network has to maintain a database of the secret keys of all the mobile stations it supports.

167 Comparison Public key Protocol Advantages
The network does not have to share secret keys with the MS; the network does not have to maintain a database of the mobiles' secret keys. Disadvantages: requires high processing power in the mobile handsets to carry out the complex computations in real time.

168 Hybrid Protocol Combines the challenge-response protocol with a public-key scheme. Here the AC also acts as the certification authority. The AC has a public key PAC and a private key SAC for a public-key scheme; the MS likewise has a public key pi and a private key si.

169

170 End to End Encryption Requirements
Authentication; key management; encryption; synchronisation for multimedia (e.g. video)

171 Secret key Methods Advantages
Less complex than public-key methods; less processing power required for implementation; higher encryption rates than public-key techniques. Disadvantages: keys are difficult to manage.

172 Public key Methods Advantages Easy to manage keys
Capable of providing digital signatures. Disadvantages: more complex and time-consuming computations; not suitable for bulk encryption of user data.

173 Combined Secret-key and Public-key Systems.
Encryption and Decryption of User Data (Private key technique) Session key Distribution (Public key technique) Authentication (Public key technique)

174 Possible Implementation
Combined RSA and DES: encryption and decryption of data (DES); session key distribution (RSA); authentication (RSA and the MD5 hash function). Combined Rabin's Modular Square Root (RMSR) and SAFER. A sketch of the combined approach follows.
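A sketch of the hybrid pattern using modern stand-ins (Fernet/AES for the symmetric part instead of DES, RSA-OAEP for key transport; requires the `cryptography` package):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

receiver_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

session_key = Fernet.generate_key()                    # symmetric session key
ciphertext = Fernet(session_key).encrypt(b"user data") # bulk data encryption
wrapped = receiver_priv.public_key().encrypt(session_key, oaep)  # key transport

unwrapped = receiver_priv.decrypt(wrapped, oaep)  # receiver unwraps the key
print(Fernet(unwrapped).decrypt(ciphertext))      # b'user data'
```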

175

176 One Way Hashing Functions
A one-way hash function H(M) operates on an arbitrary-length pre-image M and returns a fixed-length value h = H(M), where h is of length m. The pre-image should contain some kind of binary representation of the length of the entire message. This technique overcomes a potential security problem resulting from messages of different lengths possibly hashing to the same value (MD-strengthening).

177 Characteristic of hash functions
Given M, it is easy to compute h. Given h, it is hard to compute M such that H(M) = h. Given M, it is harder still to find another message M′ such that H(M′) = H(M). MD5 hashing algorithm: a variable-length message is hashed to a 128-bit message digest.
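These properties are easy to demonstrate with Python's hashlib (MD5 shown for consistency with the slide, though it is no longer considered collision-resistant):

```python
import hashlib

h = hashlib.md5(b"arbitrary-length pre-image M").hexdigest()
print(h, len(h) * 4, "bits")        # a fixed 128-bit digest

# A one-bit change in M yields an unrelated digest:
print(hashlib.md5(b"arbitrary-length pre-image N").hexdigest())
```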

178 Questions Describe the challenge-response protocol for authentication and key agreement. Describe the challenge-response protocol for GSM. Describe the TETRA protocol for authentication and key agreement. Describe the authentication of the user in mobile communications and networking. Describe the end-to-end encryption requirements, secret-key methods, public-key methods and possible implementations. Explain the principle of public/private key encryption; how can such encryption schemes be used to authenticate a message and check integrity? Describe different types of data encryption standards.

179

180

181

