Turbo Codes Colin O’Flynn Dalhousie University


1 Turbo Codes Colin O’Flynn Dalhousie University
Last Update of This Presentation: Thursday, Jan 26, 2012

2 Handy References @Dal
Trellis and turbo coding (Schlegel, Christian B.; Pérez, Lance C.)
Codes and turbo codes (Claude Berrou)
Turbo code applications: a journey from a paper to realization (Keattisak Sripimanwat)
Note: You need to be on the Dalhousie network or using EZProxy to access these resources online.

3 Handy References @Dal
The design and application of RZ turbo codes (Xin Liao)
Turbo coding, turbo equalisation, and space-time coding: exit-chart aided near-capacity designs for wireless channels (L. Hanzo et al.)
Note: You need to be on the Dalhousie network or using EZProxy to access these resources online.

4 Handy References
A Turbo Code Tutorial (William E. Ryan)
Liang Li's Nine Month Report & MATLAB Code (Liang Li, Rob Maunder)
"A Turbo Code Tutorial" even includes pseudo-code for implementing the turbo decoder.
Chapters 4, 9, and 15 of Turbo coding, turbo equalisation, and space-time coding: exit-chart aided near-capacity designs for wireless channels

5 Notes If You Are Viewing Online
Be sure to enable presentation notes, as there is some additional information there. I'm using the following acronyms to reference where you can find additional information:
CATC: Codes and Turbo Coding, 1st Edition, Berrou
TATC: Trellis and Turbo Coding, 1st Edition, Schlegel
TCT: Turbo Code Tutorial, Ryan
TCTEASTC: Turbo Coding, Turbo Equalization, and Space-Time Coding

6 Which Reference to Use?
Trellis and Turbo Coding: a very in-depth guide to turbo codes, including mathematical derivation. Covers sufficient background to make the book a stand-alone reference, and covers topics I didn't find in other books, such as the derivation of free distance for turbo codes.
Codes and Turbo Coding: less mathematical derivation by comparison, but contains some examples which are easier to follow. The author was also on the team that invented turbo codes.
Turbo Code Applications: covers how the codes are used in real systems, and also the history of the discovery in more depth than the other books.
Turbo coding, turbo equalisation, and space-time coding: exit-chart aided near-capacity designs for wireless channels: has, in my opinion, the easiest-to-understand description of how turbo decoding works. Includes a complete example of the decoding process, with intermediate values and the internal operation of the soft-output decoders. Also includes a substantial number of charts showing the difference in performance as the turbo parameters change, which is fairly interesting. You can get a few chapters of this book online (see previous slides for link).

7 Background, hand-waving, and all that stuff

8 The Beginning
We wish to have the biggest possible difference between valid code-words (the Minimum Hamming Distance). Unfortunately, this means the complexity of decoding increases; to get very good performance would require too much computational power.

9 They did try to make big decoders too! The grand-daddy of them all is the Big Viterbi Decoder at NASA, which used a 14-bit state register. Cost, size, and power are all a little large to fit inside your cell phone though...
"With 52 FPGAs for the processors, 52 FPGAs for the switches, seven FPGAs for the memory controllers and one FPGA to control the whole system, a total of 112 FPGAs are required. In total, the cost of the RACER II is approximately $200,000, which is 10 times cheaper than the BVD at the same decoding rate and without the need for custom ASICs"

10 The Beginning Large MHD Code Small MHD Code What we can have What we want “It's not the quantity that counts — it's the quality” WRONG

11 The Beginning Small MHD Code Large MHD Code Small MHD Code Small MHD Code Maybe we can get the large MHD by using several of the small ones…

12 Concatenated Codes: Serial
Permutation Outer Encoder Inner Encoder Permutation Inner Decoder Outer Decoder

13 Concatenated Codes: Parallel
Systematic Part of Codeword / Encoder 1 / Redundant Part of Codeword from Enc 1 / Permutation / Encoder 2 / Redundant Part of Codeword from Enc 2. This is a two-dimensional encoder; you can have higher dimensions.

14 Concatenated Codes: Comparison Shopping
Parallel: systematic ONLY; at least one encoder should be recursive, and you need to be careful with code choice. MHD normally worse than serial.
Serial: systematic or non-systematic; pretty indifferent to code choice. MHD normally better compared to parallel.

15 Example: Parallel
Using Hamming (7,4) as base. (Diagram: Data, Parity from Code 1, Parity from Code 2.)
You can look up the parity tables for Hamming (7,4) codes on e.g. Wikipedia, so you can see where I get the parity values from.

16 Example: Parallel Using Hamming (7,4) as base: Math Stuff
Rate = 16/40 = 4/10
For an input of weight 1, output weight = 5
Asymptotic Gain = 10log(R*d) = 10log(4 * 5 / 10) = 3.01 dB
Compare with the base code: 10log(4 * 3 / 7) = 2.34 dB
Be sure to view this slide in presentation mode to see the amazing animation. This shows you how parallel concatenation works: each code basically doesn't care about the other. The permutation is done by reading the first code row-wise, and the second code column-wise. (Diagram: Data, Parity from Code 1, Parity from Code 2.)

17 Example: Serial Using Hamming (7,4) as base: Math Stuff Rate = 16/49
Rate = 16/49
For an input of weight 1, output weight = 9
Asymptotic Gain = 10log(R*d) = 10log(16 * 9 / 49) = 4.68 dB
Compare with the base code: 10log(4 * 3 / 7) = 2.34 dB
Be sure to view this slide in presentation mode to see the amazing animation. Serial concatenation encodes over top of the previous encoder's output, which gives you a lower rate. (Diagram: Data, Parity from Code 1, Parity from Code 2.)
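The asymptotic-gain arithmetic on these two slides can be checked with a short script. This is a Python sketch of the 10log(R*d) formula from the slides; the function name is my own:

```python
import math

def asymptotic_gain_db(rate, dmin):
    """Asymptotic coding gain in dB: 10*log10(R * d_min)."""
    return 10.0 * math.log10(rate * dmin)

base_hamming = asymptotic_gain_db(4 / 7, 3)    # base Hamming (7,4): ~2.34 dB
parallel     = asymptotic_gain_db(16 / 40, 5)  # parallel example:   ~3.01 dB
serial       = asymptotic_gain_db(16 / 49, 9)  # serial example:     ~4.68 dB
```

The serial scheme wins on gain exactly because its larger minimum distance (9 vs. 5) outweighs its lower rate (16/49 vs. 16/40).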

18 Concatenated Codes: Conclusions
As promised, serial concatenation had better MHD but a worse rate. Parallel codes can work well, but should use Recursive Systematic Convolutional (RSC) codes; notice the poor performance with the Hamming code here. Note I prove later WHY RSC is so good in parallel concatenated codes.

19 Tag Team (Iterative) Decoding
Everybody loves wrestling photos. Eat this, Error-Man!

20 Iterative Example V S D A L T H E N P O G I animate Greek Sticks
1 2 3 4 5 animate Greek Sticks Conflicts slow oral representation projection force rope V S D A L T H E N P O G I i ii iii iv v This example from: Codes and Turbo Codes by Claude Berrou

21 Iterative Example - Across
1 2 3 4 5 animate Greek Sticks Conflicts slow oral representation projection force rope V S D A L T H E C N P O G I i ii iii iv v This example from: Codes and Turbo Codes by Claude Berrou

22 Iterative Example - Down
1 2 3 4 5 animate Greek Sticks Conflicts slow oral representation projection force rope V S T A L H E G C N P O i ii iii iv v This example from: Codes and Turbo Codes by Claude Berrou

23 Iterative Example - Across
1 2 3 4 5 animate Greek Sticks Conflicts slow oral representation projection force rope V I T A L H E G C N S O i ii iii iv v Agon = fight/struggle in Greek; Lento = slow in Spanish This example from: Codes and Turbo Codes by Claude Berrou

24 Iterative Example - Down
1 2 3 4 5 animate Greek Sticks Conflicts slow oral representation projection force rope V I T A L O M E G C N S i ii iii iv v Tenon = A projecting piece of wood made for insertion into a mortise in another piece This example from: Codes and Turbo Codes by Claude Berrou

25 Iterative Example - Across
1 2 3 4 5 animate Greek Sticks Conflicts slow oral representation projection force rope V I T A L O M E G C N S i ii iii iv v This example from: Codes and Turbo Codes by Claude Berrou

26 Turbo Encoder – General Format
Systematic Part of Codeword / Encoder 1 / Redundant Part of Codeword from Enc 1 / Permutation / Puncturing (Optional) / Encoder 2 / Redundant Part of Codeword from Enc 2. So this is what the turbo encoder looks like. The puncturing just throws out some bits; this increases your code rate, which would otherwise be 1/3.

27 Turbo Encoder - Permutation
WIRELESS CHANNEL
WHY do we usually do this permutation? It's to increase convolutional codes' resistance to blocks of errors, which they don't deal with too well.

28 Turbo Encoder - Permutation
WIRELESS CHANNEL

29 Turbo Encoder - Permutation
Assumption: Nature Hates Us (in all fairness, we started it).

30 Turbo Encoder - Permutation
WIRELESS CHANNEL
Now what happens if nature corrupts bits such that they all end up in one packet? We would have been better off not permuting the input sequence. Damn you, nature! But turbo codes have an advantage here: since only one of the sequences is permuted, the errors get spread out in either case. This allows us to get much closer to the Shannon limit by forcing the errors to 'play nicely', giving the code the best chance of correcting them.
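To see the spreading effect concretely, here is a minimal Python sketch of a rows-by-columns block interleaver (written row-wise, read column-wise, as in the parallel Hamming example earlier). The function names are my own, not from the code distribution:

```python
def block_interleave(bits, rows, cols):
    """Write row-wise into a rows x cols array, read it out column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    out = [None] * (rows * cols)
    it = iter(bits)
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = next(it)
    return out

# send 24 'bits' (labelled by position so we can track them)
n, rows, cols = 24, 4, 6
sent = block_interleave(list(range(n)), rows, cols)

# the channel corrupts a burst: interleaved positions 8..11
received = [(-1 if 8 <= i < 12 else b) for i, b in enumerate(sent)]

restored = block_deinterleave(received, rows, cols)
error_positions = [i for i, b in enumerate(restored) if b == -1]
# after deinterleaving, the burst is spread out every `cols` positions
```

A burst of four adjacent channel errors lands six positions apart after deinterleaving, which is exactly what a convolutional decoder prefers.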

31 Turbo Encoder - Termination
1. Do Nothing: decreases the asymptotic gain (I'll give a proof later about why this is bad; hold on). Convolutional codes are normally terminated, i.e. brought back to the all-zeros state. You can terminate the first encoder easily enough, but the 2nd is trickier, since the interleaver will ruin your termination sequence. So we have some ways of dealing with this. The first is to do nothing, and not terminate ANY encoder. This isn't such a good idea (I'll show you later).

32 Turbo Encoder - Termination
2. Terminate the trellis of one or both encoders, outside of the permutation. Systematic Part of Codeword / Encoder 1 / Message / Redundant Part of Codeword from Enc 1 / Permutation / Puncturing (Optional) / Encoder 2 / Redundant Part of Codeword from Enc 2. The second option is to terminate just the first encoder, and allow the 2nd to finish in any state. This actually isn't nearly as bad as leaving both of them unterminated. You can also terminate both encoders OUTSIDE of the interleaver, i.e. the termination sequence is not interleaved. This isn't so great, because you don't get any spreading of the termination bits. Simulations have shown that for larger block sizes you lose a minimal amount of performance by not terminating the 2nd encoder. For some applications the simplicity of this approach outweighs the tiny performance penalty.

33 Unterminated Loss N=1024, only 1st encoder terminated
Here is an example comparing what happens when you don't terminate both encoders: for longer block sizes it's not a big deal at all. (Curves: N=1024, only 1st encoder terminated vs. N=1024, both encoders terminated.) From "Illuminating the Structure of Code and Decoder of Parallel Concatenated Recursive Systematic (Turbo) Codes" by Patrick Robertson.

34 Turbo Encoder - Termination
3. Use the interleaver to terminate the trellis based on the input sequence to the first encoder. Here you get clever: you use the interleaver in such a way that the termination of one encoder is transformed into a valid termination sequence for the 2nd encoder. For an example of this, see the paper "Trellis Termination in Turbo Codes with Full Feedback RSC Encoders" by Xin Liao, Jacek Ilow, and Ali Al-Shaikhi.

35 Turbo Encoder - Termination
4. Circular Encoding ("Tail Biting"). I'm not even going to talk about this, to be honest. See page 138 of TATC or page 194 of CATC.

36 A Little More Rigor

37 Why is RSC So Good? Convolution Code (Not Systematic, Not Recursive):
Here is a normal convolutional code. If we put a single-weight input in (e.g. a single 1 anywhere), the result is a finite-weight output, with a maximum possible weight of 2x the highest degree of the generator polynomial. Note: an input of weight 1 results in a finite-weight output. TATC pp291, TCT pp1

38 Why is RSC So Good? Recursive Systematic Convolution Code:
Here is a recursive systematic convolutional code. Note that because of its recursive nature, an input of weight 1 will result in an output of infinite weight. In real systems the weight is bounded, because the input is of limited length and has trailing bits to terminate the trellis; but the point is that a weight-1 input results in a large-weight output. Why does that matter? The RSC will also have codewords with low weights, so why do we care that a weight-1 input maps to something high, compared to, say, a weight-48 input mapping to a codeword with a low weight? The answer is in the parallel concatenated structure. Note: an input of weight 1 results in an infinite-weight output.
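The finite- vs. infinite-weight behaviour can be demonstrated with a toy pair of rate-1/2, memory-2 parity generators. This is a hypothetical Python sketch; the generator polynomials (1+D+D^2 and 1+D^2) are my own choice for illustration, not necessarily the codes on the slides:

```python
def conv_encode(bits, recursive):
    """Rate-1/2, memory-2 parity generator.

    Non-recursive: parity polynomial 1 + D + D^2.
    Recursive systematic: feedback 1 + D + D^2, feed-forward 1 + D^2,
    i.e. generator (1, (1 + D^2)/(1 + D + D^2)).
    """
    s1 = s2 = 0                       # shift-register state
    parity = []
    for u in bits:
        if recursive:
            a = u ^ s1 ^ s2           # feedback sum
            p = a ^ s2                # feed-forward taps 1 and D^2
            s1, s2 = a, s1
        else:
            p = u ^ s1 ^ s2           # plain 1 + D + D^2 output
            s1, s2 = u, s1
        parity.append(p)
    return parity

N = 100
impulse = [1] + [0] * (N - 1)                          # weight-1 input
w_nonrec = sum(conv_encode(impulse, recursive=False))  # dies out after the memory flushes
w_rsc = sum(conv_encode(impulse, recursive=True))      # feedback keeps the state alive
```

For the non-recursive encoder the impulse response dies out (parity weight 3 here, regardless of N); for the recursive one the feedback never lets the register return to zero, so the parity weight keeps growing with the block length.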

39 Example with Non-Recursive
Systematic Part of Codeword / Encoder 1 / Redundant Part of Codeword from Enc 1 / Permutation / Encoder 2 / Redundant Part of Codeword from Enc 2. Let's say these are both non-recursive encoders. The input happens to have a weight of 1, which for the first encoder produces a codeword of low weight. After the permutation the input still has a weight of 1; it doesn't matter what the permutation does. So the second encoder also produces a low-weight codeword, and the end result is fairly poor. This happens for every single-weight input, which is why non-recursive codes perform so poorly in parallel concatenation.

40 Example with RSC Systematic Part of Codeword Encoder 1
Systematic Part of Codeword / Encoder 1 / Redundant Part of Codeword from Enc 1 / Permutation / Encoder 2 / Redundant Part of Codeword from Enc 2. Now what happens when they are RSC? First, single-weight input sequences don't result in low-weight codewords. But say you give encoder 1 an input sequence that produces a low-weight codeword. The permutation will almost certainly change that sequence into another input sequence, which for encoder 2 will produce a codeword of totally different weight.

41 Permutation - Linear Encoder 1 dFree Permutation Encoder 2 dFree
Say this is the input to the first encoder, and this sequence happens to generate the minimum-distance codeword dfree. Well, if the permutation is a linear transformation where rows & columns are interchanged, which could be seen as equivalent to rotating it 90 degrees, the result after the transformation is the same! TATC pp296-7

42 Permutation - Random Encoder 1 dFree Permutation Encoder 2 dFree
This is why a random permutation is useful: it changes the bits around such that it is very unlikely to map two information sequences so that both generate the same dFree. This also brings up something I talked about earlier: why do we terminate the first encoder?

43 Why Terminate Encoder 1? 0 0 0 … 0 0 1
If NOT terminated, this is a valid input: all zeros followed by a single 1. Because the trellis is not terminated (returned to all zeros), the coded output will be a single 1 as well! So the total code weight is 2 at best, which is terrible. Even worse, some permutations may not change this input sequence, because they happen not to move the single bit that matters, even while changing every other bit. This is very bad indeed, as now both encoders output a low-weight codeword. This means your minimum distance will be very small; if you have puncturing, it could be as low as 2. By terminating the trellis, we force some more 1's to be output, increasing our code weight. At least by terminating the first encoder we avoid the case of two very low-weight codewords occurring, and beef up our minimum distance.

44 Analyzing Performance

45 What does Turbo Code BER Look Like?
Alright, here is a turbo code BER example, for an N=1000 code. The region in the middle is called the 'turbo cliff', because the BER changes quickly there: a change of 1 dB is the difference between almost a 1/10 chance of error and less than a 1/10000 chance of error. Codes with longer interleavers have even steeper cliffs; a fraction of a dB change in SNR can be enough to go over the turbo cliff. I'm showing you this so you have some idea what real curves look like, as next I'm going to be talking about the free-distance bound. Turbo Cliff

46 Here is an even steeper cliff: a change of about 0.5 dB makes the difference between a 1/10 chance of error and a far smaller one.

47 Bit Error Rate Bound of Finite-Length Convolution Code
Convolutional codes have infinite length, but I want to break them into blocks, and present this equation to give you the bound on the BER for a finite-length convolutional code. You can see this is the sum of applying the Q function over every possible input sequence and resulting codeword. Note that this quickly becomes impossible for large block sizes: a 512-bit block would mean 2^512 (about 1.34 x 10^154) additions. Considering the universe is only about 4.3 x 10^17 seconds old, it quickly becomes apparent you cannot do that... TATC 10.3 / 10.4: pp290-7

48 Distance Spectrum Representation of Bound
So assume instead you were able to get the distance spectrum. Here we have information about how many times each distance occurred, so we can rearrange the previous summation to sum over all possible free distances. The derivation proves that (as you might guess) it's the low distances that matter; the higher distances don't contribute much. This lets you do a much more reasonable summation... but even that isn't too important, as turbo codes let you simplify further: TATC 10.3 / 10.4: pp290-7
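As a reference, the rearranged bound described here (and implemented by the printBERContribution MATLAB function on a later slide) can be written out as follows, using N_d for the number of codewords at distance d, w_d for their total information weight, N for the interleaver size, and R for the code rate:

```latex
P_b \;\lesssim\; \sum_{d \ge d_{\mathrm{free}}} \frac{N_d\, w_d}{N}\;
    Q\!\left(\sqrt{2\, d\, R\, \frac{E_b}{N_0}}\right)
```

Each term of this sum is exactly the `((Nd(i) * wd(i)) / N) * qfunc(sqrt(d(i) * 2 * R * ebno))` expression in the MATLAB code.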

49 Free Distance of Turbo Code
For the turbo code, the interleaver ends up mapping low-weight codewords of the first encoder to high-weight codewords of the second encoder. We thus assume the code is 'sparse' in low-weight codewords, and take only the free-distance term as the asymptote. This asymptote is ONLY valid for medium or high SNRs, beyond the 'turbo cliff'. It does not predict performance at lower SNRs, and indeed a code which is asymptotically worse may work better at the SNR your system is operating at. So don't get too hung up on this (or any one) performance evaluation! Anyway, let's look at several examples of how the free distance changes. (Note: the full derivation is given in TATC in section 10.3, with additional information in section 10.4.) TATC 10.3 / 10.4: pp290-7

50 Plotting Distance Spectrum
function printBERContribution( d, Nd, wd, N, R )
%PRINTBERCONTRIBUTION Print contribution from distance spectrum components.
%   d, Nd, wd are arrays of numbers, each index corresponding to one spectral
%   component. N is interleaver size. R is rate.
close all;

%SNR range in dB
SNR_range = 0:0.01:2;
ebno = 10 .^ (SNR_range ./ 10);

%Row i+1 holds the bound including the first i spectral components
ber = zeros(length(d)+1, length(ebno));

linetypes = {'b', 'r--', 'b--', 'r:', 'b:', 'r-.', 'b-.'};
leg = cell(1, length(d));

for i = 1:length(d)
    ber(i+1,:) = ber(i,:) + ((Nd(i) * wd(i)) / N) * qfunc(sqrt(d(i) * 2 * R * ebno));
    semilogy(SNR_range, ber(i+1,:), linetypes{i});
    hold on
    leg{i} = sprintf('d = %d', d(i));
end
legend(leg);

This MATLAB code will plot the value of the summation, so you can see how the line moves from only having the dfree contribution.

51 Distance Spectrum Examples
The following were plotted with Roger Garello's algorithm & software. Based on 'example 2' available from: Full tutorial for plotting given in Part 2.
Specifications:
- No puncturing (e.g. rate = 1/3)
- Block length = 1000

52 Distance Spectrum (examples)
Linear Interleaver, N=1000
d    Nd   Wd
11   1    1
21   1    2
24   1    1
Alright, here is the easiest possible interleaver: a linear interleaver created by exchanging rows for columns. You can see from the graph how uniform it is, so it has great spreading, but poor randomness. The resulting dfree = 11.
>> inter = CreateLinearInterleaver(1000,25,40);
>> writePerm(inter);
turbo.exe
>> plot(0:999, inter, '.')

53 Distance Spectrum Contribution
>> d = [11 21 24];
>> Nd = [1 1 1];
>> wd = [1 2 1];
>> printBERContribution(d, Nd, wd, 1000, 1/3);

54 Distance Spectrum (examples)
Linear Interleaver, N=1000
d Nd Wd 11 1 14 2 15 18 3
Here is another linear example; this time I've changed around the size of the matrix.
>> inter = CreateLinearInterleaver(1000,100,10);
>> writePerm(inter);
turbo.exe
>> plot(0:999, inter, '.')

55 Distance Spectrum (examples)
Random Interleaver, N=1000
d Nd Wd 14 2 4 16 1 18 3 6 19
Alright, so now we use a purely random interleaver. A little better: it's random, but it doesn't have very good spreading. Note how many points are very close together.
>> inter = CreateRandomInterleaver(1000);
>> writePerm(inter);
turbo.exe
>> plot(0:999, inter, '.')

56 Distance Spectrum (examples)
Another Random Interleaver, N=1000
d Nd Wd 14 1 2 15 18 5 10 19 3
Every interleaver (e.g. for every frame) will be slightly different. Some might have better or worse performance, so ideally you would want to average over a bunch of samples. Since the TX & RX interleavers must be synchronized, 'random' really means some pseudo-random sequence seeded with a known value.
>> inter = randperm(1000) - 1;
>> writePerm(inter)
turbo.exe
>> plot(0:999, inter, '.')
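As a sketch of that last point, both ends can regenerate the same 'random' interleaver from a shared seed. This Python illustration is my own; a real system would use a hardware-friendly generator, but the principle is the same:

```python
import random

def make_interleaver(n, seed):
    """Pseudo-random permutation of 0..n-1, reproducible from the seed."""
    perm = list(range(n))
    random.Random(seed).shuffle(perm)   # local PRNG, independent of global state
    return perm

# TX and RX only need to agree on the seed, not exchange the permutation table
tx_perm = make_interleaver(1000, seed=1234)
rx_perm = make_interleaver(1000, seed=1234)
assert tx_perm == rx_perm
```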

57 Distance Spectrum Contribution
>> Nd=[ ]; >> wd = [ ]; >> printBERContribution(d, Nd, wd, 1000, 1/3);

58 Distance Spectrum (examples)
S-Random Interleaver, N=1000, S=9
d Nd Wd 18 4 8 21 1 3 22 24 2
So we also have the 'Spread Random' or 'S-Random' interleaver. This starts with random data, but spreads the points out a bit. Note how they aren't as clumped together as with the purely random interleaver.
>> inter = CreateSRandomInterleaver(1000, 9);
>> writePerm(inter)
turbo.exe
>> plot(0:999, inter, '.')

59 Distance Spectrum Contribution
>> Nd = [ ]; >> wd = [ ]; >> printBERContribution(d, Nd, wd, 1000, 1/3);

60 Distance Spectrum (examples)
S-Random Interleaver, N=1000, S=16
d Nd Wd 20 1 22 5 10 25 3 26 8 16
Here we have S=16, which means the spreading factor is larger. Larger spreading factors mean more work in the interleaver, which means slower simulations. If the spreading factor is set too large, the interleaver construction will never converge, and your simulation (or real turbo code design) will stall.
>> inter = CreateSRandomInterleaver(1000, 16);
>> writePerm(inter)
turbo.exe
>> plot(0:999, inter, '.')

61 Free Distance Asymptote vs. BER
>> startup
>> CmlSimulate('TurboTests', [8])
(wait a while; you can end early with Ctrl-C if you don't need higher SNRs)
>> CmlPlot('TurboTests', [8])
>> close([2 3 4])
>> figure(1)
>> hold on
>> printBERContribution([18], [4], [8], 1000, 1/3)
Now here is an example of how the free distance asymptote compares to the actual performance. You can see that the asymptote predicts the performance only past a certain level, which is why we say the asymptote works for 'medium or higher' SNRs. Using S-Random interleaver, S=9, same parameters as in previous slides.

62 Sidenote: Simulation Time
SNR (dB) | Delta Sim Time (h:m:s) | Actual Sim Time (h:m:s)
0.00 |          | 00:00:02
0.25 |          | 00:00:04
0.50 | 00:00:03 | 00:00:07
0.75 |          | 00:00:14
1.00 | 00:00:32 | 00:00:46
1.25 | 00:03:55 | 00:04:41
1.50 | 00:12:05 | 00:16:46
1.75 | 00:16:18 | 00:33:04
2.00 | 00:26:45 | 00:59:49
2.25 | 00:51:03 | 01:50:52
2.50 | 01:28:47 | 03:19:39
2.75 | 02:22:46 | 05:42:25
3.00 | 04:13:04 | 09:55:29
3.25 | 08:19:09 | 18:14:38
3.50 | 16:54:45 | 35:08:33
Using cml running on a dual 3.5 GHz Intel i7 990 on 64-bit Linux; no shortage of processing power! Simulating over the turbo cliff gets very, very slow: it takes so many frames to get enough data to estimate the BER that your simulations slow to a crawl. This is one reason the free-distance asymptote is useful, since you can get it fairly quickly. You could see in the previous plot of the very steep turbo cliff how the BER floor was around 1E-8; even seeing where the floor is takes hours of simulation, so again the free-distance asymptote gives you a much quicker feel for performance.

63 Free Distance Asymptote vs. BER
>> startup
>> CmlSimulate('TurboTests', [9])
(wait a while; you can end early with Ctrl-C if you don't need higher SNRs)
>> CmlPlot('TurboTests', [9])
>> close([2 3 4])
>> figure(1)
>> hold on
>> printBERContribution([11], [1], [1], 1000, 1/3)
Another example, although I didn't keep the simulation running as long. Again you can see it approaching the free-distance asymptote. Using the linear interleaver, same parameters as in previous slides.

64 Turbo Decoding

65 Soft Input Soft Output (SISO)
Decoder 1: Things Only I Know / Things We Both Know. Decoder 2: Things Only I Know.
We know the objective of the turbo decoder is to share information, and each decoder has some different information. First, there is unique information only it knows: what it finds based on its own parity bits. Second, there is information it shares with the other decoder: the probability of error on each bit. For this to work, it is apparent the decoder MUST produce not just hard '1' or '0' decisions, but some idea of the confidence in each bit. This lets the decoders reach a solution as fast as possible. First, consider the most common decoding of RSC codes: we would probably use the Viterbi algorithm, but what does that give us? Let's look...

66 A Posteriori Probability (APP)
State 00 / State 01 / State 10 / State 11. Normal decoding using the Viterbi algorithm aims to minimize the chance that a codeword was selected in error; this is done by taking the path through the trellis diagram with the minimum 'error weight'. The result, though, carries no information about each individual bit, just the overall probability. We could imagine another type of decoder which calculates the probability that a certain codeword was transmitted given the received data. We could do this for every valid codeword, and the one with the highest probability would be the best choice. Here we are directly maximizing the probability of selecting the correct codeword, rather than using the 'roundabout' method of selecting the most likely path. The probability for each codeword is called the A Posteriori Probability (APP), and maximizing it gives the Maximum A Posteriori (MAP) decoder.

67 Log-likelihood Ratio
A quick side note: for expressing probabilities we will use what is called the Log-likelihood Ratio (LLR), defined as LLR(u) = ln( P(u = 1) / P(u = 0) ): the log of the ratio of the probability the bit is 1 to the probability the bit is 0. Here is a graph of the value (next slide).
WARNING: Some papers define this the other way around, so if using code or equations which rely on the LLR, always look back to see how the code/equation defined it. "Trellis and Turbo Coding", for example, defines it as ln(p0/p1).
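In Python terms, a small sketch of this slide's convention (LLR = ln(p1/p0); the function names are mine):

```python
import math

def llr(p1):
    """LLR with this presentation's convention: ln(P(u=1) / P(u=0)).
    (Some texts, e.g. Trellis and Turbo Coding, use ln(p0/p1) instead.)"""
    return math.log(p1 / (1.0 - p1))

def hard_decision(l):
    """Sign gives the bit; magnitude is the confidence."""
    return 1 if l > 0 else 0

# llr(0.5) == 0.0: a 50/50 bit carries no information at all
```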

68 Log-likelihood Ratio 4/14/2017
Here is a graph of probability vs. LLR. If p0 > p1, the LLR will be < 0. If p1 > p0, the LLR will be > 0. So we can just look at the sign of the LLR: if it is greater than zero, the bit is more likely 1; if it is lower than zero, the bit is more likely 0. The magnitude of the LLR represents the confidence of the prediction.
>> p1 = 0:0.01:1;
>> p0 = 1 - p1;
>> llr = log(p1 ./ p0);
>> plot(p1, llr);
>> xlabel('P(u = 1)');
>> ylabel('LLR(u)');

69 Maximum A Posteriori Estimation: A Dumb Approach
Step 1: Generate EVERY possible codeword

%Generate every possible codeword
for i = 1:2^nbits
    codeword = rsc_encode([feedback; feedforward], allValidInputs(i,:), 1);
    all_codewords(i,:) = reshape(codeword, 1, len);
end

Step 2: Calculate the probability that the transmitter sent each codeword

%For every possible codeword & ours: find out Pcodeword
bitsInDiff = sum(abs(input - all_codewords(i,:)));
bitsOK = len - bitsInDiff;

%Find APP Pr{X | Y}
%  X = codeword that was transmitted
%  Y = codeword that was received
pcodeword(i) = pberr^bitsInDiff * (1-pberr)^bitsOK;

%Limited valid Tx codewords, so normalize probabilities to add up to 1.0
pcodeword = pcodeword ./ sum(pcodeword);

So I said before that conceptually you could brute-force the probability calculation. The code distribution has an example of doing this, and here are some snippets to give you a feeling for MAP decoding. Based on pberr (e.g. P(1|0) or P(0|1)) we can find the probability of each codeword. The pberr comes from the normal BPSK error equation, Q(sqrt(2*EbNo)).

Step 3: Calculate the probability for each bit in the systematic input part

%Calculate individual probability of error
p0Systematic = zeros(1, nbits);
for bitindex = 1:nbits
    psum = 0;
    for i = 1:2^nbits
        codewords_reshaped = reshape(all_codewords(i,:), 2, len/2);
        %Find probability any given bit is ZERO
        if codewords_reshaped(1, bitindex) == 0
            psum = psum + pcodeword(i);
        end
    end
    p0Systematic(bitindex) = psum;
end

%Probability any given bit is ONE
p1Systematic = 1 - p0Systematic;

%From P1 & P0 calculate the LLR
llrs = log(p1Systematic ./ p0Systematic);

You can calculate the probability any particular bit was '0' by summing the probabilities of all valid codewords that have that bit set to zero. If that value is > 0.5, it's most likely zero. We transform those probabilities back into LLRs as shown. Note: this is not the full code; see resources/brute_force_map.m for the full listing.

71 MAP: A Dumb Approach, Example
Information Bits:
Codeword after RSC:
*** SEND OVER CHANNEL ***
Location of errors after demodulating:
Result of decoding: llrs: Codeword:
Result of actual LOG-MAP algorithm SISO decoder with HARD inputs:
So here is an example of running it. What is very interesting is that the resulting bits in error also have very weak LLRs: the decoder realized they were suspect, but doesn't put a lot of weight into its estimate, since it was given conflicting information at the input. Also note how well our brute-force approach corresponds with one of the actual MAP algorithms. The LOG-MAP algorithm is one of the improvements to the MAP algorithm which greatly speeds it up. Finally, you can see that the LLR of one of the output bits is in fact WRONG: bit #6 is estimated to be a 'zero' when it is in fact a 1. The output of our brute-force algorithm still selected the correct codeword, since it selects the output which minimizes the chance of ANY bit in the entire codeword being in error, not JUST the information bits. The output LLRs are only about the information bits, since that is the only part of the codeword the decoders share. Run the example yourself: doc\resources\brute_force_test.m

72 MAP: A Smart Approach: BCJR
The original paper that published this MAP approach actually had a smarter algorithm than brute force, as brute force obviously becomes impossible for any reasonable length of data. It's still more complex than the normal Viterbi algorithm, though, and by the authors' own admission doesn't result in any better performance. So perhaps it's understandable that the paper attracted little interest at first. Here is a graph I made of the paper's citations: there were a few in the years after it came out, but it was basically forgotten by the late '80s and early '90s. After its use in the turbo algorithm this number exploded, and by now it has almost 2000 citations in IEEE Xplore alone.
"Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate", Bahl, L.; Cocke, J.; Jelinek, F.; Raviv, J.

73 Soft Input Soft Output (SISO)
Soft Input Soft Output (SISO)
Normal input to decoder from demodulator (hard input)
SISO decoders are basically the key to making turbo decoding work. In this example note how the decoder is missing out on loads of extra info. Bit 3, for example, is VERY likely a -1, yet the decoder gets the same information about bits 1 and 3. Soft input gives the decoder some idea of the confidence behind each probability.

74 Soft Input Soft Output (SISO)
Soft Input Soft Output (SISO)
(Diagram: a priori probability information in → SISO → a posteriori probability information out, each labelled "Here is my best guess about the data".)
So a SISO decoder plays with probabilities. It just gets passed some probability information, and returns what it thinks the result is. In real systems this isn't too useful on its own – you want a hard decision about a bit/symbol; if the decoder isn't sure it should just guess. For this reason SISO decoders remained mostly an academic interest until turbo codes gave them a true purpose in life, as the iterative decoding process requires the probabilities.

75 MAP: HISO vs SISO Information to Send 1 1 1 1 1 0 1 1 0
MAP: HISO vs SISO
Information to Send:
Demodulated Signal (input to SISO):
Hard Limited Signal (input to HISO):
Result of LOG-MAP Algorithm Decoder with HARD inputs:
Result of LOG-MAP Algorithm Decoder with SOFT inputs:

76 EXIT Chart Extrinsic Information Transfer (EXIT) Output Information
EXIT Chart
Extrinsic Information Transfer (EXIT)
(Chart axes: Input Information, Output Information)
Since a SISO decoder plays with probabilities, there must be a way to analyze these. We use something called the 'mutual information' (MI) to do so. Mutual information compares two random variables to produce a number, in the range 0 to 1, indicating how much information they share. For our case this is used to compare the data bits (which are '0' or '1') with the LLR input/output of the decoder. If the decoder is doing something, the output MI should be higher than the input. The EXIT chart plots this for different SNRs.
So you take some random data, add the appropriate amount of noise (based on SNR), and feed it into the decoder. This is the input information. You also provide the decoder with some 'extrinsic' information, which would normally come from the other decoder. But instead you force this extrinsic information to have a certain value, say 0.2, which is Ia on this graph. So this extrinsic information isn't too closely associated with the input information. You then run a single iteration of the decoder. Measuring the extrinsic information at the output gives the Ie value. You can plot these for a range of Ia input extrinsic information values and get the transfer charts shown. They show you how much the decoder improves it in a single iteration.
CATC pp & TCTEASTC 16.3 pp
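The mutual-information measurement described above can be sketched in a few lines. This is a Python/NumPy illustration (the presentation's own code is MATLAB) of the standard sample-average MI estimate for LLRs; the sign convention matches this presentation's code, where a positive LLR means bit 1.

```python
import numpy as np

def mutual_information(bits, llrs):
    """Sample-based estimate of the mutual information between bits and
    their LLRs: 1 - E[log2(1 + exp(-x*L))], where x = +1 for bit 1 and
    x = -1 for bit 0 (positive LLR <=> bit 1)."""
    x = 2.0 * np.asarray(bits, dtype=float) - 1.0
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * np.asarray(llrs, dtype=float))))

bits = np.array([0, 1, 1, 0, 1])

# All-zero LLRs carry no information about the bits:
print(mutual_information(bits, np.zeros(5)))            # 0.0

# Strong, correctly-signed LLRs approach full mutual information:
print(mutual_information(bits, 50.0 * (2 * bits - 1)))  # 1.0
```

If the decoder is doing its job, feeding its output LLRs through this measurement gives a higher value than its input LLRs – that gap is exactly what one point of the EXIT transfer curve records.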

77 Depuncturing / Reshaping
Turbo Decoder Setup
(Block diagram: the received systematic part plus the upper and lower encoder parity bits are depunctured/reshaped and fed to two SISO decoders, which exchange extrinsic information.)

78 Turbo Decoder Iterations
Turbo Decoder Iterations
(EXIT chart showing the decoding trajectory, labelled Iteration 1 through Iteration 4.)
You can plot how the actual extrinsic information (LLRs) evolved compared to how it was supposed to go. When the theoretical limit is plotted, this is done with truly random input LLRs. When plotting the trajectory, the only random LLRs will be the initial ones from the channel; the rest may start becoming systematic – especially if the interleaver is poor. Long interleavers should see the trajectory approach the theoretical limit, but poor and/or short interleavers will be well inside the limit.

79 EXIT Chart Notes
The EXIT chart is useful at low SNRs to see where convergence starts. So here is one decoder – at an SNR of -0.5 the channel is "closed" and the decoders are only very rarely (maybe never) going to reach convergence.

80 EXIT Chart Notes
At an SNR of zero the decoders are more often converging to a mutual information of 1.

81 EXIT Chart Notes
Finally, an SNR of 0.8 shows them reaching convergence with no problem, as the channel is fully open.

82 EXIT Chart – BER of Previous
EXIT Chart – BER of Previous
This corresponds with the BER plot – note how it is starting to enter the cliff region for an SNR above about 0.6. Thus the EXIT chart is useful to analyze decoders in the turbo cliff and at lower SNRs. It is useless at high SNRs, as the decoders converge after 1 or 2 iterations, so it doesn't tell you much.

83 Example

84 Turbo Example
% SNR in dB to run channel at; play around with this to get a good
% number which uses a few turbo iterations. On my system this causes
% the code to do 3 iterations to correct all the errors.
SNR = -7.3;

%Length of data in bits
frame_length = 9;

% The generator polynomials we are using
feedback_polynomial = [ ]
feedforward_polynomial = [ ]

85 Turbo Example - Polynomial
feedback_polynomial = [ ] feedforward_polynomial = [ ]

86 Turbo Example %Keep all tail bits
tail_pattern = [        %Encoder 1 Systematic Part
                        %Encoder 1 Parity Part
                        %Encoder 2 Systematic Part
          1 1 1];       %Encoder 2 Parity Part

%Puncture systematic part from encoder 2
pun_pattern = [ 1 1     %Encoder 1 Systematic Part
                1 1     %Encoder 1 Parity Part
                0 0     %Encoder 2 Systematic Part
                1 1];   %Encoder 2 Parity Part

87 Turbo Example - Puncturing

88 Turbo Example %Max number of iterations to display
turbo_iterations = 5;

%Automatically stop when no more errors
autostop = 1;

89 Turbo Example – Data %% Data Generation
%Seed the random number generator so we always
%get the same random data
rand('state', 0);
randn('state', 0);

%Generate some random data
data = round( rand( 1, frame_length ) );
fprintf('Information Data = ');
fprintf('%d ', data);
fprintf('\n');

Information Data =

90 Turbo Example - Encoding
%Make polynomial
genPoly = [feedback_polynomial; feedforward_polynomial];

%How many rows in polynomial?
[N, K] = size(genPoly);

upper_data = data;
upper_output = ConvEncode( upper_data, genPoly, 0);

%lower input data is interleaved version of upper
lower_data = interleave(data, 1);
lower_output = ConvEncode( lower_data, genPoly, 0);

% convert to matrices (each row is from one row of the generator)
upper_reshaped = [ reshape( upper_output, N, length(upper_output)/N ) ];
lower_reshaped = [ reshape( lower_output, N, length(lower_output)/N ) ];

91 Turbo Example - Encoding
Turbo Example - Encoding
Upper Code Input =
Upper Code Systematic =
Upper Code Parity =
Red bits are terminating bits. In this case the state before termination is "1 1 0" (from left to right), so the tail bits will terminate the registers to "0 0 0".

92 Turbo Example - Encoding
Turbo Example - Encoding
Now we run those bits through the interleaver. This shows how the bits are related to the input sequence.

function [interleaved] = interleave(input, print)
    a = sqrt(length(input));   % frame length is assumed to be a perfect square
    input_data = reshape(input, a, a);
    output_data = input_data';
    interleaved = reshape(output_data, 1, length(input));
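The reshape/transpose trick above is a square block interleaver: write the frame into a block row by row, transpose, and read back out. Here is an equivalent Python/NumPy sketch (the presentation's code is MATLAB; although MATLAB's reshape is column-major and NumPy's is row-major, combining the reshape with the transpose yields the same permutation):

```python
import numpy as np

def block_interleave(bits):
    """Square block interleaver matching the MATLAB snippet above.
    Assumes the frame length is a perfect square (9 in the example)."""
    a = int(round(len(bits) ** 0.5))
    assert a * a == len(bits), "frame length must be a perfect square"
    return np.asarray(bits).reshape(a, a).T.reshape(-1)

perm = block_interleave(np.arange(9))
print(perm)                    # [0 3 6 1 4 7 2 5 8]

# Transposing twice is the identity, so the same function deinterleaves:
print(block_interleave(perm))  # [0 1 2 3 4 5 6 7 8]
```

The self-inverse property is handy: the same routine can be used on both the encoder and decoder side.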

93 Turbo Example - Encoding
Turbo Example - Encoding
Lower Code Input =
Lower Code Systematic =
Lower Code Parity =
Red bits are terminating bits

94 Turbo Example - Puncturing
% parallel concatenate
unpunctured_word = [upper_reshaped lower_reshaped];
fprintf('\n\nUnpunctured = ');
fprintf('%d ', unpunctured_word);
fprintf('\n');

%Puncture Codeword
codeword = Puncture( unpunctured_word, pun_pattern, tail_pattern );
fprintf('Punctured = ');
fprintf('%d ', codeword);

Unpunctured =
Punctured =

95 Turbo Example - Puncturing
Turbo Example - Puncturing
Upper Code Systematic =
Upper Code Parity =
Lower Code Systematic =
Lower Code Parity =
Upper Code Puncturing =
Lower Code Puncturing =
Note the odd pattern for filling the terminating bits; this is because this particular puncturing code follows the UMTS specification when four rows are input to it.
Upper Code Systematic =
Upper Code Parity =
Lower Code Systematic =
Lower Code Parity =
Punctured =

96 Turbo Example - Channel
%% Modulate, Channel, and Demodulate

%Turn into +/- 1 for BPSK modulation example
tx = -2*(codeword-0.5);

%Generate AWGN of correct length
EsNo = 10^(SNR/10.0);
variance = 1/(2*EsNo);
noise = sqrt(variance)*randn(1, length(tx) );

%Add AWGN
rx = tx + noise;

%Demodulate
symbol_likelihood = -2*rx./variance;

%Stats
plot(symbol_likelihood, zeros(1, length(symbol_likelihood)), '.')
fprintf('Received log-likelihood ratio (LLR): mean(abs(LLR)) = %f\n', mean(abs(symbol_likelihood)));
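The same modulate/noise/demodulate chain can be sketched in Python/NumPy (the presentation's code is MATLAB; the 10 dB SNR and 8-bit codeword here are purely illustrative). Note the sign convention: bit 0 maps to +1, bit 1 to -1, so the channel LLR -2*rx/variance is positive when a 1 is likely, matching the hard-decision rule used later in the decoding loop.

```python
import numpy as np

rng = np.random.default_rng(0)

snr_db = 10.0                                # Es/No in dB, illustrative
codeword = np.array([0, 1, 1, 0, 1, 0, 0, 1])

tx = -2.0 * (codeword - 0.5)                 # BPSK symbols: 0 -> +1, 1 -> -1
es_no = 10.0 ** (snr_db / 10.0)
variance = 1.0 / (2.0 * es_no)               # noise variance per sample
rx = tx + np.sqrt(variance) * rng.standard_normal(tx.size)

llr = -2.0 * rx / variance                   # channel LLRs
hard = ((np.sign(llr) + 1) / 2).astype(int)  # LLR > 0 -> bit 1
```

At this comfortable SNR the hard decisions recover the codeword; at the -7.3 dB used in the running example, several bits come out flipped and the turbo iterations have to repair them.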

97 Turbo Example - Channel
Sending data over AWGN channel, SNR =
Received log-likelihood ratio (LLR): mean(abs(LLR)) =

98 Turbo Example - Depuncturing
%% Decoding

% initialize error counter
errors = zeros( turbo_iterations, 1 );

% depuncture and split into format used in each decoder
depunctured_output = Depuncture(symbol_likelihood, pun_pattern, tail_pattern );
input_upper_c = reshape( depunctured_output(1:N,:), 1, N*length(depunctured_output) );
input_lower_c = reshape( depunctured_output(N+1:N+N,:), 1, N*length(depunctured_output) );

LLR After Channel =
LLR After Depuncturing =
Upper Input =
Lower Input =

99 Turbo Example – Decode Setup
% No estimate of original data
input_upper_u = zeros( 1, frame_length );

saved_outLLR = [];
saved_outExt = [];
saved_interLLR = [];
saved_interExt = [];
totalIts = 0;
traj = zeros(1,2);

figure;
axis square;
title('Turbo Decoder Trajectory');
ylabel('I_E');
xlabel('I_A');
xlim([0,1]);
ylim([0,1]);
hold on;

IA = 0;
IE = 0;

100 Turbo Example - Decoding
% Iterate over a number of times
for turbo_iter=1:turbo_iterations

    fprintf( '\n*** Turbo iteration = %d\n', turbo_iter );

    % Pass through upper decoder
    [output_upper_u output_upper_c] = SisoDecode( input_upper_u, input_upper_c, genPoly, 0, 0 );

    % Extract Extrinsic information
    ext = output_upper_u - input_upper_u;

    % Interleave this information, which organizes it
    % in the same manner which the lower decoder sees bits
    input_lower_u = interleave(ext, 0);

101 Turbo Example - Decoding
% Pass through lower decoder
[output_lower_u output_lower_c] = SisoDecode( input_lower_u, input_lower_c, genPoly, 0, 0 );

% Interleave and extract Extrinsic information
input_upper_u = interleave( output_lower_u - input_lower_u, 0 );

% Hard decision based on LLR: if < 0 bit is 0, if > 0 bit is 1
% (the lower decoder's output is put back in natural order first;
% this block interleaver is its own inverse)
interleaved_output_lower_u = interleave( output_lower_u, 0 );
detected_data = (sign(interleaved_output_lower_u) + 1) / 2;

% Find errors – NOT part of turbo algorithm, since normally you don't have
% actual data. But we do, so take advantage of that to detect when to stop.
error_positions = xor( detected_data, data);
if (sum(error_positions)==0) && (autostop)
    break;
else
    errors(turbo_iter) = sum(error_positions) + errors(turbo_iter);
end
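The per-iteration bookkeeping at the end of the loop – hard decision, error count, early stop – is simple enough to show in a self-contained Python sketch (the LLR values below are made up for illustration; the presentation's code is MATLAB):

```python
import numpy as np

def hard_decision(llr):
    """Positive LLR -> bit 1, negative -> bit 0 (the loop's sign convention)."""
    return ((np.sign(llr) + 1) // 2).astype(int)

# Example LLRs for a 5-bit frame and the (simulation-only) known data:
llr = np.array([-4.2, 1.3, 6.0, -0.7, 2.2])
data = np.array([0, 1, 1, 0, 1])

detected = hard_decision(llr)
errors = int(np.sum(detected ^ data))
print(errors)  # 0 -> the autostop condition would fire here
```

As the slide notes, counting errors against the true data is only possible in a simulation; a real receiver would stop on a CRC check or after a fixed iteration budget instead.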

102 Turbo Example - Outputs
*** Turbo iteration = 1
Error Vector =
Errors = 2, mean(abs(output LLR)) =

*** Turbo iteration = 2
Error Vector =
Errors = 1, mean(abs(output LLR)) =

*** Turbo iteration = 3
Error Vector =
Errors = 0, mean(abs(output LLR)) =

103 Turbo Decoder - LLRs input_upper_u 0 0 1 0 0 0 0 1 0
Turbo Decoder - LLRs
input_upper_u
Here are the LLRs passed between the first and second decoder after each iteration. Note how between the 1st and 3rd iteration the LLRs of the bits in error have slowly moved over the zero line, indicating the correction.
Error Vectors for Each Iteration

104 Turbo Decoder - Extrinsic
Turbo Decoder - Extrinsic
output_upper_u - input_upper_u
The extrinsic information ONLY shows how each bit changed, and doesn't give you an absolute indication of the value of the bit. So you can see, for example, that bit #1 has been strongly reinforced each time by the upper decoder.
Error Vectors for Each Iteration

105 Turbo Decoder - Extrinsic
Turbo Decoder - Extrinsic
output_upper_u - input_upper_u
Here the lower decoder didn't have much to say about bit #1, for example, but every time had lots to say about bits #8 and #9.
Error Vectors for Each Iteration

106 Turbo Example - Decoding
detected_data = reshape( detected_data', 1, frame_length);

107 Turbo Example - Decoding
% Combine output_c and puncture
% convert to matrices (each row is from one row of the generator)
upper_reshaped = [ reshape( output_upper_c, N, length(output_upper_c)/N ) ];
lower_reshaped = [ reshape( output_lower_c, N, length(output_lower_c)/N ) ];

% parallel concatenate
unpunctured_word = [upper_reshaped lower_reshaped];

Upper Code Output =
Upper Code Output Systematic =
Upper Code Output Parity =
Lower Code Output =
Lower Code Output Systematic =
Lower Code Output Parity =

108 Part 2 – Computer simulations

109 Coded Modulation Library
Iterative Solutions has created an Open Source (LGPL) project to simulate turbo codes.
Key functions are written in C and work on Linux, Windows, and Mac from MATLAB.
Speeds up simulation drastically compared to a MATLAB-only solution.
NOTE: Comes with documentation in documentation/CMLOverview.pdf; see that file for more help, as I won't repeat information from it here.

110 Downloading CML
I've extended CML to add:
A bunch of examples
A tutorial file (shows decoding step-by-step)
Some simple additional interleavers
Functions to find dFree and plot the asymptote
EXIT chart plotting & trajectories (wouldn't trust it 100%)
An option to plot only a specified iteration without changing files (useful for comparisons)

111 Download CML My version + this presentation stored in SVN on Or download a .zip from

112 Installing CML Unzip file somewhere
On Windows – you are done. On other systems: start up MATLAB, go to cml\mex\source and run 'make'.
Optional for dFree calculation: download turbo.exe from , direct link:

113 Running CML
cd to the directory you unzipped the file in, then run the command 'startup':
>> cd E:\Documents\academic\classes\ilo2\turbocml\trunk >> startup >>

114 Running BER Simulations

115 Running Simulations
Simulations are run with the 'CmlSimulate' command.
Parameters for simulations are stored in scenario files.
Scenario files are stored in 'cml/localscenarios' and 'cml/scenarios'.
Simulation results are saved in 'cml/output'.
You can interrupt a simulation and continue it later without losing all the data.

116 Running Simulations >> CmlSimulate('TurboTests', [1 2 3])
Use file TurboTests.m
Run scenarios 1, 2, and 3

117 Running Simulations Hit Ctrl-C To Abort
>> CmlSimulate('TurboTests', [3 6]) Initializing case (3): UMTS-TC, BPSK, Rayleigh, 530 bits, max-log-MAP Initializing case (6): UMTS-TC, BPSK, Rayleigh, 5114 bits, max-log-MAP Record 1 UMTS-TC, BPSK, Rayleigh, 530 bits, max-log-MAP Eb/No in dB = dB Clock 13:32:26 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. Eb/No in dB = dB Clock 13:32:27 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx??? Operation terminated by user during ==> TurboDecode at 99 Hit Ctrl-C To Abort

118 Running Simulations >> CmlSimulate('TurboTests', [3 6]) Initializing case (3): UMTS-TC, BPSK, Rayleigh, 530 bits, max-log-MAP Initializing case (6): UMTS-TC, BPSK, Rayleigh, 5114 bits, max-log-MAP Record 1 UMTS-TC, BPSK, Rayleigh, 530 bits, max-log-MAP Eb/No in dB = dB Clock 13:32:31 Eb/No in dB = dB xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. Eb/No in dB = dB Clock 13:32:32 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxxxxxx. Eb/No in dB = dB Clock 13:32:34 xxxxxxxxxxxxx.xxxxxxxxxxxxx.xxxxxxxxxxxxxx.Simulation Complete Clock 13:32:38 Continue Simulation – Note 1.4 dB point already simulated and saved

119 Running Simulations >> CmlSimulate('TurboTests', [11]) Initializing case (11): TC, BPSK, AWGN, bits, max-log-MAP Record 1 TC, BPSK, AWGN, bits, max-log-MAP Eb/No in dB = dB Clock 10: 3:11 xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxx. Eb/No in dB = dB Clock 10: 5: 7 Eb/No in dB = dB Clock 10: 7: 3 .x.x.x..x..xx..x...xxx..x.x..x.xx..x.x.xx.x.xx..x.....x...x.x.xx.x.xx.xx..x..xx....x..xxx..x....x.x.x...x.....xxx.x.....x...x.x.x.x.x x...x.....x.x.x...xx....x.x.x.....xx.x.x x..x.x.x.x..x..x....x....x..xx....x.....x..xx.x.xx.x...x.x.....x.....x..xx.x.x..x..x...x...x. Eb/No in dB = dB Clock 10:28:17 x x x x..x x x x x x x x x x.... ‘x’ printed for errors in frames As you go over turbo cliff, simulation times become massive… note 0.25 dB change in SNR (this sim step isn’t done yet either)

120 Plotting Simulations Use ‘CmlPlot’ command in same way as Simulate. You can copy saved .mat files from another source to the proper place in Output if you don’t do the simulations yourself. e.g.: On command line, copy proper .mat file from server running simulation: $ cd cml/output/TurboCodes $ scp . Back in MATLAB, Plot results, we can see how Simulation is doing and decide if we have enough data >> CmlPlot('TurboTests', [11]) Initializing case (11): TC, BPSK, AWGN, bits, max-log-MAP ans = B: [] … BUNCH MORE STUFF …

121 Plotting Simulations Scenario File decides what type of simulation, and thus what type of plot: ‘coded’ means Bit Error Rate (BER) and Frame Error Rate (FER). Only sim type I’m using.

122 Resulting Plots Figure 1: BER vs Eb/No Figure 3: FER vs Eb/No
Figure 2: BER vs Es/No
Figure 4: FER vs Es/No
EbNo = EsNo./sim_param(i).rate; for this example rate =
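The Es/No-to-Eb/No conversion used for these plots is just a division by the code rate, which in dB becomes an additive shift. A small sketch (in Python here, though the plotting code itself is MATLAB):

```python
import math

def esno_db_to_ebno_db(esno_db, rate):
    """Eb/No = (Es/No) / rate in linear terms, i.e. add 10*log10(1/rate) dB.
    For a rate-1/3 code, every Es/No point shifts up by about 4.77 dB."""
    return esno_db - 10.0 * math.log10(rate)

print(round(esno_db_to_ebno_db(0.0, 1.0 / 3.0), 2))  # 4.77
```

This is why the Eb/No figures look like the Es/No figures slid horizontally: only the SNR axis changes, not the error rates.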

123 Comparing Turbo Iterations
record = 8;
sim_param(record).comment = 'TC, BPSK, AWGN, 1000 bits, max-log-MAP';
sim_param(record).SNR = 0:0.25:3.5;
sim_param(record).framesize = 1000;
sim_param(record).channel = 'awgn';
sim_param(record).decoder_type = 1;
sim_param(record).max_iterations = 16;
sim_param(record).plot_iterations = [ ];

Plot BER for iterations 1, 5, 10, 16

124 Comparing Turbo Iterations
Override the scenario file; plot iterations 1, 2, and 3. The final iteration is always plotted (in this example = 16).
>> CmlPlot('TurboTests', [8], 'iter', [1 2 3])
Initializing case (8): TC, BPSK, AWGN, 1000 bits, max-log-MAP

125 Comparing Plots >> CmlPlot('TurboTests', [8 11], 'iter', [16])
Use the 'iter' override to plot only the final iteration, which cleans up the plot

126 Comparing Plots Specify ‘linetype’ in Scenario file to make plots different record = 8; ... sim_param(record).linetype = 'g-'; record = 11; sim_param(record).linetype = ‘b-';

127 Defining your Own Simulation: 1
Copy/paste a block of code, increment the 'record' number, and adjust the parameters. e.g.:
record = 8;
sim_param(record).comment = 'TC, BPSK, AWGN, 1000 bits, max-log-MAP';
sim_param(record).SNR = 0:0.25:3.5;
sim_param(record).framesize = 1000;
sim_param(record).channel = 'awgn';
sim_param(record).decoder_type = 1;
sim_param(record).max_iterations = 16;
sim_param(record).plot_iterations = [ ];
sim_param(record).linetype = 'g-';
sim_param(record).sim_type = 'coded';
sim_param(record).code_configuration = 1;
sim_param(record).SNR_type = 'Eb/No in dB';
sim_param(record).modulation = 'BPSK';
(Slide callouts: You need to change this / You should change this / Change as needed / You should change this)
Continued on Next Page

128 Defining your Own Simulation: 2
sim_param(record).mod_order = 2;
sim_param(record).bicm = 1;
sim_param(record).demod_type = 0;
sim_param(record).legend = sim_param(record).comment;
sim_param(record).code_interleaver = ...
    strcat( 'CreateSRandomInterleaver(', int2str(sim_param(record).framesize ), ', 9)' );

The interleaver is specified as a function which gets called, so you need to create a string with all the parameters that will be required. Examples:

S-Random, S=9:
strcat( 'CreateSRandomInterleaver(', int2str(sim_param(record).framesize ), ', 9)' )

S-Random, S=16:
strcat( 'CreateSRandomInterleaver(', int2str(sim_param(record).framesize ), ', 16)' )

Random:
strcat( 'CreateRandomInterleaver(', int2str(sim_param(record).framesize ), ')' )

Linear (note this has fixed parameters; you need to change them if the frame size changes):
'CreateLinearInterleaver(1000, 25, 40)'

UMTS:
strcat( 'CreateUmtsInterleaver(', int2str(sim_param(record).framesize ), ')' );

Continued on Next Page

129 Defining your Own Simulation: 3
Polynomial order for RSC: [Feedback; Feedforward]
% Feedback = [1011]
% Feedforward = [1111]
sim_param(record).g1 = [ ];
sim_param(record).g2 = sim_param(record).g1;
sim_param(record).nsc_flag1 = 0;
sim_param(record).nsc_flag2 = 0;

%No puncturing
sim_param(record).pun_pattern1 = [1 1 1 1];
sim_param(record).pun_pattern2= [0 0 1 1 ];
sim_param(record).tail_pattern1 = [1 1 1 1 1 1];
sim_param(record).tail_pattern2 = ...
    sim_param(record).tail_pattern1;

Puncturing pattern specified as: [Systematic Part; Codeword Part]. The pattern needs to be long enough to show the repeating pattern; see examples on the next page.
Continued on Next Page

130 Defining your Own Simulation: 4
%No puncturing
sim_param(record).pun_pattern1 = [1 1 1 1];
sim_param(record).pun_pattern2= [0 0 1 1 ];
sim_param(record).tail_pattern1 = [1 1 1 1 1 1];
sim_param(record).tail_pattern2 = ...
    sim_param(record).tail_pattern1;

%Rate = 1/2 Puncturing
sim_param(record).pun_pattern1 = [1 1 0 1];
sim_param(record).pun_pattern2= [0 0 1 0 ];
sim_param(record).tail_pattern1 = [1 1 1 1 1 1];
sim_param(record).tail_pattern2 = ...
    sim_param(record).tail_pattern1;

Continued on Next Page

131 Defining your Own Simulation: 5
THIS IS CRITICAL: you must have a unique filename for each record, or records will share data. Here it autogenerates the filename based on some parameters, but if you are lazy, add a strcat() of int2str(record) to ensure the filename for each record # is unique, which I have done here.

Save this file to save your simulation results. Set 'reset' to 1 to force the simulation to run again and not use previously saved results.

sim_param(record).filename = strcat( data_directory, ...
    strcat(int2str(record), 'umts', ...
    int2str(sim_param(record).framesize ), ...
    sim_param(record).channel, ...
    int2str( sim_param(record).decoder_type ), '.mat') );
sim_param(record).reset = 0;
sim_param(record).max_trials = 1e9*ones(...
    size(sim_param(record).SNR) );
sim_param(record).minBER = 1e-8;
sim_param(record).max_frame_errors = 100* ...
    ones( 1, length(sim_param(record).SNR) );
sim_param(record).save_rate = ...
    ceil(511400/sim_param(record).framesize);

See CMLOverview.pdf slide 17
Continued on Next Page

132 Defining your Own Simulation: 6
sim_param(record).exit_trajectories = 5;  % Number of trajectories to plot on the EXIT chart; set to '0' for none
sim_param(record).exit_nframes = 100;     % Number of frames for each Ia point. If you have short frames set this higher (>1000); with very long frames it can be much lower
sim_param(record).exit_iterations = 20;   % Number of iterations to plot the trajectory over
sim_param(record).exit_snr = -4:2:0;      % SNRs in dB to plot EXIT charts over. Creates one chart for each point, so don't make too many! This example plots three: -4 dB, -2 dB, and 0 dB

133 Changing Simulation Parameters
Some simulation parameters you can change without needing to reset the simulation:
SNR, max_trials, exit_trajectories, filename, minBER, exit_iterations, comment, minFER, exit_snr, legend, max_frame_errors, exit_nframes, linetype, compiled_mode, plot_iterations, input_filename, save_rate, trial_size, reset, scenarios
NB: Reset = throw away all previously simulated results. When changing the above parameters, previously simulated results can be integrated into the new simulated data.

134 Changing Simulation Parameters
For the rest, you need to reset the simulation by setting 'reset=1' in the record; otherwise you get a mismatch, since some data was simulated with potentially different parameters (be sure to set reset back to 0 once done). e.g., CML is not impressed:
>> CmlPlot('TurboTests', [9])
Initializing case (9): TC, BPSK, AWGN, 1000 bits, max-log-MAP
Warning: field pun_pattern2 does not match stored value, using stored value
>> CmlSimulate('TurboTests', [9])

135 EXIT Charts

136 EXIT Chart Plotting Actual Trajectories Theoretical
>> CmlPlotExit('TurboTests', [11]) NOTE: This function doesn’t save any state, so save your figures if they took a while to calculate!

137 EXIT Chart Plotting I’m not too confident on EXIT chart accuracy. TurboTests scenario number 13 should match figure of TATC, but it doesn’t.

138 free distance

139 Free Distance Calculation
Free distance calculation is done by Roberto Garello's program/algorithm from
You must be in the freeDistance directory. Simply call a single scenario you'd like analyzed, like:
>> cd turboUtils\freeDistance
>> [dfree, Nfree, wfree] = CmlFindDfree('UmtsScenarios', [1])
Initializing case (1): UMTS-TC, BPSK, AWGN, 40 bits, max-log-MAP
Calling Garello's turbo program, this step could take a while
dfree = 12
Nfree = 2
wfree = 8

140 Free Distance Spectrum
You do this manually by looking at the output file out.txt in the directory.
The function printBERContribution can be used to plot the free-distance asymptote (see earlier examples).

141  Fin Did you find this helpful? Find errors? Please let me know at or visit for more.

142 Review Questions

143 Q1: Plotting Run a simulation with a R=1/2 Turbo Code, SRandom Interleaver with S=9, Interleaver length=512 Plot iterations 1-10 over SNR of 0-3dB. Also plot the improvement in SNR for each iteration for a BER of ~10E-5 (if possible)

144 Q1: Plotting Hint #1: Polynomials are: feedback = [ ] feedforward = [ ] Hint #2: R=1/2 implies we are puncturing half the parity bits. See slide in this presentation entitled “Defining Your Own Simulation: 4”. Hint #3: There is no function for plotting the SNR improvement, you’ll need to read off graph (consider using MATLAB data-point picker here).

145 Q2: SISO Decoding The brute force MAP example used hard inputs. Can you extend that to work on soft-input decoding? Hint #1: The code already outputs the true soft-input LOG-MAP algorithm, which should match your results Hint #2: Doing so will require calculating probability of bit in error (pberr) for each input bit. Remember for hard-input we are only told if result of demodulating was >0 or <0, which we use with Q function to give probability bit with original value +/-1 was thrown over zero. For soft-input we are given actual result of demodulation, so instead need to find probability bit was thrown from +/- 1 to that value instead of zero. Hint #3: The easiest way to do the above will be to consider the output of the demodulator as LLRs. You can find a formula in the references to convert from LLR to Pb(0) and Pb(1).

146 Q3: Free Distance
I have an idea for an interleaver which maps like this for a 1000-bit input:
 1  2  3  4 .. 24
25 26 27 28 29 .. 49
50 51 52 .. 74
75 ..
(40 Rows × 25 Columns)

147 Q3: Continued That is, the interleaver vector would look like: [ … 999]

148 Q3: Continued Plot the free distance asymptote for a rate 1/3 code compared to normal linear & S-Random interleavers. You can use TurboTests.m scenario 9 as a starting point (and the comparison). Also plot the visualization of the interleaver to understand the performance

149 Answers to Questions (cheater)

150 Question Answer Scenario
NB: I highly suggest trying to answer them on your own first! But the scenario file, parts of which are copied here, is in localscenarios/TurboReviewAnswers.m.

151 Question 1: Setup
% Question 1
% Run a simulation with a R=1/2 Turbo Code, S-Random Interleaver with S=9, interleaver length=512.
% Plot iterations 1-10 over SNR of 0-3 dB.
% Also plot the improvement in SNR for each iteration for a BER of ~10E-5.
record = 1;
sim_param(record).comment = 'Review Question 1';
sim_param(record).SNR = 0:.2:3;
sim_param(record).framesize = 512;
sim_param(record).channel = 'AWGN';
sim_param(record).decoder_type = 1;
sim_param(record).max_iterations = 10;
sim_param(record).plot_iterations = [1:sim_param(record).max_iterations];
sim_param(record).linetype = 'r-';
sim_param(record).sim_type = 'coded';
sim_param(record).code_configuration = 1;
sim_param(record).SNR_type = 'Eb/No in dB';
sim_param(record).modulation = 'BPSK';
sim_param(record).mod_order = 2;
sim_param(record).bicm = 1;
sim_param(record).demod_type = 0;
sim_param(record).legend = sim_param(record).comment;
sim_param(record).code_interleaver = ...
    strcat( 'CreateSRandomInterleaver(', int2str(sim_param(record).framesize ), ', 9)' );

152 Question 1: Setup Continued
%From question slide:
% d1(D)=1+D^2+D^3 <= Feedback = [1011]
% d2(D)=1+D+D^3+D^4 <= Feedforward = [1111]
sim_param(record).g1 = [ ];
sim_param(record).g2 = sim_param(record).g1;
sim_param(record).nsc_flag1 = 0;
sim_param(record).nsc_flag2 = 0;
sim_param(record).pun_pattern1 = [ ];
sim_param(record).pun_pattern2= [ ];
sim_param(record).tail_pattern1 = [ ];
sim_param(record).tail_pattern2 = sim_param(record).tail_pattern1;
sim_param(record).filename = strcat( data_directory, ...
    strcat( 'q1', int2str(sim_param(record).framesize ), sim_param(record).channel, int2str( sim_param(record).decoder_type ), '.mat') );
sim_param(record).reset = 0;
sim_param(record).max_trials = 1e9*ones(size(sim_param(record).SNR) );
sim_param(record).minBER = 1e-6;
sim_param(record).max_frame_errors = 200*ones( 1, length(sim_param(record).SNR) );
sim_param(record).save_rate = ceil(511400/sim_param(record).framesize);

153 Question 1: Running Commands
>> CmlSimulate('TurboReviewAnswers', [1])
Initializing case (1): Review Question 1
Record 1
Review Question 1
…..
>> CmlPlot('TurboReviewAnswers', [1])

154 Question 1: BER Results

155 Question 2: SISO Decoder
NB: Files are located in doc/resources/question_answers/q2_softbrute
The output of the demodulator is used as the input LLR.
Convert the LLR to a probability of error for each bit.
Use that probability in the calculation of the probability of the word being in error.
The rest of the file is the same as in the hard-input version.

156 Code Clips %Demodulate rx_demodulated = -2*rx./variance; %Chop into binary received = (sign(rx_demodulated) + 1)/2; %% Do MAP Algorithm fprintf('Result of decoding:\n'); [llrs, resultingCodeword] = brute_force_map_soft(feedback, feedforward, received, rx_demodulated, nbits)

157 Code Clips %Convert input LLR into pberr, this equation is: %D = exp(-LLR/2) / (1+exp(-LLR); %P1 = D * exp(LLR/2) %P0 = D * exp(-LLR/2) %Error will be minimum of those two pberr = zeros(1,nbits); for i=1:length(llr_input) D = exp(-llr_input(i)/2) / (1+exp(-llr_input(i))); pberr(i) = min([D*exp(llr_input(i)/2) D*exp(-llr_input(i)/2) ]); end

%For every possible codeword & ours: find out Pcodeword
for i=1:2^nbits
    %Generates an error vector indicating which bits are different
    %between received codeword & codeword we are testing against
    bitsInDiff = abs(input - all_codewords(i,:));

    %Generates a vector with only the probabilities of bits in error
    pbErrVect = bitsInDiff .* pberr;

    %Get non-zero elements and multiply them together; this works on the
    %previous operation, which kept only the probabilities of the bits we
    %care about
    pbCodeError = prod(pbErrVect(find(pbErrVect)));

    %Do the same steps as above; I haven't duplicated the documentation though
    bitsSame = 1-bitsInDiff;
    pbOkVect = bitsSame .* (1-pberr);
    pbCodeOk = prod(pbOkVect(find(pbOkVect)));

    %Find APP Pr{X | Y}
    % X = Codeword that was transmitted
    % Y = Codeword that was received
    pcodeword(i) = pbCodeError * pbCodeOk;
end
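The same brute-force search can be sketched compactly in Python (the answer code itself is MATLAB; the repetition-code example and its probabilities below are made up for illustration). The LLR-to-pberr conversion collapses to min(P0, P1) = 1/(1 + e^|LLR|), the same quantity the MATLAB clip computes via D, P0 and P1:

```python
import numpy as np

def pberr_from_llr(llr):
    """Probability the hard decision on a bit with this LLR is wrong:
    min(P0, P1) = 1 / (1 + exp(|LLR|))."""
    return 1.0 / (1.0 + np.exp(np.abs(llr)))

def brute_force_map(received, pberr, candidates):
    """Return the candidate codeword maximizing Pr{codeword | received},
    assuming each hard-decided bit is independently wrong with
    probability pberr[i]."""
    probs = []
    for cw in candidates:
        diff = np.abs(received - cw)  # 1 where the bits disagree
        probs.append(np.prod(np.where(diff == 1, pberr, 1.0 - pberr)))
    return candidates[int(np.argmax(probs))]

# Toy length-3 repetition code: the flipped third bit had a weak LLR,
# so the decoder still picks the all-ones codeword.
candidates = [np.array([0, 0, 0]), np.array([1, 1, 1])]
received = np.array([1, 1, 0])
pberr = np.array([0.05, 0.05, 0.4])
print(brute_force_map(received, pberr, candidates))  # [1 1 1]
```

This is exactly the behaviour the MAP example slides point out: a weak (low-confidence) bit is easily overruled by the rest of the codeword.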

159 Q3: Fancy (or not?) Interleaver
function [output] = q3_interleaver(rows, cols)
linput = rows*cols;
input = [0:linput-1];
output = zeros(1,length(input));
input = reshape(input, cols, rows)';

j = 1;
for startingrow=1:rows  %Start at left point in each row...
    col = 1;
    row = startingrow;
    while row > 0 && col <= cols
        output(j) = input(row,col);
        col = col+1;
        row = row-1;
        j = 1+j;
    end
end
for startingcol=2:cols  %Do bottom row all the way along
    col = startingcol;
    row = rows;
    while row > 0 && col <= cols
        output(j) = input(row,col);
        col = col+1;
        row = row-1;
        j = 1+j;
    end
end
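For anyone following along outside MATLAB, here is a Python port of the diagonal traversal (an assumption: the MATLAB slide is cut off mid-way, and the second loop is taken to mirror the first, walking up-right diagonals from the bottom row). It also demonstrates one of the weaknesses discussed in the results: the first and last bits stay in place.

```python
import numpy as np

def q3_interleaver(rows, cols):
    """Fill a rows x cols block row-wise with 0..rows*cols-1, then read
    along up-right diagonals: first from each row of the first column,
    then from the bottom row of each remaining column."""
    block = np.arange(rows * cols).reshape(rows, cols)
    out = []
    for start_row in range(rows):          # diagonals starting in column 0
        r, c = start_row, 0
        while r >= 0 and c < cols:
            out.append(block[r, c])
            r -= 1
            c += 1
    for start_col in range(1, cols):       # diagonals starting on the bottom row
        r, c = rows - 1, start_col
        while r >= 0 and c < cols:
            out.append(block[r, c])
            r -= 1
            c += 1
    return np.array(out)

perm = q3_interleaver(40, 25)
print(len(perm), perm[0], perm[-1])  # 1000 0 999 -> first/last bits stay put
```

Every element lies on exactly one diagonal, so the output is a true permutation of 0..999; checking that is a quick sanity test before handing the vector to CML.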

160 Q3: Fancy (or not?) Interleaver
record = 2;
% Copy everything from our reference
sim_param(record) = sim_param(9);
sim_param(record).code_interleaver = ...
    strcat( 'q3_interleaver(25, 40)' );

Assuming record 9 is set up using Linear and record 10 is using S-Random, and you are in the directory with q3_interleaver.m:
>> [df1, nf1, wf1] = CmlFinddFree('TurboReviewAnswers', [2])
>> [df2, nf2, wf2] = CmlFinddFree('TurboReviewAnswers', [9])
>> [df3, nf3, wf3] = CmlFinddFree('TurboReviewAnswers', [10])
>> printBERContribution( df1, nf1, wf1, 1000, 1/3 )
>> hold on
>> printBERContribution( df2, nf2, wf2, 1000, 1/3 )
>> printBERContribution( df3, nf3, wf3, 1000, 1/3 )

161 Q3 Results: Not Very Good

162 Q3 Results: Not Very Good
Q3 Results: Not Very Good
Notice this interleaver has a linear-looking pattern. Not only that, but its spreading is far worse than the original linear interleaver, and the first & last bits map to the same locations (bits 0 & 999). Altogether it clearly makes for a poor interleaver.
If you wanted to try something else, you could try implementing the interleaver seen in the paper "Trellis Termination in Turbo Codes with Full Feedback RSC Encoders". It also uses a diagonal, but does it in a different way, which should eliminate a lot of the disadvantages.

