
1 (Chapter 15): Concatenated codes
Simple (classical, single-level) concatenation
Length of concatenated code: n_1 n_2
Dimension of concatenated code: k_1 k_2
If the minimum distances of the component codes are d_1 and d_2, respectively, then the concatenated code has minimum distance at least d_1 d_2
Decoding: two-stage: decode the inner code first (hard decisions out), then the outer code
Not optimum! Can correct only up to approximately ¼ of the minimum distance
Good for decoding a mixture of random and burst errors
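As a minimal illustration of this bookkeeping (not from the text), the sketch below computes the parameters of a single-level concatenation; the inner Hamming / outer Reed-Solomon pair is a hypothetical choice of component codes.

```python
# Parameter bookkeeping for simple (single-level) concatenation, assuming an
# inner binary (n1, k1, d1) code and an outer (n2, k2, d2) code over GF(2^k1).

def concatenated_params(n1, k1, d1, n2, k2, d2):
    """Return (length, dimension, lower bound on the minimum distance)."""
    return n1 * n2, k1 * k2, d1 * d2

# Hypothetical component codes: inner (7,4,3) Hamming code,
# outer (15,11,5) Reed-Solomon code over GF(16).
print(concatenated_params(7, 4, 3, 15, 11, 5))   # -> (105, 44, 15)
```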

2 Multiple Inner Codes
It is not necessary that all inner codes be identical
Justesen codes: n_2 different inner codes
One can show that a good class of codes can be constructed this way
A class {C_i} of codes of increasing lengths n_i is good if the normalized dimensions k_i/n_i and the normalized minimum distances d_i/n_i are both bounded away from zero
A theoretical result; the first known explicit class of asymptotically good codes

3 Generalization of the model
m may range from 1 to "large"
The interleaver permutes the order of channel symbols

4 Example of interleaved serial concatenation

5 Multilevel concatenated codes
Multiple outer and inner codes
Nested inner codes: A_1 ⊃ A_2 ⊃ ... ⊃ A_m ⊃ {0}
k_i: dimension of A_i
d(A_i): minimum distance of A_i
[A_i/A_{i+1}] coset code: set of coset representatives; dimension k_i - k_{i+1}
q_i = 2^(k_i - k_{i+1})
B_i: outer (N, K_i) code over GF(q_i)
Overall dimension: K = Σ K_i (k_i - k_{i+1})
Overall minimum distance: d(C) ≥ min_i {d(B_i) d(A_i)}
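The dimension and distance bookkeeping above can be written out directly; the sketch below assumes the inner chain is described only by its dimensions k_1 > ... > k_m (with k_{m+1} = 0) and each outer code B_i by its dimension K_i and minimum distance d(B_i).

```python
# Dimension and minimum-distance bound for an m-level concatenated code,
# given the inner-chain dimensions and the outer codes' parameters.

def multilevel_params(inner_k, inner_d, outer_K, outer_d):
    """inner_k: [k_1, ..., k_m, 0]; inner_d: [d(A_1), ..., d(A_m)];
    outer_K: [K_1, ..., K_m];        outer_d: [d(B_1), ..., d(B_m)]."""
    K = sum(Ki * (inner_k[i] - inner_k[i + 1]) for i, Ki in enumerate(outer_K))
    d_bound = min(dB * dA for dB, dA in zip(outer_d, inner_d))
    return K, d_bound    # K = sum K_i(k_i - k_{i+1});  d(C) >= d_bound
```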

6 Multi-stage decoding
m-level multilevel concatenation: decode the first stage (outer code B_1, inner coset code [A_1/A_2]) first, ..., the last stage (B_m, A_m) last
1. Decode r = r^(1) into a codeword b_1 in B_1
Inner decoding: find the closest word in [A_1/A_2]
Outer decoding: use the inner decoder's results
Set i = 2
2. Let r^(i) = r^(i-1) - f_{i-1}(b_{i-1}). Decode r^(i) into a codeword b_i in B_i
Set i = i + 1; if i ≤ m, repeat from 2.
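A control-flow sketch of this hard-decision multistage loop is given below; the stage decoders are assumed black boxes (placeholder names, not from the text), each returning its decoded outer codeword b_i together with the re-encoded contribution f_i(b_i) to subtract before the next stage.

```python
import numpy as np

def multistage_decode(r, stage_decoders):
    """Hard-decision multistage decoding.

    r              : received word (numpy array)
    stage_decoders : one callable per level; each callable maps the current
                     received word to (b_i, f_i(b_i)), i.e. the decoded outer
                     codeword and its re-encoded contribution.
    """
    decoded = []
    r_i = np.array(r, dtype=float)
    for decode in stage_decoders:          # stages 1, 2, ..., m
        b_i, contribution = decode(r_i)    # inner + outer decoding for this stage
        decoded.append(b_i)
        r_i = r_i - contribution           # r^(i+1) = r^(i) - f_i(b_i)
    return decoded
```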

7 Soft decision multistage decoding
Requires soft decision (and usually trellis based) decoding at each decoding stage
Decode the first stage (B_1, [A_1/A_2]) first, ..., the last stage (B_m, A_m) last
1. Decode r = r^(1) into a codeword b_1 in B_1
Inner decoding: find the closest word in [A_1/A_2]
Outer decoding: use the inner decoder's results
Set i = 2
2. Modify the received vector r^(i): r_{j,l}^(i) = r_{j,l}^(i-1) · (1 - 2 c_{j,l}^(i-1))
Decode r^(i) into a codeword b_i in B_i
Set i = i + 1; if i ≤ m, repeat from 2.
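The only change from the hard-decision loop is the update of the received vector between stages; a one-line sketch of that re-modulation step (BPSK-style soft values assumed) is:

```python
import numpy as np

def remodulate(r_prev, c_prev):
    """r^(i) = r^(i-1) * (1 - 2*c^(i-1)): the previously decoded bits c (0/1)
    flip the sign of the corresponding soft received values."""
    return np.asarray(r_prev) * (1 - 2 * np.asarray(c_prev))
```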

8 Inner and outer decoding
Inner decoder: find the word (label) in each coset in [A_i/A_{i+1}] with the largest metric for each symbol of the outer code
Pass these metric tables to the outer decoder
Outer decoder: find the word with the largest metric
Not MLD because of possible error propagation
Simpler than known MLD algorithms for such codes
Can be improved by passing a list of L candidates from one decoding stage to the next, and by selecting as the final decoded word the one with the largest metric at the final stage

9 Code decomposition
Expressing a code in terms of a multilevel concatenation
μ-level decomposable code: can be expressed as a μ-level concatenated code
Some classical code constructions may be expressed in this way. This may facilitate decoding of such codes, and can provide SD decoding
Example: the r-th order Reed-Muller code of length 2^m, RM(r,m)
Idea: decompose the trellis into parallel subtrellises, each significantly less complex than the original

10 Properties of RM(r,m)
v_0 = (1...1) of length 2^m
v_i = (0...0, 1...1, 0...0, ..., 1...1), alternating groups of 0s and 1s of length 2^(i-1)
RM(r,m) is spanned by the vectors v_0, v_1, v_2, ..., v_m, v_1v_2, v_1v_3, ..., v_{m-1}v_m, ..., all products of degree up to r
k(r,m) = 1 + C(m,1) + ... + C(m,r), q(r,m) = C(m,r)
Minimum distance 2^(m-r)
RM(r,m) ⊃ RM(r-1,m) ⊃ ... ⊃ RM(0,m) = {0, 1} (the repetition code) ⊃ RM(-1,m) = {0}
RM(m-1,m) is a single parity check code
RM(m-r-1,m) is the dual code of RM(r,m)
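A small sketch (not from the book) that builds the RM(r,m) generator from the vectors v_i above and checks k(r,m) and the minimum distance 2^(m-r) by brute force for a small case:

```python
import numpy as np
from itertools import combinations, product
from math import comb

def rm_generator(r, m):
    n = 2 ** m
    # v_0 is all ones; v_i alternates groups of 2^(i-1) zeros and ones
    v = [np.ones(n, dtype=int)]
    for i in range(1, m + 1):
        block = 2 ** (i - 1)
        v.append(np.array([(j // block) % 2 for j in range(n)], dtype=int))
    rows = []
    for deg in range(r + 1):                    # all products of degree <= r
        for subset in combinations(range(1, m + 1), deg):
            row = v[0].copy()
            for i in subset:
                row = row * v[i]
            rows.append(row)
    return np.array(rows)

def min_distance(G):
    k, n = G.shape
    best = n
    for msg in product([0, 1], repeat=k):       # brute force over all 2^k codewords
        w = int((np.dot(msg, G) % 2).sum())
        if 0 < w < best:
            best = w
    return best

r, m = 1, 3
G = rm_generator(r, m)
assert G.shape[0] == sum(comb(m, i) for i in range(r + 1))   # k(r,m)
assert min_distance(G) == 2 ** (m - r)                       # d = 2^(m-r)
print(G.shape, min_distance(G))   # (4, 8) 4 -> RM(1,3) is the (8,4,4) code
```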

11 Example 15.2
RM(3,3) ⊃ RM(2,3) ⊃ RM(1,3) ⊃ RM(0,3) ⊃ {0}

12 RM codes and interleaving
RM(r,m) = {RM(0,ν)^{q(r,m-ν)}, RM(1,ν)^{q(r-1,m-ν)}, ..., RM(μ,ν)^{q(r-μ,m-ν)}} ∘ {RM(r,m-ν), RM(r-1,m-ν), ..., RM(r-μ,m-ν)}
...for any μ, ν with 1 ≤ ν ≤ m-1, where μ = ν for r > ν and μ = r for r ≤ ν
Example: RM(3,6), a (64,42,8) code. Select μ = ν = 3
RM(3,6) = {RM(0,3)^{q(3,3)}, RM(1,3)^{q(2,3)}, RM(2,3)^{q(1,3)}, RM(3,3)^{q(0,3)}} ∘ {RM(3,3), RM(2,3), RM(1,3), RM(0,3)}
= {(8,1), (8,4)^3, (8,7)^3, (8,8)} ∘ {(8,8), (8,7), (8,4), (8,1)}
= (8,1) ∘ [(8,8)/(8,7)] ⊕ (8,4)^3 ∘ [(8,7)/(8,4)] ⊕ (8,7)^3 ∘ [(8,4)/(8,1)] ⊕ (8,8) ∘ [(8,1)/{0}]
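A quick arithmetic check (values taken from the chain above) that this four-level decomposition reproduces the (64,42,8) parameters of RM(3,6):

```python
# Inner chain A_1 ⊃ A_2 ⊃ A_3 ⊃ A_4 ⊃ {0}: (8,8), (8,7), (8,4), (8,1)
inner_k = [8, 7, 4, 1, 0]                 # k_1..k_4 and dim{0} = 0
inner_d = [1, 2, 4, 8]                    # d(A_i)

# Outer codes B_1..B_4: (8,1), (8,4)^3, (8,7)^3, (8,8)
outer_K = [1, 4, 7, 8]                    # K_i (dimension over GF(q_i))
outer_d = [8, 4, 2, 1]                    # d(B_i)

N = 8                                     # outer code length
length = N * 8                            # N * n(A_1) = 64
K = sum(Ki * (inner_k[i] - inner_k[i + 1]) for i, Ki in enumerate(outer_K))
d_bound = min(dB * dA for dB, dA in zip(outer_d, inner_d))

print(length, K, d_bound)   # expected: 64 42 8
```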

13 Example
RM(4,7) = (128,99,8) code
One can show that its trellis has a maximum log state complexity of 19
It can be decomposed into a 3-level concatenation
Subtrellises of lengths 16 and 8, with at most 256 states

14 Another example
RM(3,7) = (128,64,16) code
One can show that its trellis has a maximum log state complexity of 26
It can be decomposed into a 3-level concatenation
Subtrellises of lengths 16 and 8, with at most 512 states

15 Iterative multistage MLD algorithm
Decoding algorithm (m = 2-stage decoding)
1. Compute the best estimate b^{(1),1} of the first decoding stage and its metric M(b^{(1),1}). If the coset label sequence L(b^{(1),1}) ∈ C: best codeword found, so stop; otherwise proceed to 2.
2. Second stage decoding; obtain L(b^{(2),1}), M(b^{(2),1}). Store b^{(1),1} and b^{(2),1}. Set i_0 = 1, i = 2.
3. Calculate b^{(1),i} and b^{(2),i}. If M(b^{(2),i_0}) > M(b^{(1),i}), decoding is finished, and b^{(1),i_0} and b^{(2),i_0} give the most likely codeword. Otherwise go to 4.
4. If the coset label L(b^{(1),i}) ∈ C: best codeword found, so stop; otherwise proceed to 5.
5. If M(b^{(2),i}) > M(b^{(2),i_0}), set i_0 = i and update the tables; update the index i and go to 3.
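The steps above translate into the following control-flow sketch; stage1, stage2, M, L and in_code are placeholder black boxes (the i-th best candidate of each stage, a candidate's metric, its coset label sequence, and membership in the overall code C), not names from the text.

```python
def ims_mld_two_stage(stage1, stage2, M, L, in_code, max_candidates):
    b1 = stage1(1)                            # step 1
    if in_code(L(b1)):
        return b1                             # best codeword already found
    best1, best2 = b1, stage2(1)              # step 2: store b^(1),1 and b^(2),1
    for i in range(2, max_candidates + 1):    # steps 3-5
        b1_i, b2_i = stage1(i), stage2(i)
        if M(best2) > M(b1_i):                # step 3: stored pair is most likely
            return best1, best2
        if in_code(L(b1_i)):                  # step 4
            return b1_i
        if M(b2_i) > M(best2):                # step 5: update the stored best pair
            best1, best2 = b1_i, b2_i
    return best1, best2
```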

16 Example IMS-MLD
Can be generalized to an m-stage iterative procedure
Example: the RM(3,7) = (128,64,16) code

17 Convolutional inner codes
Convolutional codes can of course be used as inner codes
This facilitates SD decoding
Example in the book...

18 Concatenation of binary codes
Also possible with binary outer codes (block or convolutional)
More difficult to make statements about the overall minimum distance
An interleaver is useful for increasing the distance
SISO algorithms are useful
Iterative decoding is useful
Serial concatenation / parallel concatenation

19 Suggested exercises
In principle: all Ch. 15 problems
But some rely on insight into the RM codes
