
1 Channel Coding

2 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

3 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

4 Recap…  Information is transmitted through channels (e.g. wires, optical fibres, and even air)  Channels are noisy, so we do not always receive exactly what was transmitted

5 System Model  A Binary Symmetric Channel (BSC)  Each bit is flipped (crossover) with probability p

6 Example: transmitting the weather (overcast / sunny)
One bit (0 / 1): every received word is still a valid word, so errors cannot even be detected.
Two bits (00 / 11), forbidden words 01, 10: can detect 1 error (receiving 01 or 10).
Three bits (000 / 111), forbidden words 001, 010, 100, 101, 011, 110: can detect 2 errors, or correct 1 error (001, 010, 100 → 000; 101, 011, 110 → 111).
Conclusion: within a set of words, shrinking the subset of allowed codewords, i.e. increasing the redundancy of the encoding, improves its error-detection or error-correction capability. The detection and correction power of channel coding is obtained at the cost of added signal redundancy.

7 Repetition Coding  Assume a rate-1/3 repetition code (send each bit three times, decode by majority vote)  What is the probability of error?  If crossover probability p = 0.01, Pe ≈ 0.0003  Here the coding rate R = 1/3. Can we do better? How much better?
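As a sanity check on the numbers above, a minimal sketch (assuming an odd repetition factor and majority-vote decoding): decoding fails when more than half of the n copies are flipped.

```python
from math import comb

def repetition_error_prob(p: float, n: int = 3) -> float:
    """Probability that majority decoding of an n-fold repetition
    code fails on a BSC with crossover probability p (n odd)."""
    t = n // 2  # majority voting corrects up to t = (n-1)/2 flips
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

print(repetition_error_prob(0.01))  # ~2.98e-4, matching Pe ≈ 0.0003
```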

8 Shannon’s Theorem  Given a noisy channel (some fixed p) and a value of P e which we want to achieve: “We can transmit through the channel and achieve this probability of error at any coding rate up to a maximum rate C(p)”  Is it counterintuitive?  Do such good codes exist?

9 Channel Capacity  C(p) is called the channel capacity  For the binary symmetric channel, C(p) = 1 − H(p) = 1 + p log 2 p + (1 − p) log 2 (1 − p)  Can we really design codes that achieve this rate? How?
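A one-function sketch evaluating this BSC capacity formula:

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel: C(p) = 1 - H2(p),
    where H2 is the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0  # a deterministic channel carries 1 bit per use
    h2 = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1.0 - h2

print(bsc_capacity(0.01))  # ~0.919 bits per channel use
```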

10 What is Coding?  Coding is the conversion of information to another form for some purpose.  Source coding: the purpose is removing redundancy from the information (e.g. ZIP, JPEG, MPEG-2)  Channel coding: the purpose is to defeat channel noise

11 Digital Communication System  Information Source → Source Encoder (JPEG, MPEG, etc.) → Channel Encoder (RS code, Turbo code, etc.) → Modulator (QPSK, QAM, BPSK, etc.) → Channel → Demodulator → Channel Decoder → Source Decoder → Data Sink, with bit rate r b out of the source encoder, coded rate r c out of the channel encoder, and symbol rate r s out of the modulator

12 Errors in transmission are corrected “forward” (without retransmission) using channel coding

13 Introduction: Channel Coding  Principle: add redundancy to minimize the error rate  Transmitter: Source → Channel encoder → Channel; Receiver: Channel → Channel decoder → Sink  (Figure: the encoder maps a short information sequence, e.g. 0 1 0 1, to a longer coded sequence; the decoder uses the redundancy to recover the original bits despite channel errors.)

14 Channel Coding  Channel encoding: adding redundant symbols to the data so that errors can be corrected.  Modulation: conversion of symbols to a waveform for transmission.  Demodulation: conversion of the waveform back to symbols, usually one at a time.  Decoding: using the redundant symbols to correct errors.

15 Types of Error Control  Before we discuss the details of structured redundancy, let us describe the two basic ways such redundancy is used for controlling errors. Error detection and retransmission utilizes parity bits (redundant bits added to the data) to detect that an error has been made, and requires a two-way link for dialogue between the transmitter and receiver. Forward error correction (FEC) requires only a one-way link, since in this case the parity bits are designed for both the detection and correction of errors.

16 Channel coding techniques overview

17 FEC Historical Pedigree (1950-1970)  1948: Shannon's paper  Hamming defines basic binary codes  Reed and Solomon define their ECC technique  BCH codes proposed  Gallager's thesis on LDPCs  Forney suggests concatenated codes  Viterbi's paper on decoding convolutional codes  Berlekamp and Massey rediscover Euclid's polynomial technique and enable practical algebraic decoding  Early practical implementations of RS codes for tape and disk drives

18 FEC Historical Pedigree II (1980-2000)  1982: Ungerboeck's TCM paper  TCM heavily adopted into standards  RS codes appear in CD players  Late 1980s: first integrated Viterbi decoders  1993: Berrou's turbo code paper  Turbo codes adopted into standards (DVB-RCS, 3GPP, etc.)  Renewed interest in LDPCs due to turbo code research  2003: LDPC beats turbo codes for the DVB-S2 standard

19 How to Evaluate Code Performance?  Need to consider code rate (R), SNR (E b /N 0 ), and bit error rate (BER).  Coding gain is the saving in E b /N 0 required to achieve a given BER when coding is used vs. no coding.  Generally, the lower the code rate, the higher the coding gain.  Better codes provide higher coding gains, but are usually more complicated and costlier to implement.

20 Why Use Error-Correction Coding  Trade-offs: error performance versus bandwidth; power versus bandwidth; data rate versus bandwidth; capacity versus bandwidth; coded versus uncoded performance  Coding gain: for a given bit-error probability, coding gain is defined as the reduction in E b /N 0 that can be realized through the use of the code.
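To make the definition concrete, a small sketch. The 9.6 dB figure for uncoded BPSK at BER = 1e-5 is standard; the 4.4 dB coded figure is a hypothetical number purely for illustration.

```python
from math import erfc, sqrt

def bpsk_ber(ebno_db: float) -> float:
    """Uncoded BPSK bit error rate on AWGN: Q(sqrt(2 Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * erfc(sqrt(ebno))

print(bpsk_ber(9.6))   # ~1e-5: uncoded BPSK needs about 9.6 dB here
# If a coded system reaches BER = 1e-5 at, say, 4.4 dB (hypothetical),
# its coding gain at that BER is the difference in required Eb/N0:
print(9.6 - 4.4)       # 5.2 dB coding gain
```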

21 State-of-the-Art High-Coding-Gain Codes  (15, ¼) concatenated code: a constraint-length-15, rate-¼ convolutional code  Turbo codes These codes can achieve coding gains close to the Shannon bound, but the implementation cost is high!

22 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

23 Parity Check Codes  #information bits transmitted = k  #bits actually transmitted = n = k+1  Code Rate R = k/n = k/(k+1)  Error detecting capability = 1  Error correcting capability = 0

24 2-D Parity Check  Rate?  Error detecting capability?  Error correcting capability? (Figure: a rectangular array of data bits; the bottom row consists of a check bit for each column and the last column consists of a check bit for each row.)
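A runnable sketch of 2-D (product) parity that answers the slide's questions: for a k × m data array the rate is km/((k+1)(m+1)); the minimum distance is 4, so any single error is corrected (located by one failing row check and one failing column check) and up to 3 errors are detected.

```python
def add_2d_parity(rows):
    """Append an even-parity bit to each row, then a parity row over
    the columns. rows: list of equal-length 0/1 lists."""
    coded = [r + [sum(r) % 2] for r in rows]
    coded.append([sum(col) % 2 for col in zip(*coded)])
    return coded

def check_2d_parity(coded):
    """Return indices of failing row and column checks; a single bit
    error is pinpointed by one bad row and one bad column."""
    bad_rows = [i for i, r in enumerate(coded) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*coded)) if sum(c) % 2]
    return bad_rows, bad_cols

block = add_2d_parity([[1, 0, 0], [0, 1, 1]])
block[1][2] ^= 1                 # inject a single bit error
print(check_2d_parity(block))    # ([1], [2]) -> error at row 1, col 2
```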

25 Block Codes  (n,k) block codes – message: k-tuple u = (u 1, u 2, …, u k ) – codeword: n-tuple v = (v 1, v 2, …, v n ) – code rate: R = k/n
(6,3) binary block code (message → codeword):
(0 0 0) → (0 0 0 0 0 0)    (1 0 0) → (1 1 0 1 0 0)
(0 1 0) → (0 1 1 0 1 0)    (1 1 0) → (1 0 1 1 1 0)
(0 0 1) → (1 1 1 0 0 1)    (1 0 1) → (0 0 1 1 0 1)
(0 1 1) → (1 0 0 0 1 1)    (1 1 1) → (0 1 0 1 1 1)

26 Linear Block Codes  #parity bits = n − k (= 1 for the single parity check code)  Message m = {m 1 m 2 … m k }  Transmitted codeword c = {c 1 c 2 … c n }  A generator matrix G (k×n), with c = mG. A systematic linear block code is described by a generator matrix of the form G = [I k | P]  Parity-check matrix H = [P T | I n−k ], satisfying GH T = 0  Syndrome testing: s = rH T is zero if and only if the received word r is a valid codeword

27 Linear Block Codes  Linearity: any linear combination of codewords is a codeword  Example: (7,4) Hamming code k = 4, n = 7 (n = 2 m − 1, m ≥ 2) 4 message bits at positions (3,5,6,7), 3 parity bits at positions (1,2,4) d min = 3 Error correcting capability = 1, error detecting capability = 2
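A syndrome-decoding sketch for the (7,4) Hamming code, using one common systematic convention G = [I | P], H = [Pᵀ | I]; the slide's parity positions (1,2,4) correspond to a column-permuted equivalent of this form.

```python
import numpy as np

# Systematic generator and parity-check matrices, G = [I_4 | P],
# H = [P^T | I_3], satisfying G H^T = 0 (mod 2).
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 0,1,1],
              [0,0,1,0, 1,1,1],
              [0,0,0,1, 1,0,1]])
H = np.array([[1,0,1,1, 1,0,0],
              [1,1,1,0, 0,1,0],
              [0,1,1,1, 0,0,1]])

def encode(m):
    return m @ G % 2

def decode(r):
    """Syndrome decoding: the syndrome of a single error equals the
    corresponding column of H, which locates (and fixes) the error."""
    s = H @ r % 2
    if s.any():
        for j in range(H.shape[1]):
            if np.array_equal(H[:, j], s):
                r = r.copy()
                r[j] ^= 1
                break
    return r[:4]  # systematic code: the message is the first k bits

m = np.array([1, 0, 1, 1])
c = encode(m)
c[5] ^= 1                # single bit error on the channel
print(decode(c))         # [1 0 1 1] recovered
```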

28 Cyclic codes  Special case of Linear Block Codes  Cyclic shift of a codeword is also a codeword Easy to encode and decode Can correct continuous bursts of errors CRC (used in Wireless LANs), BCH codes, Hamming Codes, Reed Solomon Codes (used in CDs)  Generated via a generator polynomial instead of a generator matrix.
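A sketch of systematic cyclic encoding by polynomial long division over GF(2). The generator g(x) = x³ + x + 1 shown here generates the cyclic (7,4) Hamming code; CRCs work the same way with longer generators.

```python
def crc_remainder(bits, gen):
    """Divide bits(x) * x^(deg g) by the generator polynomial over
    GF(2); the remainder becomes the appended check bits."""
    data = list(bits) + [0] * (len(gen) - 1)
    for i in range(len(bits)):
        if data[i]:                     # XOR the generator in whenever
            for j, g in enumerate(gen): # the leading bit is 1
                data[i + j] ^= g
    return data[-(len(gen) - 1):]

g = [1, 0, 1, 1]                  # g(x) = x^3 + x + 1, MSB first
msg = [1, 1, 0, 1]
check = crc_remainder(msg, g)
codeword = msg + check
print(check)                      # [0, 0, 1] check bits
print(crc_remainder(codeword, g)) # [0, 0, 0]: a valid codeword divides g(x)
```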

29 Performance of Hard vs. Soft Decision Decoding  Hard decision decoding (HDD): the demodulator outputs a 0 or 1 per symbol and the decoder minimizes Hamming distance. Hamming distance is a good measure of code performance in fading when codes are combined with interleaving.  Soft decision decoding (SDD): make a soft decision corresponding to the distance between the received symbol and the symbols corresponding to a 0-bit and a 1-bit transmission. Codeword errors then depend on the Euclidean distance between the modulation points associated with the transmitted codeword symbols, so the channel code can be designed jointly with the modulation (coded modulation).  The coding gain depends on the code rate, the number of information bits per codeword, the minimum distance of the code, and the channel SNR.  Performance of SDD is about 2-3 dB better than HDD!

30 Common Linear Block Codes  Hamming codes: d min = 3  Golay code: a (23,12) linear block code, d min = 7, t = 3  Extended Golay code: adding a single parity bit to the Golay code gives a (24,12) linear block code with d min = 8, t = 3; the rate-1/2 structure simplifies implementation (same clock)  Bose-Chaudhuri-Hocquenghem (BCH) codes: cyclic codes with n = 2 m − 1, m ≥ 3 (n = 7, 15, 31, 63, …), t < (d − 1)/2

31 Nonbinary Block Codes: The Reed-Solomon Code  K information symbols are mapped into codewords of length N  The N codeword symbols of each codeword are chosen from a nonbinary alphabet of size q  RS codes: N = q − 1 = 2 m − 1 (for q = 2 m ), t = ⌊(N − K)/2⌋, d min = N − K + 1
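The parameter relations from this slide in code; RS(255, 223) over GF(256), used in the CCSDS deep-space standard, is shown as an example.

```python
def rs_parameters(m: int, K: int):
    """Parameters of a Reed-Solomon code over GF(2^m): length
    N = 2^m - 1, d_min = N - K + 1, and it corrects t = (N-K)//2
    symbol errors (hence bursts of bit errors inside a symbol)."""
    N = 2**m - 1
    return N, N - K + 1, (N - K) // 2

N, dmin, t = rs_parameters(8, 223)
print(N, dmin, t)   # 255, 33, 16 -> corrects any 16 symbol errors
```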

32 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

33 Convolutional Codes  Encoder consists of shift registers forming a finite state machine  Decoding is also simple – the Viterbi decoder, which works by tracking these states  First used by NASA in the Voyager space programme  Extensively used for coding speech data in mobile phones

34 Convolutional Codes (n,k,m) Convolutional Codes –message : k-tuple u=(u 1,u 2,…,u k ) –code word : n-tuple v=(v 1,v 2,…,v n ) –code rate : R=k/n –memory order : m –Constraint length : K=m+1 –Generator polynomials

35 Convolutional Coding: Trellis Example

36 Trellis Example (cont)

37 Convolutional Decoding  Optimal (maximum-likelihood) decoding is achieved by maximizing the likelihood function over codewords: compare the received sequence to all possible codewords and pick the one with the smallest distance  Viterbi published a dynamic programming algorithm for this decoding in 1967  Decoding complexity is proportional to the number of states and the number of branches into each state. Example: the 64-state code used in PBCC or IEEE 802.11a requires 128 metric calculations per transition in the trellis
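To make the trellis search concrete before the ACS slides, a minimal hard-decision Viterbi sketch for the textbook K = 3, rate-1/2 code with octal generators (7, 5); this is an illustrative toy, not the 64-state IEEE 802.11a decoder.

```python
G_POLY = (0b111, 0b101)   # generators (7, 5) octal: 1+D+D^2, 1+D^2

def conv_encode(bits):
    """Rate-1/2 encoder; state holds the previous two input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G_POLY]
        state = reg >> 1
    return out

def viterbi_decode(rx, n_bits):
    """Hard-decision Viterbi: per step, Add-Compare-Select keeps the
    minimum-Hamming-metric path into each of the 4 states."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]          # encoder starts in state 0
    paths = [[] for _ in range(4)]
    for i in range(n_bits):
        r = rx[2 * i:2 * i + 2]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):              # hypothesize the input bit
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") % 2 for g in G_POLY]
                ns = reg >> 1
                m = metrics[s] + sum(x != y for x, y in zip(exp, r))
                if m < new_m[ns]:         # the Compare-Select step
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[min(range(4), key=lambda s: metrics[s])]

msg = [1, 0, 1, 1, 0, 0]                  # two tail zeros flush the encoder
rx = conv_encode(msg)
rx[3] ^= 1                                # inject one channel error
print(viterbi_decode(rx, len(msg)))       # recovers [1, 0, 1, 1, 0, 0]
```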

38 Viterbi Decoding: ACS Operation

39 Viterbi Decoding: ML Path

40 Hardware Implementation of Viterbi  64-state code from PBCC and IEEE 802.11a  32 Add-Compare-Select (ACS) units (32 butterflies)  Traceback length is 32 (should be 4-5 times the constraint length)  Datapath: soft inputs → Branch Metric Computation → Add-Compare-Select (with Set Initial State and Store Path Metric) → Branch History → Trace Back Unit → decoded bit stream

41 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

42 Concatenated Codes  Two levels of coding: an inner code, which removes most of the errors introduced by the channel, and an outer code, a less powerful code that further reduces the error probability once the received coded bits have a relatively low probability of error  Effective at correcting bursts of errors  Inner and outer codes are separated by an interleaver to break up bursts of errors  Decoding is usually done in two stages: first the inner code is decoded, then the outer code is decoded separately. Joint maximum-likelihood decoding is optimal but complex; a near-optimal decoder uses iterative decoding  turbo codes

43 Concatenated Codes  Data → RS Encoder (outer code) → Interleaver → Conv. Encoder (inner code) → Channel → Viterbi Decoder → De-Interleaver → RS Decoder → Data

44 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

45 Concatenating Convolutional Codes  Parallel concatenation: Data feeds CC Encoder 1 directly and CC Encoder 2 through an interleaver; both outputs go to the channel. Decoding: two Viterbi/APP decoders exchange information through the interleaver / de-interleaver, and a combiner forms the data estimate.  Serial concatenation: Data → CC Encoder 1 → Interleaver → CC Encoder 2 → Channel → Viterbi/APP Decoder → De-Interleaver → Viterbi/APP Decoder → Data

46 Turbo Codes  Proposed by Berrou & Glavieux in 1993  Parallel Concatenated Convolutional Code (PCCC)  Advantages: use very large block lengths; have feasible decoding complexity; perform very close to capacity  Limitations: delay and complexity

47 Turbo Codes  Parallel concatenated encoding and iterative decoding  One decoder (D) for every encoder (C)  Turbo encoder: input U feeds C 1 directly and C 2 through an interleaver π; turbo decoder: D 1 and D 2 exchange information through π and π −1 to produce the estimate Û  Iterative decoding: D1 → D2 → D1 → D2 → D1 → …  Improved bit-error-rate performance

48 Iterative Decoding of CCCs  Turbo codes add coding diversity by encoding the same data twice through concatenation. Soft-output decoders are used, which can provide reliability-update information about the data estimates to each other for use during the subsequent decoding pass.  The two decoders, each working on a different codeword, can “iterate” and continue to pass reliability updates to each other in order to improve the probability of converging on the correct solution. Once some stopping criterion has been met, the final data estimate is output.  Turbo codes provided the first known means of achieving decoding performance close to the theoretical Shannon capacity. (Figure: received data circulating between two Viterbi/APP decoders through an interleaver and de-interleaver.)

49 Soft Input Soft Output Decoding

50 MAP/APP Decoders  Maximum A Posteriori / A Posteriori Probability: two names for the same thing  Basically runs the Viterbi algorithm across the data sequence in both directions, roughly doubling complexity  Becomes a bit estimator instead of a sequence estimator  Optimal for convolutional turbo codes: needs two passes of MAP/APP per iteration, essentially 4× the computational complexity of a single-pass Viterbi  The Soft-Output Viterbi Algorithm (SOVA) is sometimes substituted as a suboptimal simplification
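A sketch of the max-star (Jacobian logarithm) operation at the heart of Log-MAP, plus a lookup-table variant in the spirit of the quantized correction table on the next slide; the LUT thresholds here mirror that table.

```python
import math

def max_star(x: float, y: float) -> float:
    """Exact Jacobian logarithm used in Log-MAP:
    ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x-y|})."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def max_star_lut(x: float, y: float) -> float:
    """Max-Log-MAP plus a coarse lookup-table correction term."""
    d = abs(x - y)
    corr = 0.75 if d < 0.25 else 0.5 if d < 0.75 else 0.25 if d <= 2.0 else 0.0
    return max(x, y) + corr

print(max_star(1.0, 1.2))      # ~1.798
print(max_star_lut(1.0, 1.2))  # 1.7 with the quantized correction
```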

51 Log-MAP Algorithms  The correction term ln(1 + e^-|x-y|) is quantized to a small lookup table:
|x-y|            0~0.25  0.25~0.5  0.5~0.75  0.75~1  1~1.25  1.25~1.5  1.5~2  >2
ln(1+e^-|x-y|)   0.75    0.5       0.5       0.25    0.25    0.25      0.25   0

52 Log-MAP Algorithms

53 The MAP Algorithm is a hard nut to crack

54 The SOVA Algorithm  (Figure: stored input symbols pass through a delay line while trace back compares the ML path against competitor paths; the output combines the sign of the bit decision with a reliability weight.)

55 Log-MAP vs. SOVA  (Performance curves: BER improves with each iteration increment for both algorithms, with Log-MAP outperforming SOVA. G = [7 5], unpunctured rate 1/3, frame size = 1024, 8 iterations.)

56 Turbo Code Performance

57 Turbo Code Performance II  The performance curves shown here are end-to-end measured performance in practical modems. The black lines are a PCCC turbo code; the blue lines are a concatenated Viterbi-RS decoder. The vertical dashed lines show QPSK capacity for R = ¾ and R = 7/8; the capacity for QPSK at R = ½ is 0.2 dB.  The turbo code system clearly operates much closer to capacity. Much of the observed distance to capacity is due to implementation loss in the modem.

58 Applications  Turbo codes are currently adopted as the channel coding scheme in many 3G communication systems – WCDMA, CDMA2000 – CCSDS in space communications – baseband signal compensation in fiber transmission systems

59 Specifications in WCDMA
Type of TrCH                  Coding scheme          Coding rate
BCH, PCH, RACH                Convolutional coding   1/2
CPCH, DCH, DSCH, FACH         Convolutional coding   1/3, 1/2
                              Turbo coding           1/3
                              No coding              -

60 Specification in CDMA2000
Channel Type                          Forward Error Correction code   Code Rate
Access Channel                        Convolutional                   1/3
Enhanced Access Channel               Convolutional                   1/4
Reverse Common Control Channel        Convolutional                   1/4
Reverse Dedicated Control Channel     Convolutional                   1/4
Reverse Fundamental Channel           Convolutional                   1/2, 1/3, 1/4
Reverse Supplemental Code Channel     Convolutional or Turbo code     1/2, 1/3 (conv.); 1/2, 1/3, 1/4 (turbo)

61 Turbo Code vs. Convolutional Code  Convolutional code (NSC, non-systematic convolutional): non-recursive, non-systematic, no interleaver  Turbo code (built from RSC, recursive systematic convolutional, encoders): recursive, systematic, parallel structure, uses an interleaver

62 Serially concatenated block (product) codes are brothers and rivals of turbo codes

63 Turbo Codes vs. Product Codes

64 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

65 Low-Density Parity-Check Codes  A kind of linear block code with a sparse parity-check matrix  Iterative decoding of simple parity check codes  First developed by Gallager, with iterative decoding, in 1962!  Published examples of good performance with short blocks: Kou, Lin, Fossorier, Trans. IT, Nov. 2001  To achieve Shannon capacity we need: enough code length, random coding, maximum-likelihood decoding  Near-capacity performance with long blocks. Very near! - Chung, et al., “On the design of low-density parity-check codes within 0.0045dB of the Shannon limit”, IEEE Comm. Lett., Feb. 2001  Complexity issues, especially in the encoder  Implementation challenges – encoder, decoder memory

66 Categories of LDPC Codes  Regular LDPC codes: the weights of rows and columns are fixed  Binary irregular LDPC codes: relax the weight constraints; the weights of the rows and columns need not be equal; defined over GF(2)  M-ary irregular LDPC codes: defined over GF(M)

67 Regular LDPC Codes – Gallager Codes  Linear block codes: the generator matrix G is derived from the parity-check matrix H  Encoding: construct H  Decoding: tree-based iterative decoding

68  The complete decoding procedure is as follows:  (1) First compute all the parity checks (i.e., compute the syndrome by multiplying the decoder input vector by the parity-check matrix). If all checks are zero, the transmission is assumed correct, no error has occurred, and decoding stops; otherwise go to (2).  (2) Pick any bit involved in the failed checks (without re-using a bit already confirmed in this decoding pass). Take this bit as the root node of a tree and construct an m-level tree as described above. Decoding proceeds from the top level of the tree toward the root. For each child node, check whether all the branches (parity checks) connected to it fail; if they all fail, that node is very likely in error, so flip it.  (3) For each child node at the next level down, recompute the connected checks toward the root using the values corrected at the level above; if they all fail, flip the bit. Check whether the root has been reached; if not, continue with (3); if the root has been reached, apply all the corrected bits and return to (1).  In this way, bits and checks can help decode bits that appear unrelated to them. A maximum number of iterations is set; when the iteration count exceeds this maximum, decoding terminates.

69 LDPC Bipartite Graph  (Figure: an example bipartite graph for an irregular LDPC code, with variable nodes (codeword bits) connected by edges to check nodes.)

70 Binary Irregular LDPC Codes  Encoding: MacKay method, based on the bipartite graph, eliminating cycles in the graph (constructions 1A, 2A, 1B, and 2B); Luby method; Davey method  Decoding: belief propagation (also called message passing or sum-product)

71 Iteration Processing  1st half iteration: at each check node (one per parity bit), compute α's, β's, and r's for each edge by a forward-backward recursion over the incoming messages q i : α i+1 = maxx(α i , q i ), β i = maxx(β i+1 , q i ), r i = maxx(α i , β i+1 )  2nd half iteration: at each variable node (one per code bit), compute mV = mV 0 + Σ r's and the outgoing messages q i = mV − r i
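As an illustration only, a sketch of the widely used min-sum approximation of the check-node "boxplus" that the maxx recursion computes (signs multiply, magnitudes take the minimum over the other edges; the exact recursion adds a small correction term), together with the variable-node update from the slide.

```python
def check_node_update(q):
    """Extrinsic check-to-variable messages r_i: min-sum boxplus of
    all incoming LLRs q_j with j != i."""
    r = []
    for i in range(len(q)):
        others = q[:i] + q[i + 1:]
        sign = -1 if sum(v < 0 for v in others) % 2 else 1
        r.append(sign * min(abs(v) for v in others))
    return r

def variable_node_update(mV0, r):
    """Variable-node half-iteration: total belief mV = mV0 + sum(r),
    outgoing messages q_i = mV - r_i (exclude own contribution)."""
    mV = mV0 + sum(r)
    return mV, [mV - ri for ri in r]

r = check_node_update([1.2, -0.4, 2.0, 0.7])
print(r)                             # [-0.4, 0.7, -0.4, -0.4]
print(variable_node_update(0.5, r))  # (0.0, [0.4, -0.7, 0.4, 0.4])
```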

72 M-ary Irregular LDPC Codes  Better performance than binary irregular LDPC codes, but higher complexity  Combine multiple bits into one symbol, so there are fewer cycles in the bipartite graph  better performance  Fast implementation via the FFT  Construction methods are similar to those for binary LDPC codes, except for the (q − 1) kinds of nonzero symbols

73 LDPC Performance Example  LDPC performance can be very close to capacity: the closest performance to the theoretical limit ever reported was with an LDPC, within 0.0045 dB of capacity. The code shown here is a high-rate code operating within a few tenths of a dB of capacity.  Turbo codes tend to work best at low code rates and not so well at high code rates; LDPCs work very well at both high and low code rates.

74 LDPC vs. Turbo Code  LDPC: high complexity in encoding; low complexity in decoding; the BP algorithm is simpler than turbo decoding, parallelizable, and can be closely approximated with decoders of very low complexity; the decoding algorithm can determine when a correct codeword has been detected  Turbo: low complexity in encoding (linear in blocklength); high complexity in decoding, due to the iterative nature and message passing; the MAP/SOVA algorithms are complex; the decoding algorithm cannot determine whether a correct codeword has been detected

75 Applications for LDPC Codes  Wireless, wired, and optical communications  Different throughput requirements  Need to design codes that work with multilevel modulation (e.g. QAM or M-PSK)

76 Research Opportunity  Code Design  Hardware Implementation  LDPC Application for next generation communication systems (Wireless, OFDM, ADSL).

77 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

78 Code Modulation  Jointly optimizes channel coding and modulation  Results in significant coding gains without bandwidth expansion  Uses multilevel/phase modulation and simple convolutional coding with mapping by set partitioning  Examples: coset codes, lattice codes (E is a block encoder), trellis codes (E is a convolutional encoder), etc.

79 Trellis coded modulation: coding designed specifically for the modulation scheme improves bandwidth efficiency

80 General scheme for coded modulation  Channel coding: a binary encoder E and a subset selector  Modulation: a point selector, a constellation map, and an MQAM modulator

81 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

82 Coding with Interleaving  To improve the performance of coding in fading channels, coding is combined with interleaving to mitigate the effect of error bursts  Spread error bursts due to deep fades over many codewords, so that each received codeword exhibits at most a few simultaneous symbol errors, which can be corrected  The interleaving depth should be large enough that fading is independent across a received codeword  Coding with interleaving is a form of diversity: it maximizes the diversity order of the code on fading channels

83 Coding with Interleaving (cont.)  Block coding with interleaving: block interleaver  Convolutional coding with interleaving: convolutional interleaver; delays the transmission through the channel  Coded modulation with symbol/bit interleaving: Bit-interleaved coded modulation (BICM) interleaves the bits and then maps them to modulated symbols. Symbol-interleaved coded modulation (SICM) does the modulation and coding jointly, as in coded modulation for AWGN channels, and interleaves the resulting symbols prior to transmission. BICM performs better than SICM.
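A sketch of the classic block interleaver (write row by row, transmit column by column): if each row holds one codeword, a burst of up to `rows` consecutive channel errors hits each codeword at most once after deinterleaving.

```python
def block_interleave(symbols, rows, cols):
    """Write the symbols into a rows x cols array row by row and
    read them out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    # Reading columns of a rows x cols array is writing rows of the
    # transposed cols x rows array, so swap the dimensions to invert.
    return block_interleave(symbols, cols, rows)

data = list(range(12))                       # e.g. 3 codewords of length 4
tx = block_interleave(data, 3, 4)
print(tx)                                    # [0, 4, 8, 1, 5, 9, ...]
print(block_deinterleave(tx, 3, 4) == data)  # True
```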

84 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

85 Unequal Error Protection Codes  Not all bits transmitted over the channel have the same priority or bit error probability requirement  UEP  UEP coding techniques: Multilevel coding: the source encoder first divides the information sequence into M parallel bit streams of decreasing priority; the channel encoder consists of M different binary error-correcting codes with decreasing codeword distances, and the ith-priority bit stream enters the ith encoder. Multistage decoding: first the most powerful code is decoded, then the second, and so forth; each decoded code is assumed correct when making decisions on the weaker code sequences. Bandwidth-efficient implementations: time-multiplexed coded modulation (different lattice or trellis coded modulations with different coding gains are used for each priority class of input data) and joint optimization of the signal constellation (high-priority bits are heavily encoded and are used to choose the subset of the partitioned constellation; low-priority bits are uncoded or lightly coded and are used to select the constellation signal point)  The bit error probabilities of the channel code should be matched to the priority or P b requirement associated with the bits to be transmitted  Can be considered a joint design of source coding and channel coding

86 Multilevel Coding  A number of parallel encoders; the outputs at each instant select one symbol  (Figure: data bits from the information source are partitioned M ways; stream i of q i = K i bits enters encoder E i (rate R i ), and the N-bit outputs x 1 , …, x M are mapped jointly to a signal point in a 2 M -point constellation.)

87 Multistage Decoding  (Figure: the received signal Y is decoded successively by decoders D 1 , D 2 , …, D M , each stage using the decisions of the previous stages.)

88 Outline  Overview of Code Design  Linear Block Codes  Convolutional Codes  Concatenated Codes  Turbo Codes  Low-Density Parity-Check Codes  Code Modulation  Coding with Interleaving for Fading Channels  Unequal Error Protection Codes  Joint Source and Channel Coding

89 Joint Source and Channel Coding  Joint source and channel coding includes: Source-optimized channel coding: the source code is designed for a noiseless channel; a channel code is then designed for this source code to minimize end-to-end distortion over the given channel, based on the distortion associated with corruption of the different transmitted bits. Channel-optimized source coding: the channel code is designed independently of the source; the source code is then optimized based on the error probability associated with the channel code. Iterative algorithms: source-optimized channel coding and modulation can be combined with channel-optimized source coding  Significant performance advantages are possible!  Turbo and LDPC codes have not yet been combined with source codes in a joint optimization.

90 Current State-of-the-Art  Block codes: Reed-Solomon widely used in CD-ROMs and communications standards; a fundamental building block of basic ECC  Convolutional codes: K = 7 CC is very widely adopted across many communications standards; K = 9 appears in some limited low-rate applications (cellular telephones); often concatenated with RS for streaming applications (satellite, cable, DTV)  Turbo codes: limited use due to complexity and latency – cellular and DVB-RCS; TPCs used in satellite applications – reduced complexity  LDPCs: recently adopted in DVB-S2 and ADSL, being considered for 802.11n and 802.16e; complexity concerns, especially memory – expect broader consideration

91 Cited References [1] http://www.andrew.cmu.edu/user/taon/Viterbi.html [2] Kou, Lin, Fossorier, “Low-Density Parity-Check Codes Based on Finite Geometries: A Rediscovery and New Results”, IEEE Trans. on IT, Vol. 47, No. 7, p. 2711, November 2001 [3] http://www.wirelesscommunication.nl/reference/slides/turbo_alex/sld001.htm

92 Partial Reference List  TCM: G. Ungerboeck, “Channel Coding with Multilevel/Phase Signals”, IEEE Trans. IT, Vol. IT-28, No. 1, January 1982  BICM: G. Caire, G. Taricco, and E. Biglieri, “Bit-Interleaved Coded Modulation”, IEEE Trans. on IT, May 1998  LDPC: W. Ryan, “An Introduction to Low Density Parity Check Codes”, UCLA Short Course Notes, April 2001; Kou, Lin, Fossorier, “Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and New Results”, IEEE Trans. on IT, Vol. 47, No. 7, November 2001; R. Gallager, “Low-density parity-check codes”, IRE Trans. IT, Jan. 1962; Chung, et al., “On the design of low-density parity-check codes within 0.0045dB of the Shannon limit”, IEEE Comm. Lett., Feb. 2001; J. Hou, P. Siegel, and L. Milstein, “Performance Analysis and Code Optimization for Low Density Parity-Check Codes on Rayleigh Fading Channels”, IEEE JSAC, Vol. 19, No. 5, May 2001; L. Van der Perre, S. Thoen, P. Vandenameele, B. Gyselinckx, and M. Engels, “Adaptive loading strategy for a high speed OFDM-based WLAN”, Globecom 98; numerous articles on recent developments in LDPCs, IEEE Trans. on IT, Feb. 2001

93 Digital Fountains: Applications and Related Issues

94 Goals  Explain the digital fountain paradigm for network communication.  Examine related advances in coding.  Summarize work on applications.  Speculate on what comes next.  For more, see Digital Fountains: A Survey and Look Forward www.eecs.harvard.edu/~michaelm/ListByYear.html

95 What is a Digital Fountain?  For this talk, a digital fountain is an ideal/paradigm for data transmission. Vs. the standard (TCP) paradigm: data is an ordered finite sequence of bytes.  Instead, with a digital fountain, a k symbol file yields an infinite data stream; once you have received any k symbols from this stream, you can quickly reconstruct the original file.

96 How Do We Build a Digital Fountain?  We can construct (approximate) digital fountains using erasure codes. Including Reed-Solomon, Tornado, LT, fountain codes.  Generally, we only come close to the ideal of the paradigm. Streams not truly infinite; encoding or decoding times; coding overhead.

97 History  Reed-Solomon codes  Tornado Codes  Luby Transform  Rateless/Raptor codes

98 Raptor/Rateless Codes  Properties: “Infinite” supply of packets possible. Need k(1 + ε) symbols to decode, for some ε > 0. Decoding time proportional to k ln(1/ε). On average, ln(1/ε) (constant) time to produce an encoding symbol.  Conclusion: these codes can be made very efficient and deliver on the promise of the digital fountain paradigm.
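A toy sketch of the rateless idea (the helper names are hypothetical, and a real LT/Raptor code draws packet degrees from the robust soliton distribution and adds a precode): encode an endless stream of random XOR packets; decode by peeling once roughly k(1 + ε) of them have arrived. With too few packets the peeling decoder may stall, leaving None for unrecovered symbols.

```python
import random

def lt_encode_symbol(source, rng):
    """Produce one rateless encoded symbol: the XOR of d randomly
    chosen source symbols (uniform degree keeps the sketch short)."""
    d = rng.randint(1, len(source))
    idx = rng.sample(range(len(source)), d)
    val = 0
    for i in idx:
        val ^= source[i]
    return set(idx), val

def lt_decode(packets, k):
    """Peeling decoder: repeatedly find a packet with exactly one
    unknown source symbol, recover it, and substitute it everywhere."""
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idx, val in packets:
            pending = idx - set(known)
            if len(pending) == 1:
                j = pending.pop()
                for i in idx & set(known):
                    val ^= known[i]     # XOR out already-known symbols
                known[j] = val
                progress = True
    return [known.get(i) for i in range(k)]

rng = random.Random(1)
source = [3, 1, 4, 1, 5, 9, 2, 6]                    # k = 8 source symbols
stream = [lt_encode_symbol(source, rng) for _ in range(20)]
print(lt_decode(stream, 8))   # recovers `source` once enough packets arrive
```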

99 Applications  Reliable multicast  Parallel downloads  Long-distance transmission (avoiding TCP)  One-to-many TCP  Content distribution on overlay networks  Streaming video

100 Reliable Multicast  Many potential problems when multicasting to large audience. Feedback explosion of lost packets. Start time heterogeneity. Loss/bandwidth heterogeneity.  A digital fountain solves these problems. Each user gets what they can, and stops when they have enough.

101 Downloading in Parallel  Can collect data from multiple digital fountains for the same source seamlessly. Since each fountain has an “infinite” collection of packets, no duplicates. Relative fountain speeds unimportant; just need to get enough. Combined multicast/multigather possible.  Can be used for BitTorrent-like applications.

102 Point-to-Point Data Transmission  TCP has problems over long-distance connections. Packets must be acknowledged to increase sending window (packets in flight). Long round-trip time leads to slow acks, bounding transmission window. Any loss increases the problem.  Using digital fountain + TCP-friendly congestion control can greatly speed up connections.  Separates the “what you send” from “how much” you send. Do not need to buffer for retransmission.

103 One-to-Many TCP  Setting: Web server with popular files, may have many open connections serving same file. Problem: has to have a separate buffer, state for each connection to handle retransmissions. Limits number of connections per server.  Instead, use a digital fountain to generate packets useful for all connections for that file.  Separates the “what you send” from “how much” you send. Do not need to buffer for retransmission.  Keeps TCP semantics, congestion control.

104 Distribution on Overlay Networks  Encoded data make sense for overlay networks. Changing, heterogeneous network conditions. Allows multicast. Allows downloading from multiple sources, as well as peers.  Problem: peers may be getting same encoded packets as you, via the multicast. Not standard digital fountain paradigm.  Requires reconciliation techniques to find peers with useful packets.

105 Video Streaming  For “near-real-time” video: latency issue.  Solution: break into smaller blocks, and encode over these blocks. Either equal-size blocks, or blocks increasing in size geometrically, giving only logarithmically many blocks.  Engineering to get the right latency and ensure blocks arrive in time for display.

106 Other Applications  Other possible applications outside of networking Storage systems Digital fountain codes for errors Others???

107 Putting Digital Fountains To Use  Digital fountains are out there. Digital Fountain, Inc. sells them.  Limitations to their use: Patent issues. Perceived complexity. Lack of reference implementation. What is the killer app?

108 Patent Issues  Several patents / patents pending on irregular LDPC codes, LT codes, Raptor codes by Digital Fountain, Inc.  Supposition: the theory/practice of digital fountains was greatly developed by the company and its employees.  Supposition: but this stifles external innovation. Potential threat of being sued. Potential lack of commercial outlet for research.  Suggestion: need unpatented alternatives that approximate a digital fountain. There is work going on in this area, but more is needed to keep up with recent developments in rateless codes.

109 Perceived Complexity  Digital fountains are now not that hard…  …but networking people do not want to deal with developing codes.  A research need: A publicly available, easy to use, reasonably good black box digital fountain implementation that can be plugged in to research prototypes.  Issue: patents. Legal risk suggests such a black box would need to be based on unpatented codes.

110 What’s the Killer App?  Multicast was supposed to be the killer app. But IP multicast was/is a disaster. Distribution now handled by content distributions companies, e.g. Akamai.  Possibilities: Overlay multicast. General wireless: e.g. video phones. Specialized wireless: e.g. automobiles. Others???

111 Conclusions  Digital fountain paradigm and enabling codes have significant potential. Many proposed applications. More to come.  Applications helped push the technology forward: codes with better and better properties.  Challenge in moving from a “technology” to use in the real world. A simple, easy-to-use implementation based on non-proprietary codes might spur the research community. Need more potential killer apps to spur the business community.

