Error Correction Coding


Error Correction Coding (Multimedia Security)

Outline
Part 1. The Problem with Simple Multi-symbol Messages
Part 2. The Idea of Error Correction Codes

Part 1. The Problem with Simple Multi-symbol Messages

In direct message coding, we can choose any set of message marks we wish. Thus, we can ensure that the angle between any pair of message marks is maximal. In any multi-symbol system, however, code separation depends on the methods used for source coding and modulation.

If every possible symbol sequence of a given length is used to represent a message, then under CDM some of the resulting vectors will have poor code separation. This is because there is a lower limit on the inner product between the marks for similar sequences, and hence an upper limit on the angle between the resulting message marks.

For example, consider a system with an alphabet size |A| = 4 and message length L = 3. Let W_{3,1,2} be the message mark embedded in a work to represent the symbol sequence 3, 1, 2.

Now examine the inner product between this message mark and the message mark that encodes the sequence 3,1,4 (which differs only in the last symbol); namely, W_{3,1,4}.

Writing w(s, l) for the reference mark that encodes symbol s in location l, the two message marks are

W_{3,1,2} = w(3,1) + w(1,2) + w(2,3)
W_{3,1,4} = w(3,1) + w(1,2) + w(4,3)

Because all marks for one location are orthogonal to all marks for any other location, the cross-location terms vanish, and the inner product between these two vectors is given by

W_{3,1,2} · W_{3,1,4} = w(3,1)·w(3,1) + w(1,2)·w(1,2) + w(2,3)·w(4,3)

Suppose all the marks are normalized to have unit variance. This means w(3,1)·w(3,1) and w(1,2)·w(1,2) both equal N (the dimensionality of marking space), while w(2,3)·w(4,3) is bounded below by -N. Thus, W_{3,1,2} · W_{3,1,4} ≥ N + N - N = N, and the inner product between the two closest encoded messages cannot possibly be lower than this.

In general, the smallest possible inner product between two message marks that differ in h symbols is N(L - 2h).

Thus, as L increases, the message marks of the closest pair become progressively more similar: code separation degrades as L grows, and the false alarm rate rises accordingly.
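To make this concrete, here is a minimal sketch in Python (made-up helper names and random Gaussian reference marks; none of this apparatus is specified in the slides) that checks the bound numerically: two sequences differing in h symbols yield message marks whose inner product stays at or above N(L - 2h).

    # Sketch: CDM message marks are sums of per-(symbol, location) reference
    # marks. With random marks, different locations are only approximately
    # orthogonal, so the check holds up to sampling noise.
    import numpy as np

    rng = np.random.default_rng(0)
    N, A, L = 10_000, 4, 3                  # dimensionality, alphabet size, length
    ref = rng.standard_normal((A, L, N))    # ref[s, l]: mark for symbol s at location l
    ref /= ref.std(axis=2, keepdims=True)   # unit variance, so ref[s, l] . ref[s, l] ~ N

    def message_mark(seq):
        return sum(ref[s, l] for l, s in enumerate(seq))

    a = message_mark([2, 0, 1])             # the sequence 3,1,2 (0-based symbols)
    b = message_mark([2, 0, 3])             # the sequence 3,1,4 -> h = 1
    print(np.dot(a, b) / N)                 # ~ L - h = 2, above the bound L - 2h = 1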

Part 2. The Idea of Error Correction Codes

The above problem can be solved by defining a source coding system in which NOT every possible sequence of symbols corresponds to a message!

Sequences that correspond to messages are referred to as code words. Sequences that do not correspond to messages are interpreted as corrupted code words.

By defining the mapping between messages and code words in an appropriate way, it is possible to build decoders that can identify the code word closest to a given corrupted sequence (i.e., decoders that correct errors). Such a system is an Error Correction Code, or ECC.

ECCs are typically implemented by increasing the lengths of symbol sequences.

For example, if we have 16 possible messages, we could represent each message with a sequence of 4 bits. The ECC encoder then takes this sequence as input and outputs a longer sequence, say 7 bits. Of all the 2^7 = 128 possible 7-bit words, only 16 would be code words.

The set of code words can be defined in such a way that if we start with one code word, we have to flip at least 3 bits to obtain a code word that encodes a different message.

When presented with a corrupted sequence, the decoder looks for the code word that differs from it in the fewest bits. If only one bit has been flipped between encoding and decoding, the system will still yield the correct message.
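As a minimal sketch of this nearest-code-word decoding, assume the 16 code words come from the (7,4) Hamming code that the slides mention later (the slides do not commit to a particular 7-bit code at this point):

    # Sketch: brute-force nearest-code-word decoding. Any two distinct code
    # words of this code differ in at least 3 bits, so a single flipped bit
    # is always corrected.
    import numpy as np

    # One common systematic generator matrix for the (7,4) Hamming code.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    messages  = [np.array([(m >> i) & 1 for i in range(4)]) for m in range(16)]
    codewords = [msg @ G % 2 for msg in messages]

    def decode(received):
        # Pick the code word that differs from `received` in the fewest bits.
        dists = [int(np.sum(received != c)) for c in codewords]
        return messages[int(np.argmin(dists))]

    sent = codewords[11].copy()
    sent[2] ^= 1                            # one bit flipped in transit
    assert np.array_equal(decode(sent), messages[11])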

[Figure: code words ci and cj, each surrounded by a decoding sphere of radius t]

If we have a 7-bit code (L = 7) encoding 4-bit messages and ensuring that every pair of code words differs in at least three bits (h = 3), the maximum inner product between any two code words will be N(L - 2h) = N(7 - 2×3) = N.

This is better than the performance we would get without error correction, where messages are coded with only 4 bits (L = 4) and two messages can differ in as few as one bit (h = 1).

In this case, the maximum inner product would be N(L - 2h) = N(4 - 2) = 2N. Of course, one can increase the size of the alphabet, rather than the length of the sequences.

There are many ECCs available:
(7,4) Hamming code: corrects single-bit errors.
Algebraic codes: BCH, Reed-Solomon (RS).
Statistical / convolutional codes: trellis codes, turbo codes.

Different codes are suitable for different types of errors:
Random errors → Hamming codes
Burst errors (errors in groups of consecutive symbols) → BCH codes

Since all code words are ultimately modulated, we can view all codes as convenient methods of generating well-separated message marks.

Example: Trellis Codes and Viterbi Decoding

Encoding: The encoder can be implemented as a finite-state machine, as illustrated in Fig. 4.8. The machine begins in state A and processes the bits of the input sequence one at a time. As it processes each input bit, it outputs 4 bits and changes state.

If the input bit is a ‘0’, it traverses the light arc coming from the current state and outputs the 4 bits with which that arc is labeled. If the input bit is a ‘1’, it traverses the dark arc and outputs that arc’s 4-bit label. Thus, a 4-bit message will be transformed into a 16-bit code word after encoding.

For example, the 4-bit sequence (1,0,1,0) is encoded as the 16-bit code word (0001, 1100, 1111, 1110).
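Fig. 4.8 itself is not reproduced in this transcript, so the sketch below uses a hypothetical 8-state, rate-1/4 feed-forward encoder with made-up tap masks; it will not reproduce the slide's exact 16-bit output, but it shows the mechanics of "consume one input bit, emit 4 output bits, change state":

    # Sketch: a generic 8-state (3 memory bits), rate-1/4 convolutional
    # encoder. The four tap masks are hypothetical, not those of Fig. 4.8.
    TAPS = (0b1011, 0b1101, 0b1111, 0b1110)    # taps over [input, s2, s1, s0]

    def parity(x):
        return bin(x).count("1") & 1

    def encode(bits):
        state = 0                              # states 0..7 play the role of A..H
        out = []
        for b in bits:
            window = (b << 3) | state          # input bit joined with state bits
            out.append([parity(window & t) for t in TAPS])   # the arc's 4-bit label
            state = ((state << 1) | b) & 0b111 # shift the input bit into memory
        return out

    print(encode([1, 0, 1, 0]))                # 4 message bits -> 16 code bits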

Note that each bit of the message affects not only the 4 bits used to encode it (behaving as input) but also the encoding of several subsequent bits (behaving as state). Thus, the subsequent bits contain redundant information about earlier bits.

Fig. 4.9 shows an alternative representation of the code, in a diagram called a trellis. Here, we have 8(L + 1) states, where L is the length of the input sequence.

Each row of states corresponds to one state in Fig. 4.8 at different times:
A0 (in Fig. 4.9) → state A (in Fig. 4.8) at the beginning of the encoding.
A1 (in Fig. 4.9) → state A (in Fig. 4.8) after the 1st input bit has been processed.
Each possible code word corresponds to a path through this trellis starting at A0!

We can replace the 4-bit arc labels with symbols drawn from a 16-symbol alphabet. Thus, we can assign a unique reference mark to each arc in the trellis of Fig. 4.9 (one for each of the 16 symbols in each of the L sequence locations).

The message mark resulting from a given code word, then, is simply the sum of the reference marks for the arcs along the path that corresponds to that code word. This is known as trellis-coded modulation.
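As a sketch (random reference marks and an arbitrary arc labeling; neither is specified in the transcript), the modulation step then looks like this, reading the 4-bit arc labels 0001, 1100, 1111, 1110 from the earlier example as the alphabet symbols 1, 12, 15, 14:

    # Sketch: trellis-coded modulation. Each of the 16 symbols gets its own
    # reference mark at each of the L sequence locations; a code word's
    # message mark is the sum of the marks along its path.
    import numpy as np

    rng = np.random.default_rng(1)
    N, L = 1024, 4                          # marking-space dimension, message length
    ref = rng.standard_normal((L, 16, N))   # ref[l, s]: mark for symbol s at step l

    def modulate(symbols):
        return sum(ref[l, s] for l, s in enumerate(symbols))

    w = modulate([1, 12, 15, 14])           # path labeled 0001, 1100, 1111, 1110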

Decoding a trellis-coded message is a matter of finding the most likely path through the trellis.

The most likely path is the one that leads to the highest linear correlation or inner product between the received vector and the message vector for that path. This can be found efficiently using the Viterbi algorithm.

The Viterbi algorithm relies on the fact that the most likely path through each node in the trellis always includes the most likely path up to that node. Thus, once we find the most likely path from A0 to some node further into the trellis, we can forget about all the other possible paths from A0 to that node.

The algorithm proceeds by traversing the trellis from left to right. For each of the 8 states in the columns of the trellis, it keeps track of the most likely path so far from A0 to that state, and the total inner product between the received vector and the reference marks along that path.

In each iteration, it updates these paths and products. When it reaches the end of the trellis (i.e., the end of the coded message), it has found the most likely path from A0 to each of the 8 final states. It is then a simple matter to decide which of the resulting 8 paths is most likely.

Notation:
v is the received vector being decoded.
w(i→j) is the reference mark associated with the arc in the trellis from state i to state j. For example, w(A0→B1) is the reference mark associated with the arc from state A0 to B1.

P[A,…,H] is an array of eight paths. At any given time, P[i] is the most likely path that leads up to state i in the current column. For example, when processing column 3, P[C] is the most likely path that leads up to state C3.

Z[A,…,H] is an array of eight inner products. At any given time, Z[i] is the total inner product between the received vector, v, and the reference marks for path P[i].

Viterbi Algorithm:
At the beginning of the algorithm, P[A,…,H] are initialized to the null path, and Z[A,…,H] are initialized to 0.

In the first iteration, we compute the inner product between the received vector and the 16 reference marks associated with the arcs that go from states in column 0 to states in column 1.

To find the total inner product between the received vector and a path from a state in column 0 to a state in column 1, we add the product for the corresponding arc to the value in Z that corresponds to the column 0 state.

For example, the inner product for the path from A0 to B1 is just Z[A] + v·w(A0→B1). Similarly, the product for the path from E0 to B1 is Z[E] + v·w(E0→B1).

By comparing the inner products for the two paths that lead up to a state in column 1, we can decide which path to that state is most likely, and update P and Z accordingly.

For example, if Z[A] + v·w(A0→B1) > Z[E] + v·w(E0→B1), then we would update:
Z[B] ← Z[A] + v·w(A0→B1)
P[B] ← P[A] concatenated with the arc from A0 to B1.

This process is repeated L times, until we reach the end of the trellis. At that point, we identify the most likely path by finding the highest value in Z. The path can then be converted into a decoded message.

To ensure that the most likely path always starts at A0, we can modify this algorithm by initializing only Z[A] to 0 at the beginning.

The other elements of Z are initialized to an extreme negative value, indicating that the corresponding nodes have not yet been reached.
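Putting the last few slides together, here is a minimal runnable sketch of this decoder. It reuses the hypothetical taps and random reference marks from the earlier sketches (the arc labels and marks are assumptions, not the actual data of Figs. 4.8 and 4.9): only Z[A] starts at 0, the other entries start at an extreme negative value, and one surviving path is kept per state.

    # Sketch: correlation-based Viterbi decoding over the hypothetical 8-state
    # encoder used earlier. Z[i] holds the best total inner product to state i;
    # P[i] holds the surviving input-bit path that achieves it.
    import numpy as np

    rng = np.random.default_rng(2)
    N, L, S = 1024, 4, 8
    ref = rng.standard_normal((L, 16, N))       # one mark per symbol per location
    TAPS = (0b1011, 0b1101, 0b1111, 0b1110)     # same made-up taps as before

    def step(state, b):
        # One encoder transition: (next state, 4-bit arc symbol as an integer).
        window = (b << 3) | state
        sym = 0
        for t in TAPS:
            sym = (sym << 1) | (bin(window & t).count("1") & 1)
        return ((state << 1) | b) & 0b111, sym

    def modulate(bits):
        state, w = 0, np.zeros(N)
        for l, b in enumerate(bits):
            state, sym = step(state, b)
            w += ref[l, sym]
        return w

    def viterbi(v):
        Z = np.full(S, -np.inf)                 # "not yet reached" marker
        Z[0] = 0.0                              # the path must start at A0
        P = {0: []}
        for l in range(L):
            Z_new, P_new = np.full(S, -np.inf), {}
            for i in range(S):
                if Z[i] == -np.inf:
                    continue                    # state i is unreachable so far
                for b in (0, 1):                # the two arcs leaving state i
                    j, sym = step(i, b)
                    z = Z[i] + ref[l, sym] @ v  # extend by this arc's correlation
                    if z > Z_new[j]:
                        Z_new[j], P_new[j] = z, P[i] + [b]
            Z, P = Z_new, P_new
        return P[int(np.argmax(Z))]             # best of the 8 final states

    bits = [1, 0, 1, 0]
    received = modulate(bits) + 0.5 * rng.standard_normal(N)
    print(viterbi(received) == bits)            # True: the message is recovered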

[Fig. 4.8: the encoder's finite-state machine: eight states A–H, a light arc (input 0) and a dark arc (input 1) leaving each state, each arc labeled with its 4 output bits]

[Fig. 4.9: the trellis: rows of states A–H repeated across L + 1 columns; each code word is a path starting at A0]