1 Dr. Shahriar Bijani Shahed University March 2014
In the Name of God
Computer Networks
Chapter 3: The Data Link Layer (Part 2)
Dr. Shahriar Bijani, Shahed University, March 2014

2 References:
A. S. Tanenbaum and D. J. Wetherall, Computer Networks, 5th Edition, Pearson Education, and the book slides, 2011.
Chapter 6, Data Communications and Computer Networks: A Business User's Approach, 6th Edition.
B. A. Forouzan, Data Communications and Networking, 5th Edition, McGraw Hill, and the lecture slides, 2012.

3 Error Detection and Correction
Noise is always present: white noise (thermal or Gaussian noise) and impulse noise.

4 Error Detection and Correction
Two basic strategies to deal with errors: Include enough redundant information to enable the receiver to deduce the original data: Error correcting codes. Include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which error): Error detecting codes.

5 Error Detection & Correction Code
Hamming codes. Binary convolutional codes. Reed-Solomon codes. Low-Density Parity Check codes. An error-detecting code can detect only the types of errors for which it is designed; other types of errors may remain undetected. There is no way to detect every possible error.

6 Error Detection & Correction Code
All the codes presented in the previous slide add redundancy to the sent information. A frame consists of m data bits (the message) and r redundant bits (the check bits). Block code – the r check bits are computed solely as a function of the m data bits with which they are associated, as if the m data bits were looked up in a large table to find their corresponding r check bits. Systematic code – the m data bits are sent directly along with the check bits (rather than being encoded). Linear code – the r check bits are computed as a linear function of the m data bits; XOR (modulo-2 addition) is a popular choice.

7 Error Detection & Correction Code
n – total length of a block (i.e., n = m + r); such a code is called an (n, m) code. An n-bit unit containing data and check bits is an n-bit codeword. m/n – the code rate (roughly 1/2 for a noisy channel, close to 1 for a high-quality channel).

8 Error Detection & Correction Code
Example: XOR-ing a transmitted codeword with the received codeword marks the bits that are different; the number of ones in the XOR result is the number of differing bits. Hamming distance: the number of bit positions in which two codewords differ. If two codewords are a Hamming distance d apart, d single-bit errors are required to convert one into the other. Minimum Hamming distance: the smallest Hamming distance between all possible pairs of codewords in a set. Exercise: find the minimum Hamming distance of 00000, 01011, 10101, 11110 (a small sketch follows).
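To make the distance computation concrete, here is a minimal Python sketch (an illustrative addition, not part of the original slides) that computes the Hamming distance of two codewords and the minimum distance of the four codewords listed above.

```python
# Minimal sketch: Hamming distance via bitwise comparison, and the minimum
# distance of the codeword set listed above (00000, 01011, 10101, 11110).
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

codewords = ["00000", "01011", "10101", "11110"]
d_min = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
print(d_min)  # prints 3 for this set
```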

9 Error Detection & Correction Code
All 2^m possible data messages are legal, but due to the way the check bits are computed, not all 2^n possible codewords are used. Only a small fraction, 2^m/2^n = 1/2^r, of the possible codewords are legal. The error-detecting and error-correcting properties of a block code depend on this Hamming distance. To reliably detect d errors, we need a distance d+1 code. To correct d' errors, we need a distance 2d'+1 code.

10 Error Detection & Correction Code
Example: a code with 4 valid codewords and minimum distance 5 can detect 4 errors and correct 2 errors. The receiver assumes at most a single- or double-bit error and corrects to the nearest codeword; hence, if the original transmission actually suffered a triple error, the received word may be closer to the wrong codeword, and such an error can only be detected, not reliably corrected. Distance (d+1) = 5 => d = 4 errors can be detected. Distance (2d'+1) = 5 => d' = 2 errors can be corrected.

11 Error Detection & Correction Code
Error correction requires evaluating each candidate codeword, which may be a time-consuming search; careful code design can minimize this search time. In theory, if n = m + r, a lower limit on the number of check bits needed to correct single errors is (m + r + 1) ≤ 2^r. Imagine that we want to design a code with m message bits and r check bits that will allow all single errors to be corrected. Each of the 2^m legal messages has n illegal codewords at a distance of 1 from it (obtained by inverting each of the n bits in the codeword). Thus, each of the 2^m legal messages requires n + 1 bit patterns dedicated to it (n illegal + 1 legal). Since the total number of bit patterns is 2^n, we must have (n + 1)·2^m ≤ 2^n. Using n = m + r, this requirement becomes (m + r + 1) ≤ 2^r. E.g., m = 3 data bits => r = 3 check bits are needed to correct single errors (a small sketch follows).
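As a quick check of the bound above, the following sketch (an illustrative addition) finds the smallest r satisfying (m + r + 1) ≤ 2^r.

```python
# Minimal sketch: smallest r satisfying the bound (m + r + 1) <= 2**r,
# i.e. the fewest check bits that could correct any single-bit error.
def min_check_bits(m: int) -> int:
    r = 1
    while (m + r + 1) > 2 ** r:
        r += 1
    return r

print(min_check_bits(3))  # 3, matching the example above
print(min_check_bits(4))  # 3, so a (7,4) single-error-correcting code is possible
```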

12 1. The Hamming Code Create the codeword:
Check bits (parity bits): All bit positions that are powers of 2: (p1, p2, p4, p8, p16, …). The rest of the bit positions are filled with m data bits: (m3, m5, m6, m7, m9, m10, m11, m12, m13,…) Each parity bit calculates the parity for some of the bits in the code word. The position of the parity bit determines the sequence of bits that it alternately checks and skips.  Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1,3,5,7,9,11,13,15,...) Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2,3,6,7,10,11,14,15,...) Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc. (4,5,6,7, 12,13,14,15, 20,21,22,23,...) Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15, 24-31, 40-47,...) etc. Set a parity bit to 1 if the total number of ones in the positions it checks is odd. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
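The construction above can be written compactly in code. The sketch below is an illustrative addition; it assumes the bit-ordering convention used on these slides (positions written from the highest down to 1, with the first message bit placed in the highest data position).

```python
# Minimal sketch of the Hamming construction: data bits go to the non-power-of-two
# positions, and each parity bit p (at position 1, 2, 4, 8, ...) makes the group of
# positions whose index has that bit set contain an even number of ones.
def hamming_encode(msg: str) -> str:
    """Return the n-bit codeword, written from the highest position down to 1."""
    m = len(msg)
    r = 1
    while (m + r + 1) > 2 ** r:                   # number of check bits needed
        r += 1
    n = m + r
    code = [0] * (n + 1)                          # index = bit position, 1..n
    data_positions = [p for p in range(1, n + 1) if p & (p - 1) != 0]
    # assumption: first message bit goes to the highest data position,
    # matching the "7 6 5 4 3 2 1" layout used on these slides
    for bit, pos in zip(msg, reversed(data_positions)):
        code[pos] = int(bit)
    for i in range(r):
        p = 1 << i                                # parity positions 1, 2, 4, ...
        parity = 0
        for pos in range(1, n + 1):
            if pos != p and pos & p:
                parity ^= code[pos]
        code[p] = parity                          # even parity over the checked group
    return "".join(str(code[pos]) for pos in range(n, 0, -1))

print(hamming_encode("1101"))                     # -> 1100110 under this convention
```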

13 Hamming Code: Example m = 4 data bits (D) and r = 3 check bits, giving an n = 7-bit codeword: this would be called a (7,4) code. The 3 bits to be added are 3 EVEN parity bits (P), where the parity of each is computed on a different subset of the message bits, laid out as shown below:
Position: 7 6 5 4 3 2 1
Content:  D D D P D P P   (7-bit codeword, even parity)

14 Hamming Code: Example For example, the message 1101 would be sent as 1100110, since:
Position: 7 6 5 4 3 2 1
Content:  1 1 0 0 1 1 0   (7-bit codeword, even parity)
p1 (positions 1,3,5,7) = 0, p2 (positions 2,3,6,7) = 1, p4 (positions 4,5,6,7) = 0.

15 Hamming Code: Parity Circles
When these 7 bits are entered into the parity circles, it can be confirmed that the choice of these 3 parity bits ensures that the parity within each circle is EVEN:

16 Hamming Code: Example If an error occurs in any of the seven bits, it will affect different combinations of the three parity bits depending on the bit position. E.g., a single-bit error occurs:
transmitted message: 1100110  (bit no. 7 6 5 4 3 2 1)
received message:    1110110  (error in bit 5)
The above error (in bit 5) can be corrected by examining which of the three parity bits was affected by the bad bit:

17 Hamming Code: Error Detection
received message: 1110110  (positions 7 6 5 4 3 2 1, even parity)
p4 group (4,5,6,7): NOT OK!   p2 group (2,3,6,7): OK!   p1 group (1,3,5,7): NOT OK!
The bad parity bits, labeled 101, point directly to the bad bit, since 101 binary equals 5. Examination of the 'parity circles' confirms that any single-bit error can be corrected in this way (a small decoding sketch follows).
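The receiver-side check described above can be sketched as follows (an illustrative addition, using the same high-to-low bit ordering as the slides): recompute each parity group, and read the failing groups as a binary syndrome giving the position of the flipped bit.

```python
# Minimal sketch: recompute each parity group of a received Hamming codeword;
# the failing groups, read as a binary number, give the position of the bad bit.
def hamming_correct(received: str) -> str:
    n = len(received)
    code = [0] * (n + 1)
    for idx, bit in enumerate(received):          # received is written from position n down to 1
        code[n - idx] = int(bit)
    syndrome = 0
    p = 1
    while p <= n:
        parity = 0
        for pos in range(1, n + 1):
            if pos & p:
                parity ^= code[pos]               # group parity, including the parity bit itself
        if parity:                                # this parity group is NOT OK
            syndrome += p
        p <<= 1
    if syndrome:
        code[syndrome] ^= 1                       # flip the bit the syndrome points to
    return "".join(str(code[pos]) for pos in range(n, 0, -1))

print(hamming_correct("1110110"))                 # error in bit 5 -> corrected to 1100110
```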

18 Hamming Code: Error Detection
Example of an (11, 7) Hamming code correcting a single-bit error.

19 Hamming Code: Summary The value of the Hamming code:
Detection of 2 bit errors (assuming no correction is attempted); Correction of single bit errors; Cost of 3 bits added to a 4-bit message. The ability to correct single bit errors comes at a cost which is less than sending the entire message twice. (Recall that simply sending a message twice accomplishes no error correction.)

20 2. Error Detection & Correction: Convolutional Codes
Not a block code: there is no natural message size or encoding boundary as in a block code. The output depends on the current and previous input bits, so the encoder has memory. Constraint length of the code: the number of previous bits on which the output depends. Convolutional codes are widely deployed, for example as part of the GSM mobile phone system and in satellite communications. They treat data as a series of bits and compute a code over a continuous series; the code computed for a set of bits depends on the current and previous input bits.

21 Convolutional Encoders
A convolutional encoder is a linear system. A binary convolutional encoder can be represented as a shift register. The outputs of the encoder are modulo-2 sums of the values in certain cells of the register. The input to the encoder is either the unencoded sequence (for non-recursive codes) or the unencoded sequence added to the values of some register cells (for recursive codes). Convolutional codes can be systematic or non-systematic. Systematic codes, where the unencoded sequence is part of the output sequence, are almost always recursive; non-recursive codes are almost always non-systematic. Like any error-correcting code, a convolutional code works by adding some structured redundant information to the user's data and then correcting errors using this information.

22 Convolutional Encoders
The combination of register cells that forms one of the output streams (or that is added to the input stream for recursive codes) is defined by a polynomial. If m is the maximum degree of the polynomials forming a code, then K = m + 1 is the constraint length of the code. E.g., the polynomials of Figure 1 are g1(z) = 1 + z + z^2 + z^3 + z^6 and g2(z) = 1 + z^2 + z^3 + z^5 + z^6. Figure 1: A standard NASA convolutional encoder with polynomials (171, 133).

23 Convolutional Encoders: Example 1
g1(z) = 1 + z + z^2 + z^3 + z^6, g2(z) = 1 + z^2 + z^3 + z^5 + z^6. The code rate is the inverse of the number of output polynomials. For the sake of clarity, here we restrict ourselves to codes with rate R = 1/2; the decoding procedure for other codes is similar. Encoder polynomials are usually written in octal notation. For the above example: "1111001" = 171 and "1011011" = 133. The constraint length of this code is 7 (a small encoder sketch follows).
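A minimal sketch of this encoder (an illustrative addition, not part of the original slides): a K = 7 shift register whose two outputs are the modulo-2 sums of the taps given by the octal generators 171 and 133. The convention assumed here is that the newest input bit occupies the most significant register position.

```python
# Minimal sketch of a rate-1/2, K=7 non-recursive convolutional encoder with the
# NASA generator polynomials 171 and 133 (octal), as in Figure 1.
G1 = 0o171   # taps for g1(z) = 1 + z + z^2 + z^3 + z^6
G2 = 0o133   # taps for g2(z) = 1 + z^2 + z^3 + z^5 + z^6
K = 7        # constraint length

def conv_encode(bits):
    state = 0                                        # K-bit shift register
    out = []
    for b in bits:
        state = (state >> 1) | (b << (K - 1))        # newest bit enters at the MSB
        out.append(bin(state & G1).count("1") % 2)   # modulo-2 sum over the G1 taps
        out.append(bin(state & G2).count("1") % 2)   # modulo-2 sum over the G2 taps
    return out

print(conv_encode([1, 0, 1, 1]))   # two output bits per input bit (rate 1/2)
```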

24 Convolutional Encoder: Example 2
An example of a recursive convolutional encoder is shown in Figure 2. Figure 2: A recursive convolutional encoder.

25 Trellis Diagram A convolutional encoder is often seen as a finite state machine. Each state corresponds to some value of the encoder's register. Given the input bit value, the encoder can move from a certain state to one of two other states. A solid line = input 0, a dotted line = input 1 (the rightmost bit is the newest one). Any valid sequence from the encoder's output can be represented as a path on the trellis diagram. One of the possible paths is shown in red (as an example). Figure 3: A trellis diagram corresponding to the encoder in Figure 2.

26 Trellis Diagram Each state transition on the diagram corresponds to a pair of output bits. There are only 2 allowed transitions for every state (2 allowed pairs of output bits; the 2 other pairs are forbidden). If an error occurs, it is very likely that the receiver will get a set of forbidden pairs, which do not form a path on the trellis diagram. So the task of the decoder is to find the path on the trellis diagram that is the closest match to the received sequence. Let's define the free distance d_f as the minimal Hamming distance between two different allowed binary output sequences (the Hamming distance being the number of differing bits). The free distance is an important property of a convolutional code: it determines how many closely located errors the decoder is able to correct.

27 Viterbi Algorithm A convolutional code is decoded by finding the sequence of input bits that is most likely to have produced the observed sequence of output bits (which includes any errors). The Viterbi algorithm reconstructs this maximum-likelihood path for a given received sequence. The input sequence requiring the fewest errors at the end is the most likely message (a decoder sketch follows).
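The following sketch (an illustrative addition) implements hard-decision Viterbi decoding for the rate-1/2, K = 7 code from the encoder sketch above; the small encoder is repeated so the block runs on its own. The branch metric is the Hamming distance between the received and expected output pairs, and the surviving path with the smallest total metric is returned.

```python
# Hard-decision Viterbi decoding for the rate-1/2, K=7 code with generators 171/133.
G1, G2, K = 0o171, 0o133, 7

def outputs(reg):
    """Output bit pair produced by a full K-bit register value."""
    return (bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1)

def conv_encode(bits):
    reg, out = 0, []
    for b in bits:
        reg = (reg >> 1) | (b << (K - 1))
        out.extend(outputs(reg))
    return out

def viterbi_decode(received):
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)            # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(len(received) // 2):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s             # register = new bit + old state
                o0, o1 = outputs(reg)
                m = metric[s] + (o0 != r0) + (o1 != r1)   # Hamming branch metric
                ns = reg >> 1                        # next state drops the oldest bit
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=metric.__getitem__)]

msg = [1, 0, 1, 1] + [0] * (K - 1)                   # zero tail flushes the encoder
coded = conv_encode(msg)
coded[3] ^= 1                                        # introduce a single channel error
print(viterbi_decode(coded)[:4])                     # -> [1, 0, 1, 1]
```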

28 3. Error Detection & Correction: Reed-Solomon
Like Hamming codes, Reed-Solomon codes are linear block codes, and they are often systematic too. Unlike Hamming codes, which operate on individual bits, Reed-Solomon codes operate on m-bit symbols. They are based on the fact that every degree-n polynomial is uniquely determined by n + 1 points. Example: a line ax + b is determined by two points; extra points on the same line are redundant, which is helpful for error correction. Two data points define a line, and we send those two data points plus two check points on the same line. If one of the points is received in error, we can still recover the data points by fitting a line to the received points: 3 points will lie on the line and 1 error point will not. By finding the line, we have corrected the error (a toy sketch follows).
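As a toy illustration of this idea (not a real Reed-Solomon decoder, which works over finite fields), the sketch below encodes two data values as points on a line, adds two check points on the same line, and recovers the line by majority vote even when one received point is wrong.

```python
# Toy illustration: a degree-1 polynomial a*x + b is fixed by 2 points; sending
# 4 points on the line lets us recover (a, b) even if one received point is
# wrong, by majority vote over the lines defined by each pair of points.
from itertools import combinations
from collections import Counter
from fractions import Fraction

def recover_line(points):
    """points: list of (x, y) with at most one corrupted y; return (slope, intercept)."""
    lines = Counter()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a = Fraction(y2 - y1, x2 - x1)
        b = Fraction(y1) - a * x1
        lines[(a, b)] += 1
    return lines.most_common(1)[0][0]          # the line most pairs agree on

a, b = 3, 5                                    # line through the data points (0,5), (1,8)
sent = [(x, a * x + b) for x in range(4)]      # 2 data points + 2 check points
recv = [(0, 5), (1, 9), (2, 11), (3, 14)]      # point at x=1 received in error
print(recover_line(recv))                      # -> (Fraction(3, 1), Fraction(5, 1))
```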

29 Error-Detecting Codes
Linear, systematic block codes: parity, checksums, and cyclic redundancy checks (CRCs).

30 1. Parity Bits Idea: add extra bits to keep the number of 1s even
Example: 7-bit ASCII characters + 1 parity bit. Detects all 1-bit errors (and, more generally, any odd number of bit errors), but cannot detect 2-bit errors and is not reliable against bursty errors (a small sketch follows).
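A minimal sketch of the scheme above (an illustrative addition): append an even-parity bit to a 7-bit ASCII character.

```python
# Minimal sketch: append an even-parity bit to a 7-bit ASCII character.
def add_parity(ch: str) -> str:
    bits = format(ord(ch), "07b")
    parity = bits.count("1") % 2             # 1 if the number of ones is odd
    return bits + str(parity)                # total number of ones becomes even

print(add_parity("A"))   # 'A' = 1000001 -> 10000010
```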

31 Two Dimensional Parity
A parity bit for each row, a parity bit for each column, and a parity bit for the parity byte. Can detect all 1-, 2-, and 3-bit errors and some 4-bit errors, at 14% overhead (a small sketch follows).
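A minimal sketch of two-dimensional parity (an illustrative addition): an even-parity bit is appended to each row, then a parity row is computed over the columns, which also covers the row-parity column (the "parity bit for the parity byte").

```python
# Minimal sketch of two-dimensional parity: one parity bit per row, then a
# parity row over all columns (including the row-parity column).
def two_d_parity(rows):
    """rows: list of equal-length bit strings; returns rows plus parity bits."""
    with_row_parity = [r + str(r.count("1") % 2) for r in rows]
    cols = zip(*with_row_parity)
    parity_row = "".join(str("".join(c).count("1") % 2) for c in cols)
    return with_row_parity + [parity_row]

for line in two_d_parity(["1010001", "1100110", "0111001"]):
    print(line)
```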

32 2. Checksums Idea: Use ones-complement arithmetic
Add up the data as 16-bit words using ones-complement arithmetic and include the resulting checksum in the frame (frame layout: START | Data | Checksum | END). Lower overhead than parity: 16 bits per frame. But not resilient to all errors. Why? (E.g., reordering 16-bit words, or two errors that offset each other, leave the sum unchanged.) Used in UDP, TCP, and IP (a small sketch follows).
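A minimal sketch of the ones-complement (Internet) checksum (an illustrative addition; the real UDP/TCP checksums also cover a pseudo-header): sum the data as 16-bit words with end-around carry and complement the result. A receiver that folds the received checksum into the same sum gets 0 for an undamaged frame.

```python
# Minimal sketch of the Internet (ones-complement) checksum used by IP/UDP/TCP.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry (ones complement)
    return ~total & 0xFFFF

frame = b"DATA"                                    # even-length payload for simplicity
csum = internet_checksum(frame)
print(hex(internet_checksum(frame + csum.to_bytes(2, "big"))))  # -> 0x0 when intact
```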

33 3. Cyclic Redundancy Check (CRC)
Uses field theory (polynomial arithmetic over GF(2)) to compute a semi-unique value for a given message. In a cyclic code, rotating a codeword always results in another codeword. Much better performance than the previous approaches: fixed-size overhead per frame (usually 32 bits), quick to implement in hardware, and only a 1 in 2^32 chance of missing an error with a 32-bit CRC (a small sketch follows).
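As an illustration of a 32-bit CRC in software (an illustrative addition, using the common reflected CRC-32 polynomial 0xEDB88320, not necessarily the exact variant pictured in the original slides):

```python
# Bitwise (reflected) CRC-32 with polynomial 0xEDB88320.
def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1))   # conditionally XOR the polynomial
    return crc ^ 0xFFFFFFFF

print(hex(crc32(b"123456789")))   # 0xcbf43926, the standard CRC-32 check value
```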

34 CRC Encoder/Decoder

35 Cyclic Redundancy Check (CRC)
Example calculation of the CRC
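The worked figure from the original slide is not reproduced here; as a stand-in, the sketch below (an illustrative addition with an arbitrary generator and message) performs the textbook long-division calculation: append r zero bits to the message, divide by the generator using XOR, and use the r-bit remainder as the CRC. Dividing the transmitted frame (message plus CRC) by the same generator leaves a zero remainder, which is how the receiver checks the frame.

```python
# Minimal sketch of the long-division CRC calculation with XOR (modulo-2) arithmetic.
def crc_remainder(bits: str, generator: str) -> str:
    r = len(generator) - 1
    work = list(bits + "0" * r)                  # message shifted left by r bits
    for i in range(len(bits)):
        if work[i] == "1":                       # XOR the generator in at this offset
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-r:])                    # remainder = last r bits

message = "1101011111"                           # illustrative message
generator = "10011"                              # illustrative generator, x^4 + x + 1
crc = crc_remainder(message, generator)
frame = message + crc
print(crc, crc_remainder(frame, generator))      # the sent frame leaves remainder 0000
```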

