
1 In the Name of God. Computer Networks, Chapter 3: The Data Link Layer (Part 2). Dr. Shahriar Bijani, Shahed University, March 2014

2 References:
A. S. Tanenbaum and D. J. Wetherall, Computer Networks, 5th Edition, Pearson Education, and the book slides, 2011.
Chapter 6 of Data Communications and Computer Networks: A Business User's Approach, 6th Edition.
B. A. Forouzan, Data Communications and Networking, 5th Edition, McGraw Hill, and the lecture slides, 2012.

3 Error Detection and Correction
Noise is always present:
White noise (thermal or Gaussian noise)
Impulse noise

4 Error Detection and Correction
Two basic strategies to deal with errors:
1. Include enough redundant information to enable the receiver to deduce the original data: error-correcting codes.
2. Include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which error): error-detecting codes.

5 Error Detection & Correction Codes
1. Hamming codes.
2. Binary convolutional codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes.

6 Error Detection & Correction Codes
All the codes presented in the previous slide add redundancy to the sent information. A frame consists of m data bits (message) and r redundant bits (check).
Block code: the r check bits are computed solely as a function of the m data bits with which they are associated, as if the m data bits were looked up in a large table to find their corresponding r check bits.
Systematic code: the m data bits are sent directly along with the check bits (rather than being encoded).
Linear code: the r check bits are computed as a linear function of the m data bits. XOR (modulo-2 addition) is a popular choice.

7 n: total length of a block (i.e., n = m + r).
(n, m) code: an n-bit codeword containing m data bits.
m/n: code rate (around 1/2 for noisy channels and close to 1 for high-quality channels).

8 Error Detection & Correction Codes
Example:
Transmitted: 10001001
Received:    10110001
XOR:         00111000
The XOR operation shows which bits are different.
Hamming distance: the number of bit positions in which two codewords differ (3 in this example). If two codewords are a Hamming distance d apart, it takes d single-bit errors to convert one into the other.
Minimum Hamming distance: the smallest Hamming distance between all possible pairs of codewords in the code.
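A minimal sketch in Python (not part of the original slides) of the XOR-style distance computation above:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length codewords differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

transmitted = "10001001"
received = "10110001"
print(hamming_distance(transmitted, received))  # 3 (XOR pattern 00111000)
```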

9 Error Detection & Correction Codes
All 2^m possible data messages are legal, but due to the way the check bits are computed, not all 2^n possible codewords are used. Only a small fraction, 2^m / 2^n = 1/2^r, of the possible codewords are legal.
The error-detecting and error-correcting properties of a block code depend on its Hamming distance:
To reliably detect d errors, we need a distance d+1 code.
To correct d' errors, we need a distance 2d'+1 code.

10 Error Detection & Correction Codes
Example: 4 valid codewords:
0000000000
0000011111
1111100000
1111111111
The minimum distance is 5, so the code can correct 2 errors and detect 4 errors.
If 0000000111 arrives, it is at most a double-bit error away from 0000011111, so the receiver must assume the original transmission was 0000011111.
But if 0000000000 suffered a triple error and 0000000111 was received, the error can only be detected, not corrected correctly.
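A small sketch (my own Python helper, not from the slides) that checks the minimum distance of these four codewords:

```python
from itertools import combinations

codewords = ["0000000000", "0000011111", "1111100000", "1111111111"]

def distance(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

d_min = min(distance(a, b) for a, b in combinations(codewords, 2))
print(d_min)                               # 5
print("detects up to", d_min - 1)          # 4 errors
print("corrects up to", (d_min - 1) // 2)  # 2 errors
```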

11 Error Detection & Correction Codes
Error correction requires evaluating each candidate codeword, which may be a time-consuming search. Through careful design this search time can be minimized.
In theory, if n = m + r, the lower limit on the number of check bits needed to correct single errors is (m + r + 1) ≤ 2^r.
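As a quick illustration (a hedged sketch; the function name is mine, not the slides'), the smallest r satisfying this bound can be found by trying increasing values:

```python
def min_check_bits(m: int) -> int:
    """Smallest r with (m + r + 1) <= 2**r, the bound for single-error correction."""
    r = 1
    while (m + r + 1) > 2 ** r:
        r += 1
    return r

print(min_check_bits(4))   # 3 check bits -> the (7,4) Hamming code
print(min_check_bits(7))   # 4 check bits -> an (11,7) Hamming code
```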

12 1. The Hamming Code
Creating the codeword:
1. Check bits (parity bits) occupy all bit positions that are powers of 2: p1, p2, p4, p8, p16, ...
2. The rest of the bit positions are filled with the m data bits: m3, m5, m6, m7, m9, m10, m11, m12, m13, ...
3. Each parity bit covers some of the bits in the codeword. The position of the parity bit determines the sequence of bits that it alternately checks and skips:
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1, 3, 5, 7, 9, 11, 13, 15, ...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2, 3, 6, 7, 10, 11, 14, 15, ...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc. (4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ...)
Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15, 24-31, 40-47, ...), and so on.
4. Set a parity bit to 1 if the total number of ones in the positions it checks is odd; set it to 0 if that total is even.

13 Hamming Code: Example
m = 4 data bits (D) and r = 3 check bits, giving an n = 7-bit codeword: this is called a (7,4) code. The 3 bits to be added are EVEN parity bits (P), where each parity is computed over a different subset of the codeword bits, as shown below:
Bit position:     7 6 5 4 3 2 1
Codeword:         D D D P D P P
P1 (even parity): D - D - D - P
P2 (even parity): D D - - D P -
P4 (even parity): D D D P - - -

14 Hamming Code: Example
For example, the message 1101 would be sent as 1100110, since:
Bit position:     7 6 5 4 3 2 1
Codeword:         1 1 0 0 1 1 0
P1 (even parity): 1 - 0 - 1 - 0
P2 (even parity): 1 1 - - 1 1 -
P4 (even parity): 1 1 0 0 - - -
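The construction can be reproduced with a short Python sketch (an illustration following the slides' bit layout; the function name is mine):

```python
def hamming74_encode(msg: str) -> str:
    """Encode a 4-bit message into a 7-bit Hamming codeword with even parity.

    The message bits fill positions 7, 6, 5 and 3 (left to right) and the
    parity bits go in positions 4, 2 and 1, matching the slides' layout.
    """
    bit = {7: int(msg[0]), 6: int(msg[1]), 5: int(msg[2]), 3: int(msg[3])}
    bit[1] = (bit[3] + bit[5] + bit[7]) % 2   # P1 covers positions 1, 3, 5, 7
    bit[2] = (bit[3] + bit[6] + bit[7]) % 2   # P2 covers positions 2, 3, 6, 7
    bit[4] = (bit[5] + bit[6] + bit[7]) % 2   # P4 covers positions 4, 5, 6, 7
    return "".join(str(bit[p]) for p in range(7, 0, -1))  # emit bits 7 down to 1

print(hamming74_encode("1101"))  # 1100110, as on the slide
```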

15 Hamming Code: Parity Circles
When these 7 bits are entered into the parity circles, it can be confirmed that the choice of these 3 parity bits ensures that the parity within each circle is EVEN.

16 Hamming Code: Example
If an error occurs in any of the seven bits, it will affect different combinations of the three parity bits, depending on the bit position. For example, suppose a single-bit error occurs:
Bit position:        7 6 5 4 3 2 1
Transmitted message: 1 1 0 0 1 1 0
Received message:    1 1 1 0 1 1 0
The above error (in bit 5) can be corrected by examining which of the three parity bits was affected by the bad bit.

17 Hamming Code: Error Detection
Received message (bit positions 7 6 5 4 3 2 1): 1 1 1 0 1 1 0
P1 check (even parity over 1, 3, 5, 7): 1 - 1 - 1 - 0  → NOT even → 1
P2 check (even parity over 2, 3, 6, 7): 1 1 - - 1 1 -  → OK → 0
P4 check (even parity over 4, 5, 6, 7): 1 1 1 0 - - -  → NOT even → 1
The bad parity bits, read as 101, point directly to the bad bit, since 101 binary equals 5. Examination of the parity circles confirms that any single-bit error can be corrected in this way.
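A small Python sketch of that check (an illustration; the helper name and bit-string convention are mine):

```python
def hamming74_syndrome(code: str) -> int:
    """Return the position (1..7) of a single bad bit, or 0 if all checks pass.

    `code` is the 7-bit codeword written from bit position 7 down to 1,
    with even parity, as in the slides.
    """
    bit = {pos: int(code[7 - pos]) for pos in range(1, 8)}
    s1 = (bit[1] + bit[3] + bit[5] + bit[7]) % 2   # P1 circle
    s2 = (bit[2] + bit[3] + bit[6] + bit[7]) % 2   # P2 circle
    s4 = (bit[4] + bit[5] + bit[6] + bit[7]) % 2   # P4 circle
    return s4 * 4 + s2 * 2 + s1                    # 0 means no error detected

print(hamming74_syndrome("1110110"))  # 5: bit 5 is in error
print(hamming74_syndrome("1100110"))  # 0: the codeword is consistent
```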

18 Hamming Code: Error Detection
Example of an (11, 7) Hamming code correcting a single-bit error (figure).

19 Hamming Code: Summary
The value of the Hamming code:
1. Detection of 2-bit errors (assuming no correction is attempted);
2. Correction of single-bit errors;
3. Cost of 3 bits added to a 4-bit message.
The ability to correct single-bit errors comes at a cost which is less than sending the entire message twice. (Recall that simply sending a message twice accomplishes no error correction.)

20 2. Error Detection & Correction: Convolutional Codes
Not a block code: there is no natural message size or encoding boundary as in a block code.
The output depends on the current and previous input bits, so the encoder has memory.
Constraint length of the code: the number of previous bits on which the output depends.
Convolutional codes are deployed as part of the GSM mobile phone system, satellite communications, and 802.11.

21 Convolutional Encoders
A convolutional encoder is a linear system.
A binary convolutional encoder can be represented as a shift register. The outputs of the encoder are modulo-2 sums of the values in certain cells of the register.
The input to the encoder is either the unencoded sequence (for non-recursive codes) or the unencoded sequence added to the values of some of the register's cells (for recursive codes).
Convolutional codes can be systematic or non-systematic. Systematic codes, where the unencoded sequence is part of the output sequence, are almost always recursive; non-recursive codes are almost always non-systematic.

22 Convolutional Encoders
The combination of register cells that forms one of the output streams (or that is added to the input stream for recursive codes) is defined by a polynomial. If m is the maximum degree of the polynomials forming a code, then K = m + 1 is the constraint length of the code.
E.g. the polynomials of Figure 1:
g1(z) = 1 + z + z^2 + z^3 + z^6
g2(z) = 1 + z^2 + z^3 + z^5 + z^6
Figure 1: A standard NASA convolutional encoder with polynomials (171, 133).

23 Convolutional Encoders: Example 1
Example:
g1(z) = 1 + z + z^2 + z^3 + z^6
g2(z) = 1 + z^2 + z^3 + z^5 + z^6
The code rate is the inverse of the number of output polynomials. For the sake of clarity, we restrict ourselves here to codes with rate R = 1/2; the decoding procedure for other codes is similar.
Encoder polynomials are usually written in octal notation. For the above example: "1111001" = 171 and "1011011" = 133. The constraint length of this code is 7.
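A minimal non-recursive encoder sketch in Python (my own illustration of the slide's polynomials; it assumes the newest input bit sits in the most significant register cell, so the leftmost bit of each octal polynomial multiplies the current input):

```python
def conv_encode(bits, g1=0o171, g2=0o133, k=7):
    """Rate-1/2 non-recursive convolutional encoder (a sketch).

    g1 and g2 are the generator polynomials in the octal notation of the
    slide (171 and 133); k is the constraint length.  The two output
    streams are interleaved: one pair of output bits per input bit.
    """
    state, out = 0, []
    for b in bits:
        state = (state >> 1) | (b << (k - 1))        # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # modulo-2 sum for g1
        out.append(bin(state & g2).count("1") % 2)   # modulo-2 sum for g2
    return out

# The impulse response reproduces the polynomial taps 1111001 and 1011011:
print(conv_encode([1, 0, 0, 0, 0, 0, 0]))
# [1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
```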

24 Convolutional Encoder: Example 2
An example of a recursive convolutional encoder is shown in Figure 2.
Figure 2: A recursive convolutional encoder.

25 Trellis Diagram
A convolutional encoder is often viewed as a finite state machine. Each state corresponds to some value of the encoder's register. Given the input bit value, from a certain state the encoder can move to two other states.
A solid line denotes input 0, a dotted line denotes input 1 (the rightmost bit of the state is the newest one).
Any valid sequence from the encoder's output can be represented as a path on the trellis diagram. One of the possible paths is shown in red as an example.
Figure 3: A trellis diagram corresponding to the encoder of Figure 2.

26 Trellis Diagram
Each state transition on the diagram corresponds to a pair of output bits. There are only 2 allowed transitions for every state, so 2 allowed pairs of output bits; the 2 other pairs are forbidden.
If an error occurs, it is very likely that the receiver will get a set of forbidden pairs, which do not form a path on the trellis diagram.
So the task of the decoder is to find the path on the trellis diagram that is the closest match to the received sequence.

27 Viterbi Algorithm
A convolutional code is decoded by finding the sequence of input bits that is most likely to have produced the observed sequence of output bits (which includes any errors).
The Viterbi algorithm reconstructs this maximum-likelihood path through the trellis for a given received sequence: the input sequence requiring the fewest errors to explain the received bits is the most likely message.
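A hard-decision sketch of the idea in Python (my own illustration on a small rate-1/2 code with constraint length 3 and octal polynomials 7 and 5, not the NASA code above):

```python
K, G1, G2 = 3, 0o7, 0o5            # small rate-1/2 code, constraint length 3

def encode(bits):
    state, out = 0, []             # state = the K-1 previous input bits
    for b in bits:
        full = (b << (K - 1)) | state                # full register, newest bit in the MSB
        out += [bin(full & G1).count("1") % 2, bin(full & G2).count("1") % 2]
        state = full >> 1
    return out

def viterbi(received):
    """Find the input sequence whose encoding differs from `received`
    in the fewest bit positions (hard-decision maximum likelihood)."""
    best = {0: (0, [])}            # state -> (accumulated errors, decoded bits)
    for i in range(0, len(received), 2):
        pair, nxt_best = received[i:i + 2], {}
        for state, (errs, bits) in best.items():
            for b in (0, 1):       # try both possible input bits from this state
                full = (b << (K - 1)) | state
                out = [bin(full & G1).count("1") % 2, bin(full & G2).count("1") % 2]
                e = errs + sum(x != y for x, y in zip(out, pair))
                nxt = full >> 1
                if nxt not in nxt_best or e < nxt_best[nxt][0]:
                    nxt_best[nxt] = (e, bits + [b])   # keep the survivor path
        best = nxt_best
    return min(best.values())[1]   # survivor with the fewest errors overall

msg = [1, 0, 1, 1, 0, 0, 0]        # the trailing zeros flush the encoder
coded = encode(msg)
coded[3] ^= 1                      # flip one transmitted bit
print(viterbi(coded) == msg)       # True: the single error is corrected
```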

28 3. Error Detection & Correction: Reed-Solomon
Like Hamming codes, Reed-Solomon codes are linear block codes, and they are often systematic too. Unlike Hamming codes, which operate on individual bits, Reed-Solomon codes operate on m-bit symbols.
They are based on the fact that every polynomial of degree n is uniquely determined by n + 1 points. Example: a line ax + b is determined by two points. Extra points on the same line are redundant, which is helpful for error correction.
2 data points represent a line, and we send those two data points plus two check points on the same line. If one of the points is received in error, we can still recover the data points by fitting a line to the received points: 3 points will lie on the line, and the 1 error point will not. By finding the line we have corrected the error.
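A toy sketch of that line-fitting idea in Python (purely illustrative; real Reed-Solomon codes work over finite fields, not real numbers):

```python
from itertools import combinations

def recover_line(points):
    """Four points were sent on one line (2 data + 2 check); at most one
    arrives corrupted.  Find the line that at least three points agree on."""
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a = (y2 - y1) / (x2 - x1)               # candidate slope
        b = y1 - a * x1                         # candidate intercept
        if sum(abs(y - (a * x + b)) < 1e-9 for x, y in points) >= 3:
            return a, b
    raise ValueError("more than one point in error")

# y = 2x + 1 sampled at x = 0..3, with the point at x = 2 received in error
received = [(0, 1), (1, 3), (2, 9), (3, 7)]
print(recover_line(received))   # (2.0, 1.0): the line, and hence the data, is recovered
```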

29 Error-Detecting Codes
Linear, systematic block codes:
1. Parity.
2. Checksums.
3. Cyclic Redundancy Checks (CRCs).

30 1. Parity Bits
Idea: add an extra bit to keep the number of 1s even.
Detects 1-bit errors and some 2-bit errors; not reliable against bursty errors.
Example: 7-bit ASCII characters + 1 parity bit, e.g. 0101001 with parity bit 1 becomes 01010011.
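A short sketch of even parity in Python (the helper names are mine, for illustration):

```python
def add_even_parity(char7: str) -> str:
    """Append one parity bit so the total number of 1s is even."""
    return char7 + str(char7.count("1") % 2)

def parity_ok(word8: str) -> bool:
    return word8.count("1") % 2 == 0

word = add_even_parity("0101001")
print(word)                   # 01010011
print(parity_ok(word))        # True
print(parity_ok("01010010"))  # False: a single-bit error is detected
```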

31 Two-Dimensional Parity
Can detect all 1-, 2-, and 3-bit errors, and some 4-bit errors, at about 14% overhead.
(Figure: a block of 7-bit words such as 0101001, 1101001, 1011110, 0001110, 0110100, 1011111, each followed by a parity bit for its row, plus a final parity byte containing a parity bit for each column and a parity bit for the parity byte itself.)
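A sketch of the construction in Python (my own layout, assuming even parity as above):

```python
def two_d_parity(words):
    """Append a row parity bit to each 7-bit word, then add a parity byte
    whose bits are the column parities (the last one covers the row-parity
    column, i.e. it is the parity bit for the parity byte)."""
    rows = [w + str(w.count("1") % 2) for w in words]            # row parity
    cols = ["".join(r[i] for r in rows) for i in range(8)]
    parity_byte = "".join(str(c.count("1") % 2) for c in cols)   # column parity
    return rows + [parity_byte]

for row in two_d_parity(["0101001", "1101001", "1011110", "0001110"]):
    print(row)   # four 8-bit rows followed by the parity byte
```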

32 2. Checksums
Idea: add up the bytes in the data and include the sum in the frame, using ones-complement arithmetic.
Lower overhead than parity: 16 bits per frame. But not resilient to errors. Why?
Used in UDP, TCP, and IP.
(Figure: a frame laid out as START, data, checksum, END, with a worked binary addition.)
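A sketch of a 16-bit ones-complement checksum in Python (of the general kind used by UDP, TCP and IP; real stacks also cover pseudo-headers and other fields, so this is only an illustration):

```python
def ones_complement_checksum(data: bytes) -> int:
    """16-bit ones-complement sum of the data, complemented at the end."""
    if len(data) % 2:
        data += b"\x00"                            # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

frame = b"DATA"
check = ones_complement_checksum(frame)
print(hex(check))
# Verification: a frame with its checksum appended sums to all ones,
# so checksumming it again yields zero.
print(ones_complement_checksum(frame + bytes([check >> 8, check & 0xFF])))  # 0
```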

33 3. Cyclic Redundancy Check (CRC)
Uses field theory to compute a semi-unique value for a given message. In a cyclic code, rotating a codeword always results in another codeword.
Much better performance than the previous approaches:
Fixed-size overhead per frame (usually 32 bits).
Quick to implement in hardware.
Only a 1 in 2^32 chance of missing an error with a 32-bit CRC.

34 CRC Encoder/Decoder (figure).

35 Cyclic Redundancy Check (CRC)
Example calculation of the CRC (figure).
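A sketch of the underlying modulo-2 long division in Python (the bit strings below are illustrative values I chose, not necessarily the numbers in the slide's figure):

```python
def crc_remainder(message: str, generator: str) -> str:
    """CRC by modulo-2 long division over bit strings.

    The remainder has one bit fewer than the generator; the sender appends
    it to the message, and the receiver's division then comes out to zero."""
    r = len(generator) - 1
    bits = list(message + "0" * r)                 # append r zero bits
    for i in range(len(message)):
        if bits[i] == "1":                         # XOR the generator in here
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-r:])

msg, gen = "1101011111", "10011"                   # generator x^4 + x + 1
crc = crc_remainder(msg, gen)
print(crc)                                         # 0010
print(crc_remainder(msg + crc, gen))               # 0000: the receiver accepts the frame
```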

