Error-Correcting codes


1 Error-Correcting codes
Chapter 3 Error-Correcting codes

2 Rectangular Codes
Rule: take the intersection of the row and column parity errors – single error correction (plus double error detection). Arrange the message bits in an (m − 1) × (n − 1) block, then append m − 1 row parity bits, n − 1 column parity bits, and a corner bit (all using even parity checking).
[Figure: the (m − 1) × (n − 1) message block with its row parity bits, column parity bits, and corner bit; the x marks show a failing row check and a failing column check intersecting at the erroneous bit.]
Explain in class: consistency of the corner bit; logical sum over the rectangle; associativity and commutativity of exclusive or.
If n = m (square): (n − 1)² message bits + (2n − 1) check bits = n² total.
Example: n = 11 gives 100 message bits and 21 check bits, i.e. 21% overhead.
3.2
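As a concrete illustration, here is a minimal Python sketch of the rectangular scheme (the function names are mine, not from the slides): each row and column gets an even-parity bit, and a single flipped bit shows up at the intersection of the failing row check and the failing column check.

```python
def encode_rect(block):
    """Append a row parity bit to each row, then a row of column/corner parity bits."""
    rows = [r + [sum(r) % 2] for r in block]                 # row parity bits
    col_checks = [sum(col) % 2 for col in zip(*rows)]        # column parities + corner bit
    return rows + [col_checks]

def locate_error(array):
    """Return (row, col) of a single flipped bit, or None if all checks pass."""
    bad_rows = [i for i, r in enumerate(array) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*array)) if sum(c) % 2]
    if bad_rows and bad_cols:
        return bad_rows[0], bad_cols[0]                      # intersection = erroneous bit
    return None

# Usage: a 2 x 3 message block, one bit flipped in transit.
msg = [[1, 0, 1],
       [0, 1, 1]]
arr = encode_rect(msg)
arr[1][2] ^= 1                                               # flip one bit
print(locate_error(arr))                                     # -> (1, 2)
```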

3 Triangular Codes
Rule: take the intersection of the parity errors – single error correction (no double error detection). A single parity error implies the error was in the diagonal check bit itself. Each diagonal check bit is chosen so that its row and column together sum to even parity (see book).
[Figure: triangular array of message bits with the parity check bits along the diagonal; the x marks show two failing checks intersecting at the erroneous bit.]
Superior for low n, though asymptotically equivalent to the rectangular code.
n(n − 1)/2 message bits + n check bits = n(n + 1)/2 total.
3.3
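The following is a hedged Python sketch of one reading of this construction (the exact layout is my assumption, not spelled out in the slide): message bits occupy the strict upper triangle of an n × n array, and the check bit at diagonal position i makes row i and column i of the triangle together have even parity.

```python
def encode_tri(msg, n):
    """msg: dict {(r, c): bit} with r < c; adds the n diagonal check bits."""
    word = dict(msg)
    for i in range(n):
        covered = [b for (r, c), b in msg.items() if r == i or c == i]
        word[(i, i)] = sum(covered) % 2          # even parity over row i and column i
    return word

def locate_error_tri(word, n):
    """Two failing checks intersect at a message bit; one failing check is a diagonal bit."""
    failing = [i for i in range(n)
               if sum(b for (r, c), b in word.items() if r == i or c == i) % 2]
    if len(failing) == 2:
        return (failing[0], failing[1])          # erroneous message bit
    if len(failing) == 1:
        return (failing[0], failing[0])          # erroneous diagonal check bit
    return None

# Usage: 6 = n(n-1)/2 message bits plus n = 4 checks, one bit flipped.
n = 4
msg = {(r, c): (r + c) % 2 for r in range(n) for c in range(r + 1, n)}
word = encode_tri(msg, n)
word[(1, 3)] ^= 1
print(locate_error_tri(word, n))                 # -> (1, 3)
```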

4 Hyper-dimensional Codes
Arrange the bits in an n × n × n cube, and take parity checks not along every line but over every plane. So we get three edges of check bits, which intersect in a single point. Total: n³ bits = (3n − 2) check bits + message bits; excess bit redundancy ~ 3 / n².
Four dimensions: check over the hyperplanes (cubes); excess bit redundancy ~ 4 / n³.
[Figure: the arrangement of bits and check bits for k = 1, 2, 3 dimensions.]
3.3 cont.
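A quick numeric check of the counting above (illustrative only):

```python
# An n x n x n cube has 3n - 2 check bits (three edges meeting in one corner),
# so the excess-bit redundancy is roughly 3 / n^2.
for n in (5, 10, 20):
    total = n ** 3
    checks = 3 * n - 2
    print(f"n={n:2d}: {checks} check bits out of {total} "
          f"(redundancy {checks / total:.4f} ~ 3/n^2 = {3 / n**2:.4f})")
```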

5 Hypercubes
More generally, consider a k-dimensional n-cube. There are n hyperplanes of dimension k − 1 in each of the k directions (each is parity checked), for a total of n × k check bits. The overhead is hence ~ nk / nᵏ = k / nᵏ⁻¹. Keeping the total m = nᵏ fixed, the overhead is ~ k·ᵏ√m / m. This has a minimum when its (natural) logarithm does, i.e. (up to a constant) when ln k + (ln m) / k does. Taking the derivative: 1/k − (ln m)/k² = 0, so k = ln m is the optimal dimension, making n = ᵏ√m = e. This is almost the highest-dimensional arrangement possible, since n must be at least two. So try using a k-dimensional 2-cube, with 2ᵏ bits in total: one parity check in each direction means each of the k different ways of splitting the cube in half has even parity. (This is the same problem as finding the most efficient radix, where the answer is also base e.)
3.3 cont.
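A small numeric check of the optimum, assuming the overhead ~ k·ᵏ√m / m derived above:

```python
import math

# For fixed m = n^k total bits, the overhead k * m**(1/k) / m should be smallest
# near k = ln(m), i.e. n = m**(1/k) close to e.
m = 10 ** 6
overheads = {k: k * m ** (1 / k) / m for k in range(2, 31)}
best_k = min(overheads, key=overheads.get)
print(f"ln(m) = {math.log(m):.2f}, best integer k = {best_k}, "
      f"n = m**(1/k) = {m ** (1 / best_k):.2f} (close to e = {math.e:.2f})")
```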

6 Hamming Code
Design a code so that the sequence of parity-error bits (called the syndrome) addresses, i.e. points to, the erroneous bit; all zeros means no error. If we number the parity checks p_m, …, p_1, where p_i = 1 for an error and 0 for no error, then they can point to 2ᵐ − 1 erroneous bit positions. If we use a total of n bits, we must have n ≤ 2ᵐ − 1 to correct an error in any possible location. Clearly, parity check p_j must sum over all positions i in the string whose j-th binary digit is one. In order for x_1, …, x_n to be a valid code word, it must include both the message bits and the proper check bits. The most convenient choice is to locate the check bits at positions 2^(j−1) in the word x_1, …, x_n, according to the formula
x_(2^(j−1)) = ∑ { x_i : i = (b_m … b_1)_two and b_j = 1 } mod 2,
where the summation does not include x_(2^(j−1)) itself.
3.4
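Here is a minimal Python sketch of this construction (the function names are mine, not from the slides); positions are 1-based as on the slide, and n = 7, m = 3 gives the classic (7,4) code used on the next slide.

```python
def hamming_encode(message_bits, n=7, m=3):
    """Place message bits in the non-power-of-two positions, then fill the check bits."""
    word = [0] * (n + 1)                          # word[1..n]; word[0] unused
    data_positions = [i for i in range(1, n + 1) if (i & (i - 1)) != 0]
    for pos, bit in zip(data_positions, message_bits):
        word[pos] = bit
    for j in range(1, m + 1):                     # check bit sits at position 2**(j-1)
        check_pos = 1 << (j - 1)
        word[check_pos] = sum(word[i] for i in range(1, n + 1)
                              if i != check_pos and i & check_pos) % 2
    return word[1:]

def hamming_syndrome(received, m=3):
    """Syndrome (p_m ... p_1) read as a binary number = erroneous position (0 = no error)."""
    n = len(received)
    word = [0] + list(received)
    syndrome = 0
    for j in range(1, m + 1):
        p_j = sum(word[i] for i in range(1, n + 1) if i & (1 << (j - 1))) % 2
        syndrome |= p_j << (j - 1)
    return syndrome
```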

7 Example
n = 7, m = 3: parity bits at positions 1, 2, 4; the remaining 7 − 3 = 4 message bits go at positions 3, 5, 6, 7.
[Table: positions x1 … x7, with rows for the message, the code word sent, the single-bit error, and the code word received.]
Parity checks: p1 = x1 + x3 + x5 + x7, p2 = x2 + x3 + x6 + x7, p3 = x4 + x5 + x6 + x7 (mod 2).
Alternatively, turn the table 90°: reading the failing checks as a binary number, (p3 p2 p1) = (011)_two = 3, so the error is in position 3.
3.4
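Using the sketch above, a worked run in the same shape as this slide (the message bits below are illustrative, not the slide's values):

```python
msg = [1, 0, 1, 1]                      # goes to positions 3, 5, 6, 7
sent = hamming_encode(msg)              # 7-bit code word
received = sent[:]
received[2] ^= 1                        # flip x3 (position 3, index 2)
pos = hamming_syndrome(received)        # failing checks read as a binary number
print(f"syndrome points to position {pos}")   # -> 3, i.e. (011)_two
if pos:
    received[pos - 1] ^= 1              # correct the single error
assert received == sent
```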

8 Hamming Distance
The Hamming distance between two code words is the number of places in which they differ (the same as the L0 norm over F2). The minimum distance between allowable code words corresponds to the performance of the code:
1 – uniqueness
2 – single-error detection
3 – single-error correction
4 – double-error detection
5 – double-error correction
Verify the distance axioms ((c) can be done bit-wise, by contradiction).
3.6
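An illustrative check that the (7,4) code produced by hamming_encode above indeed has minimum distance 3, and hence (by the table on this slide) corrects single errors:

```python
from itertools import combinations, product

def hamming_distance(a, b):
    """Number of places in which two words differ."""
    return sum(x != y for x, y in zip(a, b))

codewords = [hamming_encode(list(bits)) for bits in product([0, 1], repeat=4)]
d_min = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
print(f"minimum distance = {d_min}")    # -> 3
```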

9 Sphere Packing
For single-error correction, with minimum distance 3, the spheres of radius one around each code word must not overlap (k = number of message bits, n = total number of bits, m = n − k = number of check bits):
2ᵏ (1 + n) ≤ 2ⁿ, i.e. 1 + n ≤ 2ᵐ
Double-error correction needs non-overlapping spheres of radius 2:
2ᵏ (1 + n + n(n − 1)/2) ≤ 2ⁿ, i.e. 1 + n + n(n − 1)/2 ≤ 2ᵐ
The "density" of the packing corresponds to the efficiency of the code. When equality is achieved, the code is said to be "perfect", e.g. double-error correcting with m = 12, n = 90.
3.6
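A quick numeric check of the "perfect" cases, using the bounds above:

```python
from math import comb

# Double-error-correcting parameters n = 90, m = 12: the radius-2 sphere volume
# exactly fills the space, 1 + 90 + C(90, 2) = 4096 = 2**12.
n, m = 90, 12
sphere_volume = sum(comb(n, i) for i in range(3))       # errors of weight 0, 1, 2
print(sphere_volume, 2 ** m, sphere_volume == 2 ** m)   # 4096 4096 True

# The (7,4) Hamming code is perfect for single errors: 1 + 7 = 8 = 2**3.
print(1 + 7 == 2 ** 3)
```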

10 Double-Error Detection
A plain Hamming code would make a double-error situation worse: it would "correct" a third bit, leaving three errors. But if we add an extra parity check bit x0 over the entire word, we can detect double errors:
x0, x1, …, xn with x0 = ∑ { xi : i = 1 … n } mod 2
Algebraic approach:
Hamming syndrome = 0, extra check = 0: correct
Hamming syndrome = 0, extra check = 1: error in x0
Hamming syndrome = i ≠ 0, extra check = 1: error in xi
Hamming syndrome ≠ 0, extra check = 0: double error
Geometric approach: in the original code, points at distance 3 from each other have a different number of 1's in them (mod 2), since they differ in 3 positions, so their extra parity checks will be set differently, and now they differ in 4 positions!
3.7
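A sketch of this decision rule in Python, reusing hamming_syndrome from the earlier sketch (the name classify is mine):

```python
def classify(x0, received, m=3):
    """Apply the slide-10 table: x0 is the extra even-parity bit over x1..xn."""
    syndrome = hamming_syndrome(received, m)
    overall = (x0 + sum(received)) % 2           # extra check: 1 if total parity is odd
    if syndrome == 0 and overall == 0:
        return "correct"
    if syndrome == 0 and overall == 1:
        return "error in x0"
    if syndrome != 0 and overall == 1:
        return f"single error in x{syndrome}"
    return "double error"                        # syndrome != 0, overall == 0
```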

11 Hamming Codes on Words
Think of doing the parity checks over entire words, using a logical (exclusive-or) sum: the all-zero word 0 0 … 0 represents no parity error; anything else represents a parity error. The same syndrome technique locates the erroneous word, and any of the failing parity-check words can then be added (exclusive-or'd) to the erroneous word to correct it.
Consider the ASCII code with each 8-bit word having a character parity bit and, for each block of words, a checksum word (the exclusive-or sum of the words). The parity check within a word locates the erroneous word, and the checksum bits locate the erroneous bit. This is equivalent to a rectangular code.
[Figure: a block of ASCII code words, each with a character parity bit, plus a checksum word; the x marks the erroneous bit.]
3.8
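A minimal Python sketch of this block scheme (names and byte values are illustrative, not from the slide): each byte carries an even parity bit, and the block carries a checksum byte equal to the XOR of all data bytes. A failing byte parity locates the bad word; the XOR mismatch with the checksum locates the bad bit, just like the rectangular code of slide 2.

```python
from functools import reduce

def parity(byte):
    return bin(byte).count("1") % 2

def encode_block(data):
    parities = [parity(b) for b in data]                  # one parity bit per word
    checksum = reduce(lambda a, b: a ^ b, data, 0)        # checksum word = XOR of words
    return data, parities, checksum

def correct_block(data, parities, checksum):
    bad_word = [i for i, b in enumerate(data) if parity(b) != parities[i]]
    diff = reduce(lambda a, b: a ^ b, data, 0) ^ checksum
    if bad_word and diff:                                 # single-bit error in a data byte
        data[bad_word[0]] ^= diff                         # diff has exactly one bit set
    return data

# Usage: flip one bit and recover the original block.
block = [0x48, 0x65, 0x6C, 0x6C, 0x6F]                    # "Hello"
data, parities, checksum = encode_block(list(block))
data[2] ^= 0b00010000                                     # corrupt one bit of the third byte
assert correct_block(data, parities, checksum) == block
```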

