
1 Coding in a noisy channel
Error protection: improve the tolerance of errors.
Error detection: indicate the occurrence of errors.
Source coding / channel coding: channel coding represents the source information in a manner that minimises the probability of error when decoding back into symbols.
Symbol error / bit error: the bit error probability p is the probability of an outcome per trial (per bit) in a binomial distribution. The symbol error probability depends on the coding method used, and we want it to be small. This is achievable by a suitable choice of code words, for example:

2 (c) A selection of 4 code words out of the 32 possible 5-digit words:
A 00000
B 00111
C 11001
D 11110
This is a compromise between two extremes:
1. Use enough code words to give a reasonable information rate R.
2. Make the code words as different as possible to reduce p(e); here each code word differs from all the others in at least three digit positions.
The Hamming distance is the number of digit positions in which a pair of code words differ.

3 One error can be tolerated. The minimum Hamming distance (MHD) is the smallest Hamming distance over the set of code words; here MHD = 3. The example suggests that a compromise between a low error rate and a high data rate may be possible by using large groups of digits, so that there is a large number of possible code words, and by selecting enough of them for use to keep R reasonable, but with code words sufficiently different from each other to make p(e) acceptably low. This leads to Shannon's second theorem.

4 Hamming distance
A good channel code is designed so that, if a few errors occur in transmission, the output can still be identified with the correct input. This is possible because, although corrupted, the output is sufficiently similar to the input to be recognisable. The idea of similarity is made precise by the definition of the Hamming distance. Let X and Y be two binary sequences of the same length; the Hamming distance d(X, Y) between them is the number of positions in which they disagree. Suppose the code word X is transmitted over the channel and, due to errors, Y is received. The decoder assigns to Y the code word X that minimises the Hamming distance d(X, Y).

5 If Y is received and the decoder has a collection of code words X_1, X_2, X_3, …, then some X_j (e.g. X_2) satisfies
d(X_2, Y) ≤ d(X_i, Y) for all i.
This means that the decoder decides X_2 was the transmitted code word.
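
As a concrete illustration (not part of the original slides; the function names are my own), here is a short Python sketch of this minimum-distance decoding rule, using the four code words of slide 2:

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of positions in which two equal-length binary strings differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def decode(received: str, codewords: list[str]) -> str:
    """Assign to the received word the nearest code word (minimum Hamming distance)."""
    return min(codewords, key=lambda c: hamming_distance(received, c))

codewords = ["00000", "00111", "11001", "11110"]   # slide 2 example, MHD = 3
print(hamming_distance("00000", "00111"))  # 3
print(decode("00100", codewords))          # '00000': the single error is corrected
```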

6 For example, consider the code words:
a 10000
b 01100
c 00011
If the transmitter sends 10000 but there is a single bit error and the receiver gets 10001, the "nearest" code word is in fact 10000, and so the correct code word is found. It can be shown that to detect E bit errors a coding scheme requires code words with a Hamming distance of at least E + 1, and that to correct E bit errors it requires a Hamming distance of at least 2E + 1 between the code words. By designing a good code, we try to ensure that the Hamming distance between the possible code words X is larger than the Hamming distance produced by the channel errors.
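
A small sketch (again illustrative, not from the slides) that computes the MHD of this code and the resulting detection and correction capability:

```python
from itertools import combinations

def hamming_distance(x: str, y: str) -> int:
    return sum(a != b for a, b in zip(x, y))

def minimum_hamming_distance(codewords: list[str]) -> int:
    """Smallest pairwise Hamming distance over the set of code words."""
    return min(hamming_distance(x, y) for x, y in combinations(codewords, 2))

mhd = minimum_hamming_distance(["10000", "01100", "00011"])
print(mhd)             # 3
print(mhd - 1)         # up to MHD - 1 = 2 errors can be detected
print((mhd - 1) // 2)  # up to (MHD - 1) // 2 = 1 error can be corrected
```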

7 Shannon's second theorem
If coding is done in long groups of n binary digits, use only a small number M of the 2^n possible combinations:
M = 2^(nC)
where C is the channel capacity per binary digit. Assuming that the M selected code words are equiprobable, the information rate is
R = (log2 M) / n = C.
Information can be transmitted at rates up to and including the channel capacity without error (without symbol errors in the decoded output). Unfortunately, a large n can be impractical.

8 Example
(a) n = 10, p = 0.01, C ≈ 0.9.
The maximum M = 2^(nC) = 2^9 = 512. Possible number of code words = 2^10 = 1024. So 1 out of every 2 code words is used → error rate poor.
(b) n = 100, p = 0.01, C ≈ 0.9.
M = 2^90, possible number of code words = 2^100. So 1 out of every 2^10 = 1024 code words is used → error rate very good. But M = 2^90 ≈ 1.24 × 10^27, which is an impractically large source alphabet, and it is difficult to choose which code word to use.
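
A sketch reproducing these numbers; the capacity formula C = 1 - H(p) for the binary symmetric channel is an assumption on my part, consistent with the slides' rounded value of 0.9:

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per binary digit."""
    return 1.0 + p * log2(p) + (1 - p) * log2(1 - p)

print(bsc_capacity(0.01))        # ~0.919, rounded to 0.9 on the slide

C = 0.9                          # value used on the slide
for n in (10, 100):
    M = 2 ** round(n * C)        # maximum number of usable code words, 2^(nC)
    total = 2 ** n               # all possible n-digit words
    print(n, total // M)         # 1 out of every (total // M) words used: 2, then 1024
```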

9 Practical codes for error detection and correction
With a set of code words with MHD = 2, a single binary error will produce a word not in the list, so the error will be detected, but there will be no way of knowing which code word was actually transmitted. The error can, however, be corrected if the set of code words has MHD = 3, by choosing the nearest code word. In general:
To detect E errors: MHD ≥ E + 1
To correct E errors: MHD ≥ 2E + 1
(i) Block codes (n, k)
n = total length of the code
k = number of information digits
n - k = number of checking digits

10 Error detection
i. Double-difference codes
0 11000    5 01010
1 10100    6 01001
2 10010    7 00110
3 10001    8 00101
4 01100    9 00011
MHD = 2, so single errors can be detected. A simple way of constructing such a code with n = 5 is to use all the combinations in which exactly two 1s appear, known as the "two out of five" code (10 combinations). Alternatively, a "3 out of 7" code with n = 7 has 35 combinations. Such systems can be arranged to request retransmission of an incorrect symbol automatically: ARQ (automatic repeat request).
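
A brief sketch (the helper name is illustrative) generating such constant-weight codes:

```python
from itertools import combinations

def m_out_of_n_code(m: int, n: int) -> list[str]:
    """All n-digit binary words containing exactly m ones."""
    return ["".join("1" if i in ones else "0" for i in range(n))
            for ones in combinations(range(n), m)]

two_of_five = m_out_of_n_code(2, 5)
print(len(two_of_five))            # 10: one word per decimal digit
print(two_of_five[0])              # '11000', the word assigned to 0 above
print(len(m_out_of_n_code(3, 7)))  # 35
```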

11 Parity codes
Parity is the sum of the binary digits of a word (even or odd). A parity digit is an extra digit added to make the overall parity even or odd (as desired). An odd number of errors changes the parity, so the code will detect any odd number of errors but will fail to detect an even number of errors. The parity digit can be generated with exclusive-OR (XOR) gates, e.g. with 3 information digits the parity digit is c = I_1 ⊕ I_2 ⊕ I_3 and the code word is I_1 I_2 I_3 c.

12 With 2 information digits and even parity:
00 → 000
01 → 011
10 → 101
11 → 110
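
A minimal sketch (illustrative names) of even-parity encoding and odd-error detection:

```python
def add_parity(info: str) -> str:
    """Append an even-parity digit: the XOR of the information digits."""
    parity = 0
    for bit in info:
        parity ^= int(bit)
    return info + str(parity)

def parity_ok(word: str) -> bool:
    """True if the word has even parity (no error, or an undetectable even number)."""
    return sum(int(b) for b in word) % 2 == 0

for info in ("00", "01", "10", "11"):
    print(info, "->", add_parity(info))   # reproduces the table above

print(parity_ok("011"))   # True: valid code word
print(parity_ok("010"))   # False: a single error is detected
```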

13 [Diagram: binomial probabilities P(0), P(1), P(2), P(3), P(4), … of r errors in a block, P(r) = C(n, r) p^r (1 - p)^(n - r), grouped by code type.]
E = 0 (no redundancy): correct-symbol probability is P(0).
E = 1 (one error can be corrected): correct-symbol probability is P(0) + P(1).
E = 2 (two errors can be corrected): correct-symbol probability is P(0) + P(1) + P(2).
These sums give the correct-symbol probability in block codes.
Parity codes (an odd number of errors can be detected): the odd-error terms P(1) + P(3) + … give the probability of symbols with a detected error (which can be corrected via repeated transmission); P(0) is the correct-symbol probability in parity codes.
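
A sketch (assuming the binomial error model from slide 1) computing these probabilities:

```python
from math import comb

def p_r_errors(n: int, p: float, r: int) -> float:
    """Binomial probability of exactly r bit errors in an n-digit block."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

n, p = 5, 0.05
# Correct-symbol probability for a block code correcting up to E errors:
for E in (0, 1, 2):
    print(E, sum(p_r_errors(n, p, r) for r in range(E + 1)))

# For a parity code of length n + 1, odd error counts are detected:
detected = sum(p_r_errors(n + 1, p, r) for r in range(1, n + 2, 2))
print(detected)
```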

14 Homework: Sec. 3.3.4, problem 2. Solution: p. 343.
2. (a) English text is to be transmitted via a binary symmetric channel that has a capacity of 50 binary digits per second and a binary error probability of 0.05. A non-redundant five-digit code is to be used for the symbols and punctuation (32 characters). Find the symbol error probability and comment on the quality of the received text.
(b) A single parity digit is added to the above five-digit code, and the system is arranged such that a detected error automatically leads to the immediate retransmission of that symbol until no error is detected. Find the symbol error probability and the effective rate of transmission of symbols, and comment on the quality of the received text.
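
A hedged sketch of how one might check the answers numerically; the retransmission model here is my assumption (a symbol is accepted only when its parity checks, so the residual errors are the nonzero even error counts):

```python
from math import comb

def p_r(n: int, p: float, r: int) -> float:
    # Binomial probability of exactly r errors in n digits
    return comb(n, r) * p**r * (1 - p)**(n - r)

p = 0.05

# (a) Non-redundant 5-digit code: any bit error corrupts the symbol.
print(1 - (1 - p)**5)                               # symbol error probability

# (b) 6-digit code (5 info + 1 parity); odd error counts trigger retransmission.
accepted_ok = p_r(6, p, 0)                          # accepted and correct
undetected  = sum(p_r(6, p, r) for r in (2, 4, 6))  # accepted but wrong
p_accept    = accepted_ok + undetected
print(undetected / p_accept)                        # symbol error probability after ARQ
print(50 / 6 * p_accept)                            # effective rate, symbols per second
```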

