CY2G2 Information Theory 5


1 CY2G2 Information Theory 5
Channel capacity C = max I(xy), i.e. the maximum information transfer across the channel.
Binary symmetric channels: the noise in the system is random, so the probabilities of error in '0' and '1' are the same. The channel is characterised by a single value p, the binary error probability.
[Channel diagram: x (transmit) with input probabilities p(0) and p(1) = 1 - p(0); y (receive); each bit flipped with probability p.]
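A minimal sketch (not from the slides) of the BSC model: each transmitted bit is flipped independently with the same probability p. The function name bsc_transmit and the fixed seed are illustrative choices.

```python
import random

def bsc_transmit(bits, p, seed=0):
    """Pass a list of 0/1 bits through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    rng = random.Random(seed)
    return [bit ^ 1 if rng.random() < p else bit for bit in bits]

# Example: send ten zeros through a noisy channel with p = 0.125.
print(bsc_transmit([0] * 10, p=0.125))   # some bits may arrive as 1
```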

2 Channel capacity of this channel
Mutual information increases as the error rate decreases: I(xy) = H(y) - H(y|x), where H(y|x) is the backward equivocation (error entropy). Since p is fixed, H(y|x) = H(p) is fixed, so I(xy) is maximum when H(y) is maximum. This occurs when p(0) = p(1) at the receiver (output), giving H(y) = 1 bit. Hence
C = 1 - H(p), where H(p) = -p log₂ p - (1 - p) log₂ (1 - p).
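As a hedged illustration of C = 1 - H(p), the short sketch below (the helper names binary_entropy and bsc_capacity are my own, not from the slides) computes the capacity in bits per symbol:

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per symbol."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0 (noiseless channel)
print(bsc_capacity(0.5))   # 0.0 (useless channel)
```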

3 Example: find the capacity of a binary symmetric channel with a binary error probability of 0.125.
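Worked answer, using C = 1 - H(p): H(0.125) = -0.125 log₂ 0.125 - 0.875 log₂ 0.875 ≈ 0.375 + 0.169 = 0.544, so C ≈ 1 - 0.544 = 0.456 bit per symbol (the illustrative bsc_capacity(0.125) in the sketch above returns the same value).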
[Figures: (a) variation of information transfer with output probability; (b) variation of capacity with error probability.]

4 How to overcome the problem of information loss in noisy channel?
(a) Physical solution? (b) System solution (channel coding).
Source coding: the task of source coding is to represent the source information with the minimum number of symbols, under the assumption that the channel is noise-free.
When a code is transmitted over a channel in the presence of noise, errors will occur.
Channel coding: the task of channel coding is to represent the source information in a manner that minimises the error probability in decoding.
Redundancy: add extra information to compensate for information loss (analogy: temperature control of a room in winter under different outdoor temperatures).

5 A symbol error is an error with respect to some decision rule: a received code word (some of whose bits may be in error) is classified as the wrong symbol, i.e. a symbol different from the one originally sent. The binomial distribution plays an important role in channel coding. A binomial experiment consists of n identical trials (think of coding a symbol as a binary digit sequence, i.e. a code word, so n is the length of the code word). Each trial has two possible outcomes, S or F; here S can be defined as a bit transmission error (1→0 or 0→1), occurring with probability p, the bit error rate. The probability of exactly r bit errors in a code word is then
P(r) = C(n, r) p^r (1 - p)^(n - r).
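A small sketch of this binomial calculation, assuming independent bit errors; prob_r_errors is an illustrative name, not from the slides:

```python
from math import comb

def prob_r_errors(n, r, p):
    """Probability of exactly r bit errors in an n-bit code word,
    with independent bit error probability p:
    P(r) = C(n, r) * p**r * (1 - p)**(n - r)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# e.g. probability of exactly one error in a 5-bit code word when p = 0.01
print(prob_r_errors(5, 1, 0.01))   # ~0.048
```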

6 CY2G2 Information Theory 5
Coding in a noisy channel for error protection: improve the tolerance of errors through error detection (indicate the occurrence of errors) or error correction.
Binary coding for error protection. Example: assume a binary symmetric channel with p = 0.01 (bit error probability).
(1) Coding by repetition: code A = 00000, B = 11111, and use a majority decision rule (if there are more 0's than 1's, decide A). Up to 2 bit errors are tolerated without producing a symbol error. The binomial probability distribution is used to find the symbol error probability p(e); see the sketch after this slide.
Notes: redundancy means adding extra information to compensate for information loss, as in the room-heating analogy: (i) compensate as necessary, but not too much, to limit the cost (here, transfer speed); (ii) release heat in a controlled way to maintain the temperature over a period of time (compare channel coding applied to all possible symbols). Note also the difference between source coding and channel coding (they can be viewed as two successive stages of coding). Key terms: error protection; code word; binary (bit) error rate (given, fixed); symbol error rate (depends on the decision rule, via the binomial distribution).
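A sketch of the symbol-error calculation for the 5-bit repetition code under majority decision: a symbol error occurs only when 3 or more of the 5 bits are in error. The name repetition_symbol_error is illustrative.

```python
from math import comb

def repetition_symbol_error(n, p):
    """Symbol error probability of an n-bit repetition code (n odd) under
    majority decision: sum of binomial terms for more than n//2 bit errors."""
    t = n // 2   # up to t bit errors are tolerated
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(t + 1, n + 1))

print(repetition_symbol_error(5, 0.01))   # ~9.85e-06
```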

7 Information rate
The information rate is R = (log₂ M) / n bits per binary digit, where M is the number of equiprobable code words and n is the number of binary digits per code word. The aim is to keep the symbol error probability P(e) small while maintaining a reasonable rate R.

8 2) Coding by selection of code words
(Using 5 digits there are 32 possible code words, but we do not have to use them all.)
Two selections (i.e. repetition): A = 00000, B = 11111. This gives R = (log₂ 2)/5 = 0.2 bit per digit, with p(e) ≈ 10⁻⁵ from the majority-decision calculation above.
Thirty-two selections (all code words used): R = (log₂ 32)/5 = 1 bit per digit, but now any bit error causes a symbol error, so p(e) = 1 - (1 - p)⁵ ≈ 0.049. A comparison of the two is sketched below.
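A sketch comparing the two extremes on the same 5-digit code with p = 0.01, assuming R = log₂(M)/n and that, with all 32 code words in use, any bit error produces a symbol error:

```python
from math import comb, log2

n, p = 5, 0.01

# Two code words (repetition): low rate, very low symbol error probability.
R_two = log2(2) / n                                    # 0.2 bit per digit
pe_two = sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(3, n + 1))

# Thirty-two code words: full rate, but any bit error is a symbol error.
R_all = log2(32) / n                                   # 1.0 bit per digit
pe_all = 1 - (1 - p)**n                                # ~0.049

print(R_two, pe_two)   # 0.2  ~9.85e-06
print(R_all, pe_all)   # 1.0  ~0.049
```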

9 A compromise between two extremes
4 selections: a compromise between the two extremes needs enough code words to give a reasonable R, with the code words as different as possible to reduce p(e), e.g.
A 00000
B 00111
C 11001
D 11110
Each code word differs from all the others in at least three digit positions. The Hamming distance is the number of digit positions in which a pair of code words differ.

10 CY2G2 Information Theory 5
The minimum Hamming distance (MHD) is the smallest Hamming distance over the set of code words. For the four code words above, MHD = 3, so one error per code word can be tolerated.
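A sketch of the Hamming distance and MHD calculation for the four code words above; the helper name hamming is my own, and the rule that (MHD - 1) // 2 errors are correctable is the standard coding-theory bound.

```python
from itertools import combinations

def hamming(a, b):
    """Number of digit positions in which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

codewords = ["00000", "00111", "11001", "11110"]
mhd = min(hamming(a, b) for a, b in combinations(codewords, 2))

print(mhd)              # 3
print((mhd - 1) // 2)   # 1 error per code word can be corrected
```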

11 32, 2 and 4 selections

32 Selections              2 Selections   4 Selections
A 00000     Q 10000        00000          00000
B 00001     R 10001        11111          00111
C 00010     S 10010                       11001
D 00011     T 10011                       11110
E 00100     U 10100
F 00101     V 10101
G 00110     W 10110
H 00111     X 10111
I 01000     Y 11000
J 01001     Z 11001
K 01010       11010
L 01011     . 11011
M 01100     , 11100
N 01101     ; 11101
O 01110     : 11110
P 01111     ? 11111

