1 Theory of Information Lecture 13
Lecture 14: Decision Rules, Nearest Neighbor Decoding (Sections 4.2 and 4.3)

2 What Is a Decision Rule?
[Figure: the communications channel model — an encoded message passes through a noisy channel, the received word x comes out, and a decision rule turns it into a decoded message.]
An (n,M)-code is a block code of length n and size M (i.e. every codeword has length n, and there are M codewords altogether).
Definition. Let C be an (n,M)-code over a code alphabet A, and assume C does not contain the symbol ?. A decision rule for C is a function f : A^n → C ∪ {?}.
Intuition: f(x) = c means assuming that the received word x was meant to be c, i.e. that c was sent; in other words, decoding, or interpreting, x as c. If no such codeword can be identified, the symbol ? is used to declare a decoding error, so in this case f(x) = ?.
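To make the definition concrete, here is a minimal Python sketch (the code, and the deliberately naive rule it implements, are illustrative, not from the text): a decision rule is just a function from length-n words over A to C ∪ {?}.

```python
# A decision rule for an (n,M)-code C over alphabet A is a function
# f : A^n -> C ∪ {?}, where '?' declares a decoding error.
C = {"0000", "1111"}  # an (n=4, M=2) binary code

def f(x: str) -> str:
    """Naive decision rule: accept x only if it is itself a codeword."""
    return x if x in C else "?"

print(f("1111"))  # -> 1111
print(f("0111"))  # -> ?  (this rule gives up on any non-codeword)
```

The rest of the lecture is about choosing a better f — one that maps non-codewords to plausible codewords instead of giving up.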

3 Two Sorts of Decision Rules
Goal: maximize the probability of correct decoding.
Code alphabet: {0,1}. Code: {0000, 1111}.
[Channel diagram: P(0 received | 0 sent) = 90%, P(1 | 0) = 10%, P(0 | 1) = 11%, P(1 | 1) = 89%.]
If 0111 is received, how would you interpret it?
If 0011 is received, how would you interpret it?
But if you knew that 1111 is sent 99% of the time, how would you interpret 0011?
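A quick way to explore these questions is to compute the forward probabilities directly. A sketch, assuming the transition probabilities as read off the diagram above:

```python
# Transition probabilities P(received | sent) for the slide's channel.
P = {("0", "0"): 0.90, ("1", "0"): 0.10,   # sent 0
     ("0", "1"): 0.11, ("1", "1"): 0.89}   # sent 1

def likelihood(x: str, c: str) -> float:
    """P(x received | c sent) for this memoryless binary channel."""
    prob = 1.0
    for xi, ci in zip(x, c):
        prob *= P[(xi, ci)]
    return prob

for x in ("0111", "0011"):
    print(x, likelihood(x, "0000"), likelihood(x, "1111"))
# If 1111 is known to be sent 99% of the time, weight each likelihood
# by its prior (0.01 vs. 0.99) before comparing.
```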

4 Which Rule Is Better?
Ideal observer decision rule vs. maximum likelihood decision rule.
Advantages:

5 Ideal Observer Decision Rule
[Figure: a received word x with arrows to the codewords c1, …, cM, labeled with the backward probabilities P(c1 sent | x received), …, P(cM sent | x received); in the example the three arrows to c1, c2, c3 carry 25%, 35%, and 40%.]
The ideal observer decision rule decodes a received word x as a codeword c with maximal probability P(c sent | x received).
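As a sketch (the numbers match the figure's percentages, but their assignment to codewords is illustrative), the ideal observer rule is a one-line argmax over the backward probabilities:

```python
# Hypothetical backward probabilities P(c sent | x received) for one
# fixed received word x, as in the figure.
posterior = {"c1": 0.25, "c2": 0.35, "c3": 0.40}

# Ideal observer: decode x as the codeword with maximal posterior.
decoded = max(posterior, key=posterior.get)
print(decoded)  # -> c3
```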

6 Maximum Likelihood Decision Rule
[Figure: the same picture with forward probabilities — arrows between x and c1, …, cM labeled P(x received | c1 sent), …, P(x received | cM sent); in the example the three arrows carry 50%, 70%, and 80%.]
The maximum likelihood decision rule decodes a received word x as a codeword c = f(x) with maximal probability P(x received | c sent).
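The maximum likelihood rule is the same argmax applied to the forward probabilities. For a binary symmetric channel these can be computed from the number of disagreeing bits; a sketch (function names and the parameter value are illustrative):

```python
def bsc_likelihood(x: str, c: str, p: float) -> float:
    """P(x received | c sent) over a BSC with crossover probability p."""
    d = sum(a != b for a, b in zip(x, c))   # number of flipped bits
    return p**d * (1 - p)**(len(x) - d)     # flips times non-flips

def ml_decode(x: str, code: set, p: float) -> str:
    """Maximum likelihood: pick the codeword maximizing P(x | c)."""
    return max(code, key=lambda c: bsc_likelihood(x, c, p))

print(ml_decode("0111", {"0000", "1111"}, p=0.1))  # -> 1111
```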

7 One Rule as a Special Case of the Other
[Figure: the same three-codeword picture, with forward probabilities 50%, 70%, 80%.]
Assume all codewords are equally likely to be sent. What would the backward probabilities be then?
Theorem 4.2.2. For the uniform input distribution, ideal observer decoding coincides with maximum likelihood decoding.
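The reason is one line of Bayes: P(c | x) = P(x | c)·P(c)/P(x), and with P(c) = 1/M constant, the argmax over c is unchanged. A quick numeric check with the figure's illustrative values:

```python
likelihood = {"c1": 0.50, "c2": 0.70, "c3": 0.80}  # P(x | c), illustrative
prior = {c: 1/3 for c in likelihood}               # uniform input distribution

# Unnormalized posteriors: P(c | x) is proportional to P(x | c) * P(c).
posterior = {c: likelihood[c] * prior[c] for c in likelihood}

# Same winner either way, as Theorem 4.2.2 states.
assert max(posterior, key=posterior.get) == max(likelihood, key=likelihood.get)
print("ideal observer and maximum likelihood agree")
```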

8 Example 4.2.1
Suppose codewords of C = {000, 111} are sent over a binary symmetric channel with crossover probability p, where p < 1/2. If the string 100 is received, how should it be decoded by the maximum likelihood decision rule?
P(100 received | 000 sent) = p(1-p)²
P(100 received | 111 sent) = p²(1-p)
Since p(1-p)² > p²(1-p) whenever p < 1/2, the rule decodes 100 as 000. Would the same necessarily be the case under ideal observer decoding?
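A sketch that carries out the comparison with exact fractions; p = 1/10 is an assumed value (the example's numeric p did not survive in this transcript), but any p < 1/2 yields the same winner:

```python
from fractions import Fraction

def bsc_likelihood(x: str, c: str, p: Fraction) -> Fraction:
    """P(x received | c sent) over a BSC with crossover probability p."""
    d = sum(a != b for a, b in zip(x, c))
    return p**d * (1 - p)**(len(x) - d)

p = Fraction(1, 10)  # assumed value; the conclusion holds for any p < 1/2
print(bsc_likelihood("100", "000", p))  # p(1-p)^2 = 81/1000
print(bsc_likelihood("100", "111", p))  # p^2(1-p) =  9/1000 -> decode as 000
```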

9 Sometimes the Two Rules Yield Different Results
[Channel diagram: P(1 received | 1 sent) = 95%, P(0 | 1) = 5%, P(1 | 0) = 6%, P(0 | 0) = 94%.]
Assume: C = {00, 11}; 11 is sent 70% of the time; 01 is received.
How would 01 be decoded by the maximum likelihood rule? As 00, since P(01 | 00) = 0.94 · 0.06 = 0.0564 > P(01 | 11) = 0.05 · 0.95 = 0.0475.
How would 01 be decoded by the ideal observer rule? As 11, since 0.0564 · 0.3 ≈ 0.0169 < 0.0475 · 0.7 ≈ 0.0333.
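A sketch computing both decisions; the transition probabilities are my reading of the diagram above (chosen so that, as the slide's title requires, the two rules disagree):

```python
# P(received | sent), read off the channel diagram above (an assumption).
P = {("0", "0"): 0.94, ("1", "0"): 0.06,   # sent 0
     ("0", "1"): 0.05, ("1", "1"): 0.95}   # sent 1

prior = {"00": 0.30, "11": 0.70}           # 11 is sent 70% of the time

def likelihood(x: str, c: str) -> float:
    prob = 1.0
    for xi, ci in zip(x, c):
        prob *= P[(xi, ci)]
    return prob

x = "01"
print(max(prior, key=lambda c: likelihood(x, c)))             # ML       -> 00
print(max(prior, key=lambda c: likelihood(x, c) * prior[c]))  # observer -> 11
```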

10 Handling Ties
In the case of a tie, it is reasonable for a decision rule to declare an error.
Suppose codewords of C = {0000, 1111} are sent over a binary symmetric channel with crossover probability p = ¼. How should the following strings be decoded by the maximum likelihood decision rule?
0000 → 0000 (zero flips is the most likely explanation)
1011 → 1111 (one flip is more likely than three)
0011 → ? — it is a tie:
P(0011 received | 0000 sent) = p²(1-p)² = 9/256
P(0011 received | 1111 sent) = p²(1-p)² = 9/256
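A sketch checking all three received words with exact fractions (the helper name is mine):

```python
from fractions import Fraction

def bsc_likelihood(x: str, c: str, p: Fraction) -> Fraction:
    d = sum(a != b for a, b in zip(x, c))
    return p**d * (1 - p)**(len(x) - d)

p = Fraction(1, 4)
for x in ("0000", "1011", "0011"):
    l0 = bsc_likelihood(x, "0000", p)
    l1 = bsc_likelihood(x, "1111", p)
    winner = "0000" if l0 > l1 else "1111" if l1 > l0 else "?"
    print(x, "->", winner)   # 0011 ties at 9/256 each, hence '?'
```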

11 Hamming Distance and Nearest Neighbor Decoding
Definition. Let x and y be two strings of the same length over the same alphabet. The Hamming distance between x and y, denoted d(x,y), is defined to be the number of places in which x and y differ. E.g.: d(000,100) = 1, d(111,100) = 2. The decision rule that assigns to a received word the closest codeword (in Hamming distance) is called the nearest neighbor decision rule.
Theorem. For a binary symmetric channel with crossover probability p < 1/2, the maximum likelihood decision rule is equivalent to the nearest neighbor decision rule.
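The equivalence is easy to see: over a BSC with p < 1/2, P(x | c) = p^d(1-p)^(n-d) strictly decreases as the distance d = d(x,c) grows, so maximizing likelihood is the same as minimizing distance. A sketch of the two helpers:

```python
def hamming(x: str, y: str) -> int:
    """Number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def nearest_neighbor(x: str, code: set) -> str:
    """Decode x as the codeword at minimal Hamming distance."""
    return min(code, key=lambda c: hamming(x, c))

print(hamming("000", "100"), hamming("111", "100"))  # 1 2
print(nearest_neighbor("100", {"000", "111"}))       # -> 000
```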

12 Exercise 7 of Section 4.3
Construct a binary channel for which maximum likelihood decoding is not the same as nearest neighbor decoding.
[Channel diagram: P(0 received | 0 sent) = 10%, P(1 | 0) = 90%, P(0 | 1) = 50%, P(1 | 1) = 50%.]
Let C = {001, 011} and assume 000 is received.
P(000 received | 001 sent) = 0.1 · 0.1 · 0.5 = 0.005
P(000 received | 011 sent) = 0.1 · 0.5 · 0.5 = 0.025
So maximum likelihood decodes 000 as 011, even though 001 is the nearest neighbor (Hamming distance 1 vs. 2).
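A sketch verifying the construction (the arrow labels are my reading of the diagram above):

```python
# P(received | sent) for the exercise's asymmetric channel (an assumption).
P = {("0", "0"): 0.1, ("1", "0"): 0.9,
     ("0", "1"): 0.5, ("1", "1"): 0.5}

def likelihood(x: str, c: str) -> float:
    prob = 1.0
    for xi, ci in zip(x, c):
        prob *= P[(xi, ci)]
    return prob

def hamming(x: str, y: str) -> int:
    return sum(a != b for a, b in zip(x, y))

C = {"001", "011"}
x = "000"
print(max(C, key=lambda c: likelihood(x, c)))  # maximum likelihood -> 011
print(min(C, key=lambda c: hamming(x, c)))     # nearest neighbor   -> 001
```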

13 Theory of Information Lecture 13
Homework: Exercises 2, 3, 4, 5 of Section 4.2; Exercises 1, 2, 3 of Section 4.3.

