
1 Information Theory (資訊理論) Instructor: 陳建源 Email: cychen07@nuk.edu.tw Office: 法 401 Website: http://www.csie.nuk.edu.tw/~cychen/ Ch4: Channel

2 4.1 Introduction Communication system: Source → Coder → Channel → Decoder → Recipient (e.g. radio, optical fibre). The input alphabet of the channel is the output alphabet of the coder; the output alphabet of the channel is the input alphabet of the decoder. The output alphabet of the channel need not be the same as its input alphabet.

3 4.1 Introduction A noisy channel is characterized by the probability that a given output letter stems from a given input letter. Noiseless channel: input a yields a single output b. Noisy channel: input a may yield any of the outputs b1, b2, …. Memory: the output letter depends upon a sequence of input letters.

4 4.2 Capacity of a memoryless channel A memoryless channel with input alphabet A = {a_k} and output alphabet B = {b_s} is completely specified by giving the transition probabilities P(b_s | a_k), s = 1, …, r, k = 1, …, n. The probability of the output letter b_s is then P(b_s) = Σ_k P(b_s | a_k) p_k, where p_k = P(a_k).
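A quick numerical sketch of the output-letter formula above (function and channel values are mine, not from the slides):

```python
# Output probabilities of a memoryless channel:
# P(b_s) = sum_k P(b_s | a_k) * p_k
def output_probs(P, p):
    """P[k][s] = P(b_s | a_k); p[k] = input probability of a_k."""
    n, r = len(P), len(P[0])
    return [sum(p[k] * P[k][s] for k in range(n)) for s in range(r)]

# Illustrative binary symmetric channel with crossover 0.1, uniform input:
P = [[0.9, 0.1],
     [0.1, 0.9]]
print(output_probs(P, [0.5, 0.5]))  # each output letter has probability 1/2
```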

5 4.2 Capacity of a memoryless channel If the transition probabilities are fixed, only the input probabilities can be manipulated. The mutual information between the input and the output is I(A; B) = Σ_k Σ_s p_k P(b_s | a_k) log [ P(b_s | a_k) / P(b_s) ].
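The double sum above can be evaluated directly; a minimal sketch (names are mine, not from the slides):

```python
from math import log2

def mutual_information(P, p):
    """I(A;B) in bits for transition matrix P[k][s] and input distribution p."""
    n, r = len(P), len(P[0])
    q = [sum(p[k] * P[k][s] for k in range(n)) for s in range(r)]  # P(b_s)
    I = 0.0
    for k in range(n):
        for s in range(r):
            if p[k] > 0 and P[k][s] > 0:
                I += p[k] * P[k][s] * log2(P[k][s] / q[s])
    return I

# Noiseless binary channel with uniform input: one full bit gets through.
print(mutual_information([[1, 0], [0, 1]], [0.5, 0.5]))  # 1.0
```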

6 4.2 Capacity of a memoryless channel Def: The capacity C of a memoryless channel is defined by C = max I(A; B), the maximum being taken over all possible input probabilities p_1, p_2, …, p_n while the transition probabilities P(b_s | a_k) are held fixed. Given the constraint Σ_k p_k = 1, the maximum is found with a Lagrange multiplier.

7 4.2 Capacity of a memoryless channel Given the constraint Σ_k p_k = 1, introduce a Lagrange multiplier λ and maximize I(A; B) + λ Σ_k p_k. Partial differentiation with respect to each p_k yields the condition that Σ_s P(b_s | a_k) log [ P(b_s | a_k) / P(b_s) ] takes the same value for every k with p_k > 0; that common value is the capacity C.
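The Lagrange condition can also be reached numerically. As an aside not in the slides, the Blahut–Arimoto iteration converges to the same capacity; a minimal sketch:

```python
from math import exp, log, log2

def capacity_blahut_arimoto(P, iters=200):
    """Approximate capacity (bits) of the memoryless channel P[k][s]
    via the Blahut-Arimoto fixed-point iteration."""
    n, r = len(P), len(P[0])
    p = [1.0 / n] * n                       # start from the uniform input
    for _ in range(iters):
        q = [sum(p[k] * P[k][s] for k in range(n)) for s in range(r)]
        # D_k = sum_s P(b_s|a_k) ln( P(b_s|a_k) / q_s ), in nats
        D = [sum(P[k][s] * log(P[k][s] / q[s]) for s in range(r) if P[k][s] > 0)
             for k in range(n)]
        w = [p[k] * exp(D[k]) for k in range(n)]
        Z = sum(w)
        p = [x / Z for x in w]              # reweight toward the optimum
    q = [sum(p[k] * P[k][s] for k in range(n)) for s in range(r)]
    return sum(p[k] * P[k][s] * log2(P[k][s] / q[s])
               for k in range(n) for s in range(r) if p[k] > 0 and P[k][s] > 0)

# BSC with epsilon = 1/4: should agree with C = 1 - H(1/4).
print(round(capacity_blahut_arimoto([[0.75, 0.25], [0.25, 0.75]]), 4))
```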

8 4.2 Capacity of a memoryless channel Example 4.2a The binary symmetric channel: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 0, b_2 = 1}, with P(0|0) = P(1|1) = 1 − ε and P(1|0) = P(0|1) = ε. Form the Lagrangian and take partial derivatives with respect to the input probabilities p_1 and p_2.

9 4.2 Capacity of a memoryless channel Example 4.2a Setting the partial derivatives to zero and solving gives the optimal input probabilities.

10 4.2 Capacity of a memoryless channel Example 4.2a With p_1 = 1/2, p_2 = 1/2 the condition holds for k = 1, and likewise for k = 2, so the capacity is C = 1 + ε log ε + (1 − ε) log(1 − ε) = 1 − H(ε).
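A numerical check of the capacity formula in Example 4.2a (function name is mine):

```python
from math import log2

def bsc_capacity(eps):
    """C = 1 + eps*log2(eps) + (1-eps)*log2(1-eps) = 1 - H(eps),
    achieved at p1 = p2 = 1/2."""
    if eps in (0.0, 1.0):
        return 1.0          # degenerate cases: the channel is deterministic
    return 1 + eps * log2(eps) + (1 - eps) * log2(1 - eps)

print(bsc_capacity(0.0))    # 1.0 (noiseless)
print(bsc_capacity(0.5))    # 0.0 (output independent of input)
```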

11 4.2 Capacity of a memoryless channel Example 4.2b The binary erasure channel: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 0, b_2 = 1, b_3 = 2 (erasure)}, with P(0|0) = P(1|1) = 1 − ε and P(2|0) = P(2|1) = ε. Given the constraint Σ_k p_k = 1, set the partial derivatives to zero and solve.

12 4.2 Capacity of a memoryless channel Example 4.2b The condition holds for k = 1, and likewise for k = 2, at p_1 = p_2 = 1/2; the capacity is C = 1 − ε.
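A direct evaluation of I(A; B) for the erasure channel of Example 4.2b confirms C = 1 − ε at the uniform input (helper name is mine, not from the slides):

```python
from math import log2

def bec_mutual_info(eps, p1):
    """I(A;B) in bits for the binary erasure channel with erasure
    probability eps and input distribution (p1, 1 - p1)."""
    p2 = 1 - p1
    q = [p1 * (1 - eps), p2 * (1 - eps), eps]   # P(b=0), P(b=1), P(erasure)
    rows = [(p1, [1 - eps, 0, eps]),            # transition row for input 0
            (p2, [0, 1 - eps, eps])]            # transition row for input 1
    I = 0.0
    for px, row in rows:
        for s, t in enumerate(row):
            if px > 0 and t > 0:
                I += px * t * log2(t / q[s])
    return I

print(bec_mutual_info(0.2, 0.5))  # 0.8 = 1 - eps
```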

13 4.2 Capacity of a memoryless channel Example 4.2c A channel with three input letters and two output letters: A = {a_1 = 0, a_2 = 1, a_3 = 2}, B = {b_1 = 0, b_2 = 1} (the transition diagram shows the probabilities 1, 1 − ε and 1/2). Setting the partial derivatives to zero and solving gives the optimal input probabilities.

14 4.2 Capacity of a memoryless channel Example 4.2c The condition is verified for k = 1, k = 2 and k = 3.

15 4.3 Convexity Theorem 4.3a The mutual information is a concave function of the input probability, i.e. I(λp + (1 − λ)p′) ≥ λ I(p) + (1 − λ) I(p′) for 0 ≤ λ ≤ 1.

16 4.3 Convexity Theorem 4.3b I(p) is a maximum (equal to the channel capacity) if, and only if, p is such that Σ_s P(b_s | a_k) log [ P(b_s | a_k) / P(b_s) ] = C for all k with p_k > 0, and ≤ C for all k with p_k = 0.

17 4.5 Uniqueness Theorem 4.5a The output probabilities which correspond to the capacity of the channel are unique. Theorem 4.5b For an input which achieves capacity with the largest number of zero probabilities, the non-zero probabilities are determined uniquely and their number does not exceed the number of output letters.

18 Exercise Noiseless binary channel: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 0, b_2 = 1}, with P(0|0) = P(1|1) = 1; C = 1 bit. Noisy channel with nonoverlapping outputs: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 1, b_2 = 2, b_3 = 3, b_4 = 4}, with P(1|0) = 1/2, P(2|0) = 1/2, P(3|1) = 2/3, P(4|1) = 1/3; the input can always be recovered from the output, so C = 1 bit.

19 Exercise Noisy typewriter: A = {a_1 = 0, a_2 = 1, a_3 = 2, a_4 = 3}, B = {b_1 = 0, b_2 = 1, b_3 = 2, b_4 = 3}; each input letter goes to itself or to the next letter (cyclically) with probability 1/2 each; C = 1 bit. With 26 letters, C = log 13 bits.
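The typewriter capacities quoted above follow one pattern: with n letters and two equally likely outputs per input, C = log2(n) − 1 = log2(n/2), since using every second input letter makes the channel effectively noiseless. A quick check (function name is mine):

```python
from math import log2

def typewriter_capacity(n):
    """Capacity in bits of the noisy typewriter with n letters, each input
    mapping to itself or the next letter with probability 1/2."""
    return log2(n) - 1      # = log2(n / 2)

print(typewriter_capacity(4))    # 1.0 bit, as in the 4-letter exercise
print(typewriter_capacity(26))   # log2(13), as in the 26-letter case
```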

20 Exercise Binary symmetric channel: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 0, b_2 = 1}, with P(0|0) = P(1|1) = 1 − ε and P(1|0) = P(0|1) = ε. I(A, B) = H(B) − H(B|A) ≤ 1 − H(B|A), since H(B) ≤ 1 bit, with equality for the uniform input.

21 Exercise Binary erasure channel: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 0, b_2 = 1, b_3 = 2}, with P(0|0) = P(1|1) = 1 − ε and P(2|0) = P(2|1) = ε. I(A, B) = H(B) − H(B|A); at p = 1/2, I(A, B) = 1 − ε.

22 Exercise Z channel: A = {a_1 = 0, a_2 = 1}, B = {b_1 = 0, b_2 = 1}, with P(0|0) = 1 and P(0|1) = P(1|1) = 1/2.
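The slides leave the Z-channel capacity as an exercise; a brute-force sketch that maximizes I(A; B) over the input probability p = P(a_2 = 1) (helper names and the grid search are mine):

```python
from math import log2

def z_channel_mi(p):
    """I(A;B) in bits for the Z channel with P(0|0)=1, P(0|1)=P(1|1)=1/2
    and input distribution (1-p, p)."""
    q1 = (1 - p) + p / 2              # P(b = 0)
    q2 = p / 2                        # P(b = 1)
    I = (1 - p) * log2(1 / q1)        # input 0 always yields output 0
    if p > 0:
        I += p * 0.5 * log2(0.5 / q1) + p * 0.5 * log2(0.5 / q2)
    return I

# Grid search over the input probability p:
best = max(z_channel_mi(k / 10000) for k in range(10001))
print(round(best, 4))  # matches log2(5/4), attained at p = 0.4
```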

23 4.6 Transmission properties I(A, B) = H(A) − H(A|B). The equivocation H(A|B) is a measure of the uncertainty as to what was sent when observations are made on the output, and so assesses the effect of noise during transmission. Shannon's theorem I: If H(A) ≤ C, there is a code such that transmission over the channel is possible with an arbitrarily small number of errors, i.e. the equivocation is arbitrarily small. Shannon's theorem II: If H(A) > C, there is no code for which the equivocation is less than H(A) − C, but there is one for which the equivocation is less than H(A) − C + ε, where ε is an arbitrary positive quantity.

24 4.7 Channels in cascade Channel 1 → Channel 2: A → B → C.

25 4.7 Channels in cascade Two binary symmetric channels in cascade, each with P(0|0) = P(1|1) = 3/4 and P(1|0) = P(0|1) = 1/4 (A → B and B → C), are equivalent to a single channel A → C with P(0|0) = P(1|1) = 5/8 and P(1|0) = P(0|1) = 3/8.
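The equivalent channel above is just the matrix product of the two transition matrices; a minimal check (function name is mine):

```python
def compose(P, Q):
    """Transition matrix of two channels in cascade: (P @ Q)[i][k]
    = sum_j P[i][j] * Q[j][k]."""
    return [[sum(P[i][j] * Q[j][k] for j in range(len(Q)))
             for k in range(len(Q[0]))] for i in range(len(P))]

# Two identical BSCs with crossover 1/4:
P = [[0.75, 0.25],
     [0.25, 0.75]]
print(compose(P, P))  # [[0.625, 0.375], [0.375, 0.625]] = 5/8 and 3/8
```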

26 4.7 Channels in cascade The equivalent channel A → C: P(0|0) = P(1|1) = 5/8, P(1|0) = P(0|1) = 3/8.

27 4.7 Channels in cascade The transition probabilities p_jk of an infinite cascade are given by the limit of the n-fold product of the single-channel transition matrix as n → ∞.
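For the symmetric channel of the previous slides that limit can be checked numerically: repeated cascading drives every transition probability toward 1/2, so all channel information is eventually lost (the iteration count and helper name are mine):

```python
def compose(P, Q):
    """Transition matrix of two channels in cascade (matrix product)."""
    return [[sum(P[i][j] * Q[j][k] for j in range(len(Q)))
             for k in range(len(Q[0]))] for i in range(len(P))]

P = [[0.75, 0.25],
     [0.25, 0.75]]
M = P
for _ in range(50):       # M = P^51, a proxy for the infinite cascade
    M = compose(M, P)
print([[round(x, 6) for x in row] for row in M])  # every entry -> 0.5
```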

