
1 Refinement of Two Fundamental Tools in Information Theory. Raymond W. Yeung, Institute of Network Coding, The Chinese University of Hong Kong. Joint work with Siu Wai Ho and Sergio Verdú.

2 Discontinuity of Shannon's Information Measures
• Shannon's information measures: H(X), H(X|Y), I(X;Y) and I(X;Y|Z).
• They are described as continuous functions [Shannon 1948] [Csiszár & Körner 1981] [Cover & Thomas 1991] [McEliece 2002] [Yeung 2002].
• All of Shannon's information measures are in fact discontinuous everywhere when the random variables take values from countably infinite alphabets [Ho & Yeung 2005].
• e.g., X can be any positive integer.

3 Discontinuity of Entropy
• Let P_X = {1, 0, 0, ...} and let {P_{X_n}} be a sequence of distributions on the same countably infinite alphabet.
• As n → ∞, we can have P_{X_n} → P_X (e.g., in variational distance).
• However, H(P_{X_n}) need not converge to H(P_X) = 0; in fact H(P_{X_n}) can grow without bound.
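The slide's own construction is not legible here, so below is a minimal numerical sketch with a distribution of my own choosing (not necessarily the one on the slide): P_n puts mass 1 − 1/n on the first symbol and spreads the remaining 1/n uniformly over 2^(n²) other symbols, so P_n approaches P_X = {1, 0, 0, ...} in variational distance while its entropy blows up.

```python
import numpy as np

def binary_entropy_bits(eps):
    """h(eps) in bits, with h(0) = h(1) = 0."""
    if eps in (0.0, 1.0):
        return 0.0
    return float(-eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps))

# P_X = {1, 0, 0, ...}, so H(P_X) = 0.
# P_n puts mass 1 - 1/n on symbol 1 and spreads mass 1/n uniformly over
# 2**(n*n) extra symbols.  The support is too large to enumerate, so the
# entropy is evaluated analytically: H(P_n) = h(1/n) + (1/n) * n^2.
for n in [2, 4, 8, 16, 32]:
    eps = 1.0 / n
    variational_distance = 2 * eps            # sum_x |P_n(x) - P_X(x)|
    H_Pn = binary_entropy_bits(eps) + eps * n * n
    print(f"n={n:2d}  V(P_n, P_X)={variational_distance:.4f}  H(P_n)={H_Pn:.2f} bits")
```

As n grows, V(P_n, P_X) → 0 while H(P_n) → ∞, which is exactly the discontinuity the slide refers to.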

4 Discontinuity of Entropy
• Theorem 1: For any c ≥ 0 and any X taking values from a countably infinite alphabet with H(X) < ∞, there exists a sequence of random variables {X_n} such that the variational distance V(P_{X_n}, P_X) → 0 as n → ∞ while H(X_n) → H(X) + c.

5 Discontinuity of Entropy
• Theorem 2: For any c ≥ 0 and any X taking values from a countably infinite alphabet with H(X) < ∞, there exists a sequence of random variables {X_n} such that the relative entropy D(P_{X_n} || P_X) → 0 as n → ∞ while H(X_n) → H(X) + c.

6 Pinsker's Inequality
• Pinsker's inequality: D(P||Q) ≥ V²(P,Q) / (2 ln 2), with D in bits and V the variational distance. Hence convergence w.r.t. relative entropy implies convergence w.r.t. variational distance.
• Therefore, Theorem 2 implies Theorem 1.
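As a quick sanity check (an illustration of my own, not from the slides), the snippet below verifies Pinsker's inequality on a few random distribution pairs; this is the reason divergence convergence forces variational-distance convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_distance(p, q):
    return float(np.abs(p - q).sum())

def divergence_bits(p, q):
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / q[mask])).sum())

# Pinsker's inequality (bits): D(P||Q) >= V(P,Q)^2 / (2 ln 2)
for _ in range(5):
    p = rng.dirichlet(np.ones(6))
    q = rng.dirichlet(np.ones(6))
    D, V = divergence_bits(p, q), variational_distance(p, q)
    rhs = V * V / (2 * np.log(2))
    print(f"D = {D:.4f}   V^2/(2 ln 2) = {rhs:.4f}   Pinsker holds: {D >= rhs}")
```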

7 Discontinuity of Entropy

8 Discontinuity of Shannon's Information Measures
• Theorem 3: For any X, Y and Z taking values from countably infinite alphabets with I(X;Y|Z) < ∞, there exists a sequence of random variables (X_n, Y_n, Z_n) whose joint distributions converge to P_XYZ while I(X_n;Y_n|Z_n) does not converge to I(X;Y|Z).

9 Discontinuity of Shannon's Information Measures
(Diagram) Shannon's Information Measures → Typicality and Fano's Inequality → Applications: channel coding theorem, lossless/lossy source coding theorems, etc.

10 To Find the Capacity of a Communication Channel
(Diagram) Alice → Channel → Bob. Typicality gives the achievability bound, Capacity ≥ C_1; Fano's Inequality gives the converse bound, Capacity ≤ C_2.

11 On Countably Infinite Alphabets
(Diagram) Shannon's Information Measures (discontinuous!) → Typicality and Fano's Inequality → Applications: channel coding theorem, lossless/lossy source coding theorems, etc.

12 Typicality
• Weak typicality was first introduced by Shannon [1948] to establish the source coding theorem.
• Strong typicality was first used by Wolfowitz [1964] and then by Berger [1978]. It was further developed into the method of types by Csiszár and Körner [1981].
• Strong typicality possesses stronger properties than weak typicality, but it can be used only for random variables with finite alphabets.

13 Notation
• Consider an i.i.d. source {X_k, k ≥ 1}, where each X_k takes values from a countable alphabet X with generic distribution P_X. Assume H(P_X) < ∞.
• Let X = (X_1, X_2, ..., X_n).
• For a sequence x = (x_1, x_2, ..., x_n) ∈ X^n:
  - N(x; x) is the number of occurrences of x in x,
  - q(x; x) = n^{-1} N(x; x), and
  - Q_X = {q(x; x)} is the empirical distribution of x.
• e.g., x = (1, 3, 2, 1, 1): N(1; x) = 3, N(2; x) = N(3; x) = 1, and Q_X = {3/5, 1/5, 1/5}.
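A small sketch of this bookkeeping (using nothing beyond the slide's definitions): it computes N(x; x), q(x; x) and the empirical distribution Q_X for the example sequence.

```python
from collections import Counter

def empirical_distribution(x):
    """Q_X = {q(a; x)} with q(a; x) = N(a; x) / n for a sequence x."""
    n = len(x)
    counts = Counter(x)                      # N(a; x) for each symbol a
    return {a: c / n for a, c in counts.items()}

x = (1, 3, 2, 1, 1)
print(empirical_distribution(x))             # {1: 0.6, 3: 0.2, 2: 0.2}
```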

14 Weak Typicality
• Definition (Weak typicality): For any η > 0, the weakly typical set W^n_{[X]η} with respect to P_X is the set of sequences x = (x_1, x_2, ..., x_n) ∈ X^n such that |−n^{-1} log p(x) − H(X)| ≤ η, where p(x) = ∏_k P_X(x_k).

15 Weak Typicality
• Definition 1 (Weak typicality): For any η > 0, the weakly typical set W^n_{[X]η} with respect to P_X is the set of sequences x = (x_1, x_2, ..., x_n) ∈ X^n such that |−n^{-1} log p(x) − H(X)| ≤ η.
• Note that −n^{-1} log p(x) = −n^{-1} Σ_k log P_X(x_k) is the empirical entropy of the sequence x, while H(X) is the entropy of the generic distribution P_X.

16 Asymptotic Equipartition Property
• Theorem 4 (Weak AEP): For any η > 0:
• 1) If x ∈ W^n_{[X]η}, then 2^{−n(H(X)+η)} ≤ p(x) ≤ 2^{−n(H(X)−η)}.
• 2) For sufficiently large n, Pr{X ∈ W^n_{[X]η}} > 1 − η.
• 3) For sufficiently large n, (1 − η) 2^{n(H(X)−η)} ≤ |W^n_{[X]η}| ≤ 2^{n(H(X)+η)}.
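A hedged sketch of Weak AEP in action (the distribution, n and η are my own illustrative choices): it draws an i.i.d. sequence, computes the empirical entropy −(1/n) log₂ p(x), and checks membership in the weakly typical set.

```python
import numpy as np

rng = np.random.default_rng(1)

p = {0: 0.5, 1: 0.25, 2: 0.25}                        # example generic distribution P_X
H = -sum(v * np.log2(v) for v in p.values())          # H(X) = 1.5 bits

def empirical_entropy(x, p):
    """-(1/n) log2 p(x) for an i.i.d. sequence x."""
    return float(-np.mean([np.log2(p[a]) for a in x]))

n, eta = 2000, 0.05
x = rng.choice(list(p), size=n, p=list(p.values()))   # draw X = (X_1, ..., X_n)
rate = empirical_entropy(x, p)
print(f"-(1/n) log2 p(x) = {rate:.3f},  H(X) = {H:.3f}")
print("x is weakly typical:", abs(rate - H) <= eta)   # true w.h.p. for large n (Property 2)
```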

17 Illustration of AEP
(Figure) X^n is the set of all n-sequences. The typical set of n-sequences has total probability ≈ 1, and the distribution on it is ≈ uniform.

18 Strong Typicality
• Strong typicality has been defined in slightly different forms in the literature.
• Definition 2 (Strong typicality): For |X| < ∞ and any η > 0, the strongly typical set T^n_{[X]η} with respect to P_X is the set of sequences x = (x_1, x_2, ..., x_n) ∈ X^n such that the variational distance between the empirical distribution Q_X of the sequence x and P_X is small, i.e., Σ_x |q(x; x) − p(x)| ≤ η.

19 Asymptotic Equipartition Property
• Theorem 5 (Strong AEP): For a finite alphabet X and any η > 0:
• 1) If x ∈ T^n_{[X]η}, then 2^{−n(H(X)+λ)} ≤ p(x) ≤ 2^{−n(H(X)−λ)}, where λ → 0 as η → 0.
• 2) For sufficiently large n, Pr{X ∈ T^n_{[X]η}} > 1 − η.
• 3) For sufficiently large n, (1 − λ) 2^{n(H(X)−λ)} ≤ |T^n_{[X]η}| ≤ 2^{n(H(X)+λ)}.

20 Breakdown of Strong AEP
• If strong typicality is extended (in the natural way) to countably infinite alphabets, strong AEP no longer holds.
• Specifically, Property 2 holds but Properties 1 and 3 do not hold.

21 Typicality (x ∈ X^n, finite alphabet)
• Weak typicality: |−n^{-1} log p(x) − H(X)| ≤ η.
• Strong typicality: Σ_x |q(x; x) − p(x)| ≤ η.

22 Unified Typicality (x ∈ X^n, countably infinite alphabet)
• Weak typicality: |−n^{-1} log p(x) − H(X)| ≤ η.
• Strong typicality: Σ_x |q(x; x) − p(x)| ≤ η.
• On a countably infinite alphabet, there exist sequences x satisfying one of the two conditions but not the other, so neither condition by itself suffices.

23 Unified Typicality (x ∈ X^n, countably infinite alphabet)
• Weak typicality: |−n^{-1} log p(x) − H(X)| ≤ η.
• Unified typicality: D(Q_X || P_X) + |H(Q_X) − H(P_X)| ≤ η.
• Strong typicality: Σ_x |q(x; x) − p(x)| ≤ η.

24 Unified Typicality
• Definition 3 (Unified typicality): For any η > 0, the unified typical set U^n_{[X]η} with respect to P_X is the set of sequences x = (x_1, x_2, ..., x_n) ∈ X^n such that D(Q_X || P_X) + |H(Q_X) − H(P_X)| ≤ η.
• Weak typicality constrains |−n^{-1} log p(x) − H(X)| = |H(Q_X) + D(Q_X || P_X) − H(P_X)|; strong typicality constrains the variational distance V(Q_X, P_X).
• Each typicality corresponds to a "distance measure".
• Entropy is continuous w.r.t. the distance measure induced by unified typicality.
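To make the three "distance measures" concrete, here is a small comparison sketch (my own example distributions). It uses the identity −(1/n) log p(x) = H(Q_X) + D(Q_X||P_X), so the weak-typicality quantity is |H(Q)+D(Q||P)−H(P)|, strong typicality controls the variational distance, and unified typicality controls D(Q||P)+|H(Q)−H(P)| as defined above.

```python
import numpy as np

def H(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def D(q, p):
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return float((q[mask] * np.log2(q[mask] / p[mask])).sum())

P = np.array([0.5, 0.25, 0.25])            # generic distribution P_X
Q = np.array([0.6, 0.2, 0.2])              # an empirical distribution Q_X

weak    = abs(H(Q) + D(Q, P) - H(P))       # |-(1/n) log p(x) - H(X)|
strong  = float(np.abs(Q - P).sum())       # variational distance V(Q_X, P_X)
unified = D(Q, P) + abs(H(Q) - H(P))       # unified-typicality measure

print(f"weak    : {weak:.4f}")
print(f"strong  : {strong:.4f}")
print(f"unified : {unified:.4f}")
```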

25 Asymptotic Equipartition Property
• Theorem 6 (Unified AEP): For any η > 0:
• 1) If x ∈ U^n_{[X]η}, then 2^{−n(H(X)+η)} ≤ p(x) ≤ 2^{−n(H(X)−η)}.
• 2) For sufficiently large n, Pr{X ∈ U^n_{[X]η}} > 1 − η.
• 3) For sufficiently large n, (1 − η) 2^{n(H(X)−η)} ≤ |U^n_{[X]η}| ≤ 2^{n(H(X)+η)}.

26 Unified Typicality
• Theorem 7: For any x ∈ X^n, if x ∈ U^n_{[X]η}, then x ∈ W^n_{[X]η} and x ∈ T^n_{[X]η}.

27 Unified Joint Typicality
• Consider a bivariate information source {(X_k, Y_k), k ≥ 1} where the (X_k, Y_k) are i.i.d. with generic distribution P_XY.
• We use (X, Y) to denote the pair of generic random variables.
• Let (X, Y) = ((X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n)).
• For a pair of sequences (x, y), the empirical distribution is Q_XY = {q(x, y; x, y)}, where q(x, y; x, y) = n^{-1} N(x, y; x, y).

28 Unified Joint Typicality
• Definition 4 (Unified joint typicality): For any η > 0, the unified jointly typical set U^n_{[XY]η} with respect to P_XY is the set of sequences (x, y) ∈ X^n × Y^n such that D(Q_XY || P_XY) + |H(Q_XY) − H(P_XY)| + |H(Q_X) − H(P_X)| + |H(Q_Y) − H(P_Y)| ≤ η.
• This definition cannot be simplified.

29 Conditional AEP
• Definition 5: For any x ∈ U^n_{[X]η}, the conditional typical set of Y is defined as U^n_{[Y|X]η}(x) = {y ∈ Y^n : (x, y) ∈ U^n_{[XY]η}}.
• Theorem 8: For x ∈ U^n_{[X]η}, if |U^n_{[Y|X]η}(x)| ≥ 1, then 2^{n(H(Y|X)−ν)} ≤ |U^n_{[Y|X]η}(x)| ≤ 2^{n(H(Y|X)+ν)}, where ν → 0 as η → 0 and then n → ∞.

30 Illustration of Conditional AEP

31 Applications
• Rate-distortion theory: a version of the rate-distortion theorem was proved by strong typicality [Cover & Thomas 1991] [Yeung 2008]. It can be easily generalized to countably infinite alphabets.
• Multi-source network coding: the achievable information rate region for the multi-source network coding problem was proved by strong typicality [Yeung 2008]. It can be easily generalized to countably infinite alphabets.

32 Fano's Inequality
• Fano's inequality: For discrete random variables X and Y taking values in the same alphabet X = {1, 2, ...}, let ε = Pr{X ≠ Y}.
• Then H(X|Y) ≤ h(ε) + ε log(|X| − 1), where h(x) = −x log x − (1 − x) log(1 − x) for 0 < x < 1 and h(0) = h(1) = 0.
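A short sketch evaluating the Fano bound (the alphabet size and error probability are arbitrary choices for illustration):

```python
import numpy as np

def h(x):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def fano_bound(eps, alphabet_size):
    """Fano upper bound on H(X|Y): h(eps) + eps * log2(|X| - 1)."""
    return h(eps) + eps * np.log2(alphabet_size - 1)

# e.g. |X| = 4 and error probability eps = 0.1
print(f"{fano_bound(0.1, 4):.3f} bits")    # h(0.1) + 0.1*log2(3) ≈ 0.627
```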

33 Motivation 1
• This upper bound on H(X|Y) is not tight.
• For fixed P_X and ε, one can always find a joint distribution P_XY for which the inequality is strict.
• Then we can ask: for fixed P_X and ε, what is the maximum possible H(X|Y)?

34 Motivation 2
• If X is countably infinite, Fano's inequality no longer gives an upper bound on H(X|Y).
• It is possible that the error probability ε_n → 0 while H(X_n|Y_n) does not tend to 0, which can be explained by the discontinuity of entropy.
• For example, one can construct (X_n, Y_n) with H(X_n|Y_n) = H(X_n) → ∞ but ε_n → 0.
• Under what conditions does ε_n → 0 imply H(X_n|Y_n) → 0 for countably infinite alphabets?
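A hedged numerical sketch of this point (my own construction, not the slide's): take Y constant and let X equal Y with probability 1 − δ, otherwise uniform over M other symbols. Then Pr{X ≠ Y} = δ stays fixed while H(X|Y) = h(δ) + δ log₂ M grows without bound as M increases, so no bound in terms of the error probability alone can hold once the alphabet is unbounded.

```python
import numpy as np

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

delta = 0.01                                  # error probability Pr{X != Y}
for M in [10, 10**3, 10**6, 10**9]:
    H_X_given_Y = h(delta) + delta * np.log2(M)
    print(f"M = {M:>10d}   Pr{{X != Y}} = {delta}   H(X|Y) = {H_X_given_Y:.3f} bits")
```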

35 Tight Upper Bound on H(X|Y)
• Theorem 9: Suppose Pr{X ≠ Y} ≤ ε; then H(X|Y) is upper-bounded by a function of ε and P_X alone, and this bound is tight. (This is the simplest of the 3 cases.)

36 Generalizing Fano's Inequality
• Fano's inequality [Fano 1952] gives an upper bound on the conditional entropy H(X|Y) in terms of the error probability ε = Pr{X ≠ Y}.
• e.g., P_X = [0.4, 0.4, 0.1, 0.1].
• (Figure) Plot of upper bounds on H(X|Y) versus ε: the [Fano 1952] bound and the tight bound of [Ho & Verdú 2008].

37 Generalizing Fano's Inequality
• e.g., X is a Poisson random variable with mean 10.
• Fano's inequality no longer gives an upper bound on H(X|Y).
• (Figure) Plot of H(X|Y) versus ε.

38 Generalizing Fano's Inequality
• e.g., X is a Poisson random variable with mean 10.
• Fano's inequality no longer gives an upper bound on H(X|Y).
• (Figure) Plot of H(X|Y) versus ε, with the tight bound of [Ho & Verdú 2008].

39 Joint Source-Channel Coding
(Diagram) Encoder → Channel → Decoder: a k-to-n joint source-channel code.

40 Error Probabilities
• The average symbol error probability is defined as λ_k = (1/k) Σ_{i=1}^{k} Pr{S_i ≠ Ŝ_i}.
• The block error probability is defined as ε_k = Pr{(S_1, ..., S_k) ≠ (Ŝ_1, ..., Ŝ_k)}.
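A small sketch of the two criteria (the symbol names S_i, Ŝ_i follow the slide; the sample data is mine): the symbol error rate averages per-position errors, while the block error indicator flags any mismatch in the whole block.

```python
import numpy as np

def symbol_error_rate(s, s_hat):
    """Average symbol error: (1/k) * sum_i 1{S_i != S_hat_i}."""
    s, s_hat = np.asarray(s), np.asarray(s_hat)
    return float(np.mean(s != s_hat))

def block_error(s, s_hat):
    """Block error indicator: 1 if the reconstructed block differs anywhere."""
    return float(np.any(np.asarray(s) != np.asarray(s_hat)))

s     = [0, 1, 1, 0, 1, 0, 0, 1]
s_hat = [0, 1, 0, 0, 1, 0, 0, 1]      # one symbol wrong out of k = 8
print(symbol_error_rate(s, s_hat))    # 0.125
print(block_error(s, s_hat))          # 1.0
```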

41 Symbol Error Rate
• Theorem 10: For any discrete memoryless source and general channel, the rate of a k-to-n joint source-channel code with symbol error probability λ_k satisfies a bound stated in terms of a random variable S* constructed from {S_1, S_2, ..., S_k}.

42 Block Error Rate
• Theorem 11: For any general discrete source and general channel, the block error probability ε_k of a k-to-n joint source-channel code is lower bounded in terms of the source, the channel and the code length.

43 Information Theoretic Security
• Weak secrecy has been considered in [Csiszár & Körner 78, broadcast channel] and other seminal papers.
• [Wyner 75, Wiretap Channel I] only stated that "a large value of the equivocation implies a large value of P_ew", where the equivocation refers to the conditional entropy of the source given the wiretapper's observation and P_ew is the wiretapper's error probability.
• It is important to clarify what exactly weak secrecy implies.

44 Weak Secrecy
• E.g., P_X = (0.4, 0.4, 0.1, 0.1).
• (Figure) Plot of H(X|Y) versus ε = P[X ≠ Y], showing the [Fano 1952] bound, the [Ho & Verdú 2008] bound, and the level H(X).

45 Weak Secrecy
• Theorem 12: For any discrete stationary memoryless source (i.i.d. source) with distribution P_X, if the weak secrecy condition holds, then the wiretapper's error probability tends to its maximum value.
• Remark: Weak secrecy together with the stationary source assumption alone is insufficient to show the maximum error probability.
• The proof is based on the tight upper bound on H(X|Y) in terms of the error probability.

46 Summary
(Diagram) Shannon's Information Measures → Typicality (Weak Typicality, Strong Typicality) and Fano's Inequality → Applications: channel coding theorem, lossless/lossy source coding theorems.

47 On Countably Infinite Alphabets
(Diagram) Shannon's Information Measures (discontinuous!) → Typicality (Weak Typicality only) → Applications: channel coding theorem, lossless/lossy source coding theorem.

48 Unified Typicality
(Diagram) Shannon's Information Measures → Typicality (Unified Typicality) → Applications: channel coding theorem, MSNC/lossy SC theorems.

49 Unified Typicality and Generalized Fano's Inequality
(Diagram) Shannon's Information Measures → Typicality and Generalized Fano's Inequality → Applications: MSNC/lossy SC theorems, results on JSCC and IT security.

50 A lot of fundamental research in information theory is still waiting for us to investigate. Perhaps...

51 References
• S.-W. Ho and R. W. Yeung, "On the Discontinuity of the Shannon Information Measures," IEEE Trans. Inform. Theory, vol. 55, no. 12, pp. 5362–5374, Dec. 2009.
• S.-W. Ho and R. W. Yeung, "On Information Divergence Measures and a Unified Typicality," IEEE Trans. Inform. Theory, vol. 56, no. 12, pp. 5893–5905, Dec. 2010.
• S.-W. Ho and S. Verdú, "On the Interplay between Conditional Entropy and Error Probability," IEEE Trans. Inform. Theory, vol. 56, no. 12, pp. 5930–5942, Dec. 2010.
• S.-W. Ho, "On the Interplay between Shannon's Information Measures and Reliability Criteria," in Proc. 2009 IEEE Int. Symposium Inform. Theory (ISIT 2009), Seoul, Korea, June 28-July 3, 2009.
• S.-W. Ho, "Bounds on the Rates of Joint Source-Channel Codes for General Sources and Channels," in Proc. 2009 IEEE Inform. Theory Workshop (ITW 2009), Taormina, Italy, Oct. 11-16, 2009.

52 Q & A

53 Why Countably Infinite Alphabets?
• An important mathematical theory can provide insights that cannot be obtained by other means.
• Many problems involve random variables taking values from countably infinite alphabets.
• The finite alphabet is a special case.
• Benefits: tighter bounds, faster convergence rates, etc.
• In source coding, the alphabet size can be very large, infinite or unknown.

54 Discontinuity of Entropy
• Entropy is a measure of uncertainty.
• We can become more and more sure that a particular event will happen as time goes on, yet at the same time the uncertainty of the whole picture keeps increasing.
• If this statement seems counter-intuitive, it may be because the idea that entropy is continuous is deeply rooted in one's mind.
• The limiting probability distribution may not fully characterize the asymptotic behavior of a Markov chain.

55 Discontinuity of Entropy
Suppose a child hides in a shopping mall whose floor plan is shown on the next slide. In each case, the chance that he hides in a room is directly proportional to the size of the room. We are only interested in which room the child is in, not his exact position inside the room. In which case do you expect it to be easiest to locate the child?

56
Case A: 1 blue room + 2 green rooms
Case B: 1 blue room + 16 green rooms
Case C: 1 blue room + 256 green rooms
Case D: 1 blue room + 4096 green rooms

                               Case A    Case B    Case C     Case D
The chance in the blue room    0.5       0.622     0.698      0.742
The chance in a green room     0.25      0.0326    0.00118    0.000063

57 Discontinuity of Entropy
From Case A to Case D, the difficulty increases. By the Shannon entropy, the uncertainty is increasing even though the probability of the child being in the blue room is also increasing. We can continue this construction and make the chance of being in the blue room approach 1! The critical assumption is that the number of rooms can be unbounded. So "there is a very sure event" and "large uncertainty about the whole picture" can exist at the same time. Imagine a city where everyone has a normal life each day with probability 0.99; with probability 0.01, however, any kind of accident beyond our imagination can happen. Would you feel great uncertainty about your life if you were living in that city?
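A quick computation backing this up (my own sketch; it assumes, consistently with the previous slide, that the non-blue probability is split equally among the green rooms):

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, float)
    return float(-(p * np.log2(p)).sum())

# (probability of the blue room, number of equal-sized green rooms)
cases = {"A": (0.5, 2), "B": (0.622, 16), "C": (0.698, 256), "D": (0.742, 4096)}

for name, (blue, n_green) in cases.items():
    green = (1 - blue) / n_green              # remaining mass split equally
    H = entropy_bits([blue] + [green] * n_green)
    print(f"Case {name}: P(blue room) = {blue:.3f}   H = {H:.2f} bits")
```

The entropy increases from Case A to Case D even though the single most likely room becomes ever more likely, matching the slide's point.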

58 Weak secrecy is insufficient to establish the maximum error probability.
• Example 1: Let W, V and the X_i be binary random variables. Suppose W and V are independent and uniform, and define the X_i in terms of W and V.

59 Example 1
Let the Y_j be defined so that each Y_j equals the X_i's listed under it:
Y_1 = X_1 = X_2 = X_5 = X_10
Y_2 = X_4 = X_3 = X_6 = X_11
Y_3 = X_9 = X_8 = X_7 = X_12
Y_4 = X_16 = X_15 = X_14 = X_13
...

60 Joint Unified Typicality
(Figure) Example distributions Q = {q(x, y)} and P = {p(x, y)} on X × Y with D(Q||P) << 1, used to examine whether the unified joint typicality condition can be simplified.

61 Joint Unified Typicality
(Figure) A second pair of example distributions Q = {q(x, y)} and P = {p(x, y)} on X × Y with D(Q||P) << 1, continuing the same question.

62 Asymptotic Equipartition Property
• Theorem 5 (Consistency): For any (x, y) ∈ X^n × Y^n, if (x, y) ∈ U^n_{[XY]η}, then x ∈ U^n_{[X]η} and y ∈ U^n_{[Y]η}.
• Theorem 6 (Unified JAEP): For any η > 0:
• 1) If (x, y) ∈ U^n_{[XY]η}, then 2^{−n(H(X,Y)+η)} ≤ p(x, y) ≤ 2^{−n(H(X,Y)−η)}.
• 2) For sufficiently large n, Pr{(X, Y) ∈ U^n_{[XY]η}} > 1 − η.
• 3) For sufficiently large n, (1 − η) 2^{n(H(X,Y)−η)} ≤ |U^n_{[XY]η}| ≤ 2^{n(H(X,Y)+η)}.

