Shannon's theory, part II. Ref.: Cryptography: Theory and Practice, Douglas R. Stinson.


1 Shannon's theory, part II. Ref.: Cryptography: Theory and Practice, Douglas R. Stinson

2 Shannon's theory 1949, "Communication Theory of Secrecy Systems", Bell System Technical Journal. Two issues: What is the concept of perfect secrecy? Does any cryptosystem provide perfect secrecy? (It is possible when a key is used for only one encryption.) How do we evaluate a cryptosystem when many plaintexts are encrypted under the same key?

3 Perfect secrecy Definition: A cryptosystem has perfect secrecy if Pr[x | y] = Pr[x] for all x ∈ P, y ∈ C. Idea: Oscar can obtain no information about the plaintext by observing the ciphertext. (Alice sends Bob the ciphertext y of a plaintext x while Oscar observes y.) Coin example with Pr[Head] = 1/2, Pr[Tail] = 1/2: Case 1: Pr[Head | y] = 1/2, Pr[Tail | y] = 1/2, so y reveals nothing. Case 2: Pr[Head | y] = 1, Pr[Tail | y] = 0, so y reveals the plaintext completely.
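
A minimal numeric sketch (not from the slides): it checks Pr[x | y] = Pr[x] for a shift cipher over Z_5 with a uniformly random one-time key and an arbitrary plaintext distribution, which is exactly the perfect-secrecy condition above. The alphabet size and plaintext probabilities are illustrative assumptions.

```python
from fractions import Fraction

# Shift cipher over Z_m with a uniformly chosen key: e_K(x) = (x + K) mod m.
m = 5
pr_x = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8),
        3: Fraction(1, 16), 4: Fraction(1, 16)}         # arbitrary plaintext distribution
pr_k = {k: Fraction(1, m) for k in range(m)}             # uniform key distribution

# Joint distribution of (plaintext, ciphertext).
pr_xy = {(x, (x + k) % m): pr_x[x] * pr_k[k] for x in range(m) for k in range(m)}
pr_y = {y: sum(p for (x, yy), p in pr_xy.items() if yy == y) for y in range(m)}

# Perfect secrecy: Pr[x | y] = Pr[x] for every x and y.
for x in range(m):
    for y in range(m):
        assert pr_xy[(x, y)] / pr_y[y] == pr_x[x]
print("shift cipher with a uniform one-time key has perfect secrecy")
```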

4 Perfect secrecy when |K| = |C| = |P| Let (P, C, K, E, D) be a cryptosystem with |K| = |C| = |P|. The cryptosystem provides perfect secrecy iff every key is used with equal probability 1/|K| and, for every x ∈ P and y ∈ C, there is a unique key K such that e_K(x) = y. Ex. One-time pad in Z_2: plaintext 010 with key 101 gives ciphertext 111, and plaintext 111 with key 000 also gives ciphertext 111 — any plaintext can produce a given ciphertext under exactly one key.
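
A minimal one-time pad sketch over Z_2 (bit strings), matching the example above: the ciphertext is the bitwise XOR of plaintext and key, and the same operation decrypts.

```python
import secrets

def otp_encrypt(plaintext_bits: str, key_bits: str) -> str:
    """One-time pad over Z_2: ciphertext = plaintext XOR key, bit by bit."""
    assert len(plaintext_bits) == len(key_bits)
    return "".join(str(int(p) ^ int(k)) for p, k in zip(plaintext_bits, key_bits))

# Decryption is the same XOR, since (x ^ k) ^ k = x.
otp_decrypt = otp_encrypt

print(otp_encrypt("010", "101"))   # -> 111
print(otp_encrypt("111", "000"))   # -> 111, the same ciphertext from a different plaintext

# A fresh, uniformly random key for each message is what gives perfect secrecy.
key = "".join(str(secrets.randbelow(2)) for _ in range(3))
```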

5 Outline Introduction One-time pad Elementary probability theory Perfect secrecy Entropy Properties of entropy Spurious keys and unicity distance Product system

6 Preview (1) We want to know: the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough computing time. Setting: a plaintext string x^n is encrypted under a key K into a ciphertext string y^n.

7 Preview (2) That is, we want to know how much uncertainty about the key remains after the ciphertext is observed, i.e. the conditional entropy H(K | C^n). We need the tools of entropy.

8 Entropy (1) Suppose we have a discrete random variable X. What is the information gained by the outcome of an experiment? Ex. Let X represent the toss of a fair coin, Pr[head] = Pr[tail] = 1/2. We could encode head as 1 and tail as 0, so one coin toss carries 1 bit of information.

9 Entropy (2) Ex. Random variable X with Pr[x_1] = 1/2, Pr[x_2] = 1/4, Pr[x_3] = 1/4. The most efficient encoding is x_1 as 0, x_2 as 10, x_3 as 11: the lower the probability of an outcome, the greater its uncertainty (information) and the longer its codeword.

10 Entropy (3) Notice: an outcome of probability 2^{-n} is encoded with n bits; in general, probability p corresponds to -log_2 p bits. Ex. (cont.) The average number of bits to encode X is (1/2)(1) + (1/4)(2) + (1/4)(2) = 1.5 bits.

11 Entropy: definition Suppose X is a discrete random variable which takes on values from a finite set X. The entropy of the random variable X is defined as H(X) = -Σ_{x ∈ X} Pr[x] log_2 Pr[x] (terms with Pr[x] = 0 are omitted).
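
A small helper, not from the slides, that computes this definition directly; the same idea is reused in the later numeric checks.

```python
import math

def entropy(probs):
    """H(X) = -sum p * log2(p), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))   # 1.5, matching the encoding example above
print(entropy([0.5, 0.5]))          # 1.0 bit for a fair coin
```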

12 Entropy: example Let P = {a, b} with Pr[a] = 1/4, Pr[b] = 3/4, and K = {K_1, K_2, K_3} with Pr[K_1] = 1/2, Pr[K_2] = Pr[K_3] = 1/4. Encryption matrix (rows are keys, columns are plaintexts):
        a   b
K_1     1   2
K_2     2   3
K_3     3   4
Then H(P) ≈ 0.81, H(K) = 1.5, H(C) ≈ 1.85.
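
A quick numeric check of the example's entropies: the ciphertext distribution is obtained by summing Pr[x] · Pr[K] over all (K, x) pairs that map to each ciphertext symbol.

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

pr_p = {"a": 0.25, "b": 0.75}
pr_k = {"K1": 0.5, "K2": 0.25, "K3": 0.25}
enc = {("K1", "a"): 1, ("K1", "b"): 2,
       ("K2", "a"): 2, ("K2", "b"): 3,
       ("K3", "a"): 3, ("K3", "b"): 4}

# Marginal distribution of the ciphertext.
pr_c = defaultdict(float)
for (k, x), y in enc.items():
    pr_c[y] += pr_k[k] * pr_p[x]

print(entropy(pr_p.values()))   # H(P) ~ 0.81
print(entropy(pr_k.values()))   # H(K) = 1.5
print(entropy(pr_c.values()))   # H(C) ~ 1.85
```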

13 Properties of entropy (1) Def: A real-valued function f is strictly concave on an interval I if f((x + y)/2) > (f(x) + f(y))/2 for all x, y ∈ I with x ≠ y (the chord joining any two points of the graph lies below the graph).

14 Properties of entropy (2) Jensen's inequality: Suppose f is a continuous strictly concave function on I, a_1, ..., a_n > 0, and a_1 + ... + a_n = 1. Then Σ_{i=1}^{n} a_i f(x_i) ≤ f(Σ_{i=1}^{n} a_i x_i) for all x_1, ..., x_n ∈ I, with equality iff x_1 = ... = x_n.

15 Properties of entropy (3) Theorem: Suppose X is a random variable with probability distribution p_1, p_2, ..., p_n, where p_i > 0 for 1 ≤ i ≤ n. Then H(X) ≤ log_2 n, with equality iff p_i = 1/n for all i. * The uniform random variable has the maximum entropy.
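
A quick numeric illustration (not from the slides): the uniform distribution on n outcomes attains log_2 n, and a skewed distribution falls below it. The skewed distribution shown is arbitrary.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
print(math.log2(n))                  # 3.0
print(entropy([1 / n] * n))          # 3.0: uniform attains the bound
print(entropy([0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03]))   # strictly less than 3.0
```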

16 Properties of entropy (4) Proof: Since log_2 is strictly concave, Jensen's inequality gives H(X) = Σ_{i=1}^{n} p_i log_2 (1/p_i) ≤ log_2 (Σ_{i=1}^{n} p_i · (1/p_i)) = log_2 n. Equality holds iff 1/p_1 = ... = 1/p_n, i.e. p_i = 1/n for all i.

17 Entropy of a natural language (1) H_L: the average information per letter in English. 1. If the 26 letters were equally likely, the entropy per letter would be log_2 26 ≈ 4.70. 2. Taking single-letter frequencies into account, H(P) ≈ 4.19.

18 Entropy of a natural language (2) 3. However, successive letters are correlated, e.g. digrams and trigrams. Q: What is the entropy of two or more random variables?

19 Properties of entropy (5) Def: The joint entropy of X and Y is H(X, Y) = -Σ_x Σ_y Pr[x, y] log_2 Pr[x, y]. Theorem: H(X, Y) ≤ H(X) + H(Y), with equality iff X and Y are independent. Proof sketch: let p_i = Pr[X = x_i], q_j = Pr[Y = y_j], r_ij = Pr[X = x_i, Y = y_j]; then H(X) + H(Y) - H(X, Y) = -Σ_{i,j} r_ij log_2 (p_i q_j / r_ij) ≥ -log_2 Σ_{i,j} p_i q_j = 0 by Jensen's inequality.

20 Entropy of a natural language (3) 3. Let P^n be the random variable whose probability distribution is that of all n-grams of plaintext. Tabulation of digrams gives H(P^2)/2 ≈ 3.90; tabulation of trigrams gives H(P^3)/3, and so on. In general, H_L is estimated as the limit of H(P^n)/n as n grows, and empirically 1.0 ≤ H_L ≤ 1.5.

21 Entropy of a natural language (4) The redundancy of L is defined as R_L = 1 - H_L / log_2 |P|. Taking H_L = 1.25 gives R_L ≈ 0.75: the English language is about 75% redundant! This means English text can, in principle, be compressed to about one quarter of its original length.
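
A one-line check of the redundancy figure, assuming H_L = 1.25 bits per letter as in the slide.

```python
import math

H_L = 1.25                        # estimated entropy of English, bits per letter
R_L = 1 - H_L / math.log2(26)     # redundancy of the language
print(R_L)                        # ~ 0.73, i.e. roughly 75% redundant
```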

22 Conditional entropy For any fixed value y of Y, the remaining uncertainty about X is H(X | y) = -Σ_x Pr[x | y] log_2 Pr[x | y]. Conditional entropy: the average amount of uncertainty about X that remains after Y is observed, H(X | Y) = Σ_y Pr[y] H(X | y). Theorem: H(X, Y) = H(Y) + H(X | Y).

23 Theorem about H(K|C) (1) Let (P, C, K, E, D) be a cryptosystem. Then H(K | C) = H(K) + H(P) - H(C). Proof: H(K, P, C) = H(C | K, P) + H(K, P). Since the key and plaintext uniquely determine the ciphertext, H(C | K, P) = 0, so H(K, P, C) = H(K, P) = H(K) + H(P), because the key and plaintext are independent.

24 Theorem about H(K|C) (2) Similarly, the key and ciphertext uniquely determine the plaintext, so H(P | K, C) = 0 and therefore H(K, P, C) = H(P | K, C) + H(K, C) = H(K, C). We already have H(K, P, C) = H(K, P) = H(K) + H(P). Now H(K | C) = H(K, C) - H(C) = H(K, P, C) - H(C) = H(K) + H(P) - H(C).
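
A numeric check of this identity on the example from slide 12: H(K | C) is computed directly from the joint distribution of (K, C) and again via H(K) + H(P) - H(C).

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

pr_p = {"a": 0.25, "b": 0.75}
pr_k = {"K1": 0.5, "K2": 0.25, "K3": 0.25}
enc = {("K1", "a"): 1, ("K1", "b"): 2,
       ("K2", "a"): 2, ("K2", "b"): 3,
       ("K3", "a"): 3, ("K3", "b"): 4}

# Joint distribution of (key, ciphertext) and marginal of the ciphertext.
pr_kc = defaultdict(float)
pr_c = defaultdict(float)
for (k, x), y in enc.items():
    pr_kc[(k, y)] += pr_k[k] * pr_p[x]
    pr_c[y] += pr_k[k] * pr_p[x]

# H(K | C) computed directly: sum over y of Pr[y] * H(K | y).
h_k_given_c = sum(pr_c[y] * entropy([p / pr_c[y] for (k, yy), p in pr_kc.items() if yy == y])
                  for y in pr_c)

# H(K | C) via the theorem: H(K) + H(P) - H(C).
h_theorem = entropy(pr_k.values()) + entropy(pr_p.values()) - entropy(pr_c.values())

print(h_k_given_c, h_theorem)   # both ~ 0.46
```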

25 Results (1) Define the random variables P^n (the first n plaintext characters) and C^n (the corresponding n ciphertext characters) produced under key K. Then H(K | C^n) = H(K) + H(P^n) - H(C^n). Set |P| = |C|; using H(P^n) ≈ n H_L = n (1 - R_L) log_2 |P| and H(C^n) ≤ n log_2 |C|, we obtain H(K | C^n) ≥ H(K) - n R_L log_2 |P|.

26 Spurious keys (1) Ex. Oscar obtains the ciphertext WNAJW, which was encrypted using a shift cipher: K = 5 gives plaintext river, and K = 22 gives plaintext arena. One of these is the correct key and the other is spurious. Goal: prove a bound on the expected number of spurious keys.
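
A small brute-force sketch over all 26 shift-cipher keys for WNAJW; only the two keys mentioned above yield English words.

```python
ciphertext = "WNAJW"

def shift_decrypt(ct: str, key: int) -> str:
    """Shift cipher decryption over the 26-letter alphabet."""
    return "".join(chr((ord(c) - ord('A') - key) % 26 + ord('A')) for c in ct)

# Try every key; only k = 5 (RIVER) and k = 22 (ARENA) give English words.
for k in range(26):
    print(k, shift_decrypt(ciphertext, k))
```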

27 Spurious keys (2) Given y ∈ C^n, define the set of possible keys K(y) = {K ∈ K : there exists x ∈ P^n with Pr[x] > 0 and e_K(x) = y}. The number of spurious keys for y is |K(y)| - 1 (all possible keys except the correct one). The average number of spurious keys is s̄_n = Σ_{y ∈ C^n} Pr[y] (|K(y)| - 1) = Σ_{y ∈ C^n} Pr[y] |K(y)| - 1.

28 Relate H(K|C^n) to spurious keys (1) By definition, H(K | C^n) = Σ_{y ∈ C^n} Pr[y] H(K | y) ≤ Σ_{y ∈ C^n} Pr[y] log_2 |K(y)| ≤ log_2 Σ_{y ∈ C^n} Pr[y] |K(y)| = log_2 (s̄_n + 1), where the second inequality is Jensen's inequality.

29 Relate H(K|C^n) to spurious keys (2) We have derived H(K | C^n) ≥ H(K) - n R_L log_2 |P|. So, combining the two bounds, log_2 (s̄_n + 1) ≥ H(K) - n R_L log_2 |P|.

30 Relate H(K|C^n) to spurious keys (3) Theorem: Suppose |C| = |P| and keys are chosen equiprobably. Then the expected number of spurious keys satisfies s̄_n ≥ |K| / |P|^{n R_L} - 1. As n increases, the right-hand term |K| / |P|^{n R_L} tends to 0.

31 Relate H(K|C^n) to spurious keys (4) Setting s̄_n = 0 and solving for n gives the unicity distance n_0 ≈ log_2 |K| / (R_L log_2 |P|): the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough time. For the substitution cipher, |P| = |C| = 26 and |K| = 26!, so n_0 ≈ 88.4 / (0.75 × 4.7) ≈ 25.
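
A direct computation of the unicity distance formula for the substitution cipher, assuming R_L = 0.75 as above.

```python
import math

def unicity_distance(key_space_size: int, alphabet_size: int, redundancy: float) -> float:
    """n0 ~ log2(|K|) / (R_L * log2(|P|))."""
    return math.log2(key_space_size) / (redundancy * math.log2(alphabet_size))

# Substitution cipher: |P| = |C| = 26, |K| = 26!, R_L = 0.75.
print(unicity_distance(math.factorial(26), 26, 0.75))   # ~ 25 ciphertext characters
```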

32 Product cryptosystem Let S_1 = (P, P, K_1, E_1, D_1) and S_2 = (P, P, K_2, E_2, D_2) be two cryptosystems with the same plaintext and ciphertext space. Their product is S_1 × S_2 = (P, P, K_1 × K_2, E, D). Encryption: e_(K_1,K_2)(x) = e_K_2(e_K_1(x)). Decryption: d_(K_1,K_2)(y) = d_K_1(d_K_2(y)).
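
A minimal sketch of the product construction using two toy ciphers over Z_26 (a shift cipher and a multiplicative cipher, chosen here only for illustration): encryption applies the first cipher and then the second, and decryption undoes them in reverse order.

```python
# Toy product cryptosystem over Z_26: S1 is a shift cipher, S2 is a multiplicative cipher.
M = 26

def shift_enc(x, k):   return (x + k) % M
def shift_dec(y, k):   return (y - k) % M
def mult_enc(x, k):    return (x * k) % M               # k must be coprime with 26
def mult_dec(y, k):    return (y * pow(k, -1, M)) % M   # multiply by the inverse of k mod 26

def product_enc(x, key):
    k1, k2 = key
    return mult_enc(shift_enc(x, k1), k2)               # e_(k1,k2)(x) = e_k2(e_k1(x))

def product_dec(y, key):
    k1, k2 = key
    return shift_dec(mult_dec(y, k2), k1)                # d_(k1,k2)(y) = d_k1(d_k2(y))

key = (3, 7)                                             # (shift key, multiplicative key)
for x in range(M):
    assert product_dec(product_enc(x, key), key) == x
print("product cryptosystem decrypts correctly")
```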

33 Product cryptosystem (cont.) Two cryptosystems S and M commute if S × M = M × S. A cryptosystem S is idempotent if S^2 = S, i.e. composing it with itself gives nothing new; the shift cipher is an example. If a cryptosystem is not idempotent, then there is a potential increase in security by iterating it several times.

34 How to find a non-idempotent cryptosystem? Thm: If S and M are both idempotent and they commute, then S × M is also idempotent: (S × M) × (S × M) = S × (M × S) × M = S × (S × M) × M = (S × S) × (M × M) = S × M. Idea: find simple S and M that do not commute; then S × M is possibly non-idempotent.

