Perfect and Statistical Secrecy, probabilistic algorithms, Definitions of Easy and Hard, 1-Way FN -- formal definition

1 Perfect and Statistical Secrecy, probabilistic algorithms, Definitions of Easy and Hard, 1-Way FN -- formal definition

2 Private-key Encryption, revisited
Key-generation algorithm tosses coins → key k
ENC_k(m, r) → c
DEC_k'(c') → m'
Correctness: for all k, r, m: DEC_k(ENC_k(m, r)) = m

3 Perfect secrecy, revisited
An encryption scheme is perfectly secure if for every m1 and m2 and every c:
Pr[ENC_k(m1) = c] = Pr[ENC_k(m2) = c]
where the probability is over the key k from key generation and the coin tosses of ENC.
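As a concrete sanity check (not on the slide), here is a minimal Python sketch verifying perfect secrecy for the one-bit one-time pad by computing the exact ciphertext distribution for each message:

```python
from fractions import Fraction
from itertools import product

# One-bit one-time pad: ENC_k(m) = m XOR k, with a uniformly random key k.
def cipher_dist(m):
    """Exact distribution of ENC_k(m) over the uniform key k in {0, 1}."""
    dist = {0: Fraction(0), 1: Fraction(0)}
    for k in (0, 1):
        dist[k ^ m] += Fraction(1, 2)
    return dist

# Perfect secrecy: the ciphertext distribution is identical for every message.
for m1, m2 in product((0, 1), repeat=2):
    assert cipher_dist(m1) == cipher_dist(m2)
print("one-bit OTP: Pr[ENC_k(m1)=c] = Pr[ENC_k(m2)=c] for all m1, m2, c")
```

The same exhaustive check works for the n-bit one-time pad, at the cost of enumerating all 2^n keys.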

4 Shannon's secrecy
Let D be a distribution on M. An encryption scheme satisfies Shannon's secrecy with respect to D if for every m in the support of D and every c:
Pr[D = m | ENC_k(D) = c] = Pr[D = m]
where the probability is taken over D, key generation, and the coin tosses of ENC.

5 Claim: perfect secrecy holds if and only if Shannon's secrecy holds
We prove that perfect secrecy implies Shannon's secrecy; the converse is similar (prove it!).

6 Proof
Proof: recall Bayes' law. Let E and F be events:
Pr[E | F] = Pr[F | E] · Pr[E] / Pr[F]
Applying it here:
Pr[D = m | ENC_k(D) = c] = Pr[ENC_k(D) = c | D = m] · Pr[D = m] / Pr[ENC_k(D) = c]

7 Proof (cont.)
So we need to show that:
Pr[ENC_k(D) = c | D = m] = Pr[ENC_k(D) = c]
In other words, we need to show that:
Pr[ENC_k(m) = c] = Pr[ENC_k(D) = c]

8 Proof (cont.)
Pr[ENC_k(D) = c]
= Σ_{m_i} Pr[ENC_k(m_i) = c] · Pr[D = m_i]
= Pr[ENC_k(m) = c] · Σ_{m_i} Pr[D = m_i]    (by perfect secrecy)
= Pr[ENC_k(m) = c] · 1.
QED.

9 CLAIM: If an encryption scheme is perfectly secure, then the number of keys is at least the size of the message space.
Proof: consider a ciphertext c for some message m. By perfect secrecy, c could be an encryption of every message: for every m', there exists a key k' under which c decrypts to m'. Thus c is decryptable to any message, using different keys. But the number of distinct decryptions of c is at most the number of keys.

10 This is bad news – we need to relax the notion of secrecy to be able to use smaller keys How do we relax the notion?

11 A first attempt
We will relax the notion of perfect secrecy. What about "statistical" security?
Let X and Y be random variables over a finite set S. X and Y are called ε-close if for every event T ⊆ S:
|Pr[X ∈ T] − Pr[Y ∈ T]| ≤ ε
T is called a statistical test.
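For finite distributions the ε-closeness above can be computed exactly: the maximum over all tests T of |Pr[X ∈ T] − Pr[Y ∈ T]| equals half the L1 distance between the two distributions (the maximizing T is the set of points where one distribution exceeds the other). A minimal sketch, with hypothetical example distributions:

```python
from fractions import Fraction

def statistical_distance(p, q):
    """Max over events T of |Pr[X in T] - Pr[Y in T]| for finite
    distributions given as {outcome: probability} dicts; this equals
    half the L1 distance between p and q."""
    support = set(p) | set(q)
    return sum(abs(p.get(s, Fraction(0)) - q.get(s, Fraction(0)))
               for s in support) / 2

# Two hypothetical distributions on {0, 1} that differ by 1/100 per point:
p = {0: Fraction(1, 2), 1: Fraction(1, 2)}
q = {0: Fraction(1, 2) + Fraction(1, 100), 1: Fraction(1, 2) - Fraction(1, 100)}
print(statistical_distance(p, q))  # -> 1/100, i.e. p and q are (1/100)-close
```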

12 Statistical Secrecy
DEFINITION: An encryption scheme is statistically ε-secure if for every two messages m1 and m2, the random variables (over the choice of key) ENC_k(m1) and ENC_k(m2) are ε-close.
Still cannot go much below Shannon's bound (HW: show this!)

13 Computational assumptions
Probabilistic TM definition: a TM with an additional RANDOM tape, which is read-once.
P – languages that can be decided in time n^c for some c.
NP – languages that can be verified: given a poly-size witness w, we can check in time n^c. That is, there exists a relation R(x, w) that is in P.

14 For us, every algorithm is public (no "security by obscurity")
What do we have as secrets? (randomness!)
Hence the need to define PPT (probabilistic polynomial time).

15 How do we deal with coin tosses?
RP ("randomized poly") – languages recognizable in polynomial time with 1-sided error:
∀x ∈ L: Pr[M(x) accepts] > 1/2
∀x ∉ L: Pr[M(x) accepts] = 0

16 Dealing with RP
FACT: P ⊆ RP ⊆ NP. Why?
First part: can ignore the coins.
Second part: can "guess" the correct coins.

17 Dealing with RP: amplification
Let ε(n) = 1/poly(n). Suppose we are given a machine M s.t.:
∀x ∈ L: Pr[M(x) accepts] > ε(n)
∀x ∉ L: Pr[M(x) accepts] = 0

18 How do we amplify? Design a new machine M' that runs machine M k times with fresh coin flips each time. M' accepts iff any run of M accepts; otherwise M' rejects. What is the probability of acceptance of M' for x in L?
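A quick numerical sketch of this amplification (the values of ε and k below are hypothetical): M' rejects an x in L only if all k independent runs reject, which happens with probability at most (1 − ε)^k ≤ e^{−εk}:

```python
import math

def amplify_reject_prob(eps, k):
    """Probability that all k independent runs of M reject an x in L,
    in the worst case where each run accepts with probability exactly eps."""
    return (1 - eps) ** k

eps = 1 / 100        # eps(n) = 1/poly(n); hypothetical value
k = 20 * 100         # run M k = 20/eps times
reject = amplify_reject_prob(eps, k)

# The slide's bound: (1 - eps)^k <= e^{-eps*k}, here e^{-20}
assert reject <= math.exp(-eps * k)
print(f"Pr[M' rejects x in L] <= {reject:.3e}")
```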

19 Computing the probability of success of M'
M errs on x ∈ L with probability at most 1 − ε(n). Recall that e^x ≥ 1 + x, thus e^{−ε(n)} ≥ 1 − ε(n).
Pr[M'(x ∈ L) rejects] ≤ (Pr[M(x ∈ L) rejects])^k ≤ (1 − ε(n))^k ≤ e^{−ε(n)k}
So, make k sufficiently big!

20 2-sided error
BPP ("bounded-error probabilistic poly-time") – languages recognizable in polynomial time with 2-sided error:
∀x ∈ L: Pr[M(x) accepts] ≥ 2/3
∀x ∉ L: Pr[M(x) accepts] ≤ 1/3

21 BPP Amplification
We wish to design a new machine M' s.t.:
∀x ∈ L: Pr[M'(x) accepts] ≥ 1 − 2^{−p(n)}
∀x ∉ L: Pr[M'(x) accepts] ≤ 2^{−p(n)}

22 What do we do? Machine M’ runs machine M many times with independent coin-flips each time, and takes a MAJORITY vote. If M’ runs M k times, what is the probability that M’ makes a mistake?

23 Chernoff Bound
Let X_1 … X_m be independent 0/1 random variables with the same distribution, and let E = E[Σ_i X_i]. Then:
Pr[Σ_{i=1}^{m} X_i ≥ (1 + β)E] ≤ e^{−β²E/2}

24 Back to BPP
Let X_i = 1 if M makes a mistake on the i-th run.
Pr[X_i = 1] ≤ 1/3 (by def of BPP)
Let's be pessimistic and take it equal to 1/3, so E = k/3.

25 Plugging in
With a majority vote we fail if Σ X_i ≥ k/2. Take β = 1/2 and plug into the Chernoff bound:
Pr[Σ X_i ≥ (3/2)·(k/3)] ≤ e^{−(1/8)·(k/3)} = e^{−k/24}
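As a sanity check (not on the slide), the exact majority-vote error can be computed from the binomial distribution and compared against the e^{−k/24} bound, for a few hypothetical odd values of k:

```python
import math

def majority_error(k, p_err=1/3):
    """Exact probability that at least k/2 of k independent runs err,
    each erring with probability p_err (the BPP worst case)."""
    return sum(math.comb(k, i) * p_err ** i * (1 - p_err) ** (k - i)
               for i in range((k + 1) // 2, k + 1))

for k in (25, 49, 97):
    exact = majority_error(k)
    chernoff = math.exp(-k / 24)   # the slide's e^{-(1/8)(k/3)} bound
    assert exact <= chernoff       # the bound is valid (and loose) here
    print(f"k={k}: exact={exact:.5f}, bound={chernoff:.5f}")
```

The exact error is far below the bound, which is the usual situation with Chernoff-type estimates; the bound's value is that it decays exponentially in k.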

26 What do we know?
P ⊆ RP ⊆ BPP
What about BPP vs NP?

27 Non-uniformity, called P/poly
Sometimes we need to talk about different input lengths separately (circuits, or a TM with an "advice" string for every input length).
SOME QUESTIONS: Does it belong to NP? How easy is it? Does it need coin flips?

28 P/poly
Does P/poly belong to NP? Attempt: guess an advice string which makes x accept, and if there is such an advice, accept. Why does it not work?

29 P/poly (cont.)
How powerful is this class? It includes undecidable languages. Consider some standard enumeration of TMs, and define the poly-size advice for input length i to encode the decidability of the i-th machine.

30 Do we need randomness for P/poly?
Adleman: BPP ⊆ P/poly

31 Proof of Adleman's thm.
We assume (after amplification) a BPP machine M s.t.:
∀x ∈ L: Pr[M(x) accepts] ≥ 1 − 2^{−(n+1)}
∀x ∉ L: Pr[M(x) accepts] ≤ 2^{−(n+1)}

32 Proof that we don't need coins
Pr[M makes an error] ≤ 2^{−(n+1)}
We want to show that there always exists a random string r(n) that works for all strings of length n. How do we show it?

33 Proof (cont.)
M uses r(n) random bits. Consider a table with a row for every input x in {0,1}^n (rows x_1 … x_{2^n}) and a column for every random string r in {0,1}^{r(n)} (columns r_1 … r_{2^{r(n)}}). Entry (x, r) is 1 if M makes a mistake on x using randomness r, and 0 if M is correct.
If there is a column which is good (all 0's) for all x, we are done!

34 Proof – the punch line
Number of 1's ≤ (number of rows) · (max number of 1's per row) ≤ 2^n · (2^{−(n+1)} · 2^{r(n)})
Thus, the average number of 1's per column is at most
2^n · (2^{−(n+1)} · 2^{r(n)}) / 2^{r(n)} = 1/2
Since the average is below 1, some column must have all 0's.
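The counting argument can be sketched numerically (the toy sizes n and r(n) below are hypothetical): give each row at most a 2^{−(n+1)} fraction of 1's, observe the average number of 1's per column is 1/2 < 1, and conclude an all-zero column must exist:

```python
import random

random.seed(0)
n, rn = 4, 6                     # toy sizes: 2^n inputs, 2^rn random strings
rows, cols = 2 ** n, 2 ** rn
ones_per_row = int(2 ** -(n + 1) * cols)   # at most a 2^{-(n+1)} fraction per row

# Adversarially scatter the allowed mistakes (1's) across each row:
table = [[0] * cols for _ in range(rows)]
for row in table:
    for j in random.sample(range(cols), ones_per_row):
        row[j] = 1

avg_ones_per_col = sum(map(sum, table)) / cols
assert avg_ones_per_col <= 1 / 2   # 2^n * 2^{-(n+1)} * 2^{rn} / 2^{rn} = 1/2
good = [j for j in range(cols) if all(row[j] == 0 for row in table)]
assert good                        # average < 1 forces some all-zero column
print(f"{len(good)} all-zero columns out of {cols}")
```

Note the conclusion is deterministic: no matter where the 1's land, an average below 1 guarantees an integer column count of 0 somewhere.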

35 Terminology
For us, easy means polynomial time with coin flips.
Fast → poly-time
Probably fast → expected poly-time

36 Terminology (cont.)
Las Vegas → always correct, probably fast
Monte Carlo → always fast, probably correct [1-sided error (i.e. RP or co-RP)]
Atlantic City → always fast, probably correct [2-sided error (i.e. BPP)]

37 Back to Hardness
For us, BPP is easy. But in BPP we can amplify the decision of any language that is bounded away from one half by 1/poly. So we must define "hard" languages that cannot be decided even with a 1/poly advantage. So, how do we define "less" than 1/poly?

38 Examples of less than 1/poly
g(n) = 1/n^{log n}
g(n) = 100/2^n

39 Try 1 – "negligible function"
∀c ∀n: g(n) < 1/n^c

40 Try 1
∀c ∀n: g(n) < 1/n^c
Not good. Consider: g(n) = 100/2^n and f(n) = 1/n^5. At n = 2, g(2) = 25 while f(2) = 1/32, so the "negligible" g fails the condition at small n.

41 Try 2
After n gets sufficiently big, it works!
∀c, for all n after n gets big: g(n) < 1/n^c

42 Formally, now
g(·) is negligible if:
∀c > 0 ∃N_c ∀n > N_c: g(n) < 1/n^c

43 Some facts
If g(n) is negligible then g(n)·poly(n) is still negligible! (HW: prove it!)
We sometimes call any ≥ 1/poly function "noticeable" (since we can amplify).
There are functions that are NEITHER negligible NOR noticeable. Then we need to deal with "families" of input lengths, and typically resort to non-uniform proofs.
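A numerical spot-check of these definitions and facts (not a proof, since negligibility is asymptotic; the thresholds N_c below were found by trial and are specific to these example functions):

```python
def is_small_after(g, c, N, hi=5000):
    """Spot-check g(n) < 1/n^c for all N < n <= hi.
    A finite check only: negligibility is an asymptotic statement."""
    return all(g(n) < 1 / n ** c for n in range(N + 1, hi + 1))

g = lambda n: 100 / 2 ** n            # negligible
f = lambda n: 1 / n ** 5              # noticeable, hence not negligible
g_poly = lambda n: n ** 3 * g(n)      # negligible times poly(n)

assert not g(2) < 1 / 2 ** 5          # "Try 1" fails: g(2) = 25 beats 1/32...
assert is_small_after(g, 5, N=35)     # ...but g(n) < 1/n^5 for all n past N_c
assert is_small_after(g_poly, 5, N=60)  # still negligible after multiplying by n^3
assert not is_small_after(f, 6, N=100)  # 1/n^5 is never below 1/n^6
print("spot-checks passed")
```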

