
1 Foundations of Cryptography Lecture 9 Lecturer: Moni Naor

2 Recap of lecture 8
– Tree signature scheme
– Proof of security of tree signature schemes
– Encryption

3 The encryption problem
Alice wants to send a message m ∈ {0,1}^n to Bob.
– The set-up phase is secret.
They want to prevent Eve from learning anything about the message.
[Figure: Alice sends m to Bob over a channel observed by Eve.]

4 The encryption problem
– Relevant both in the shared-key and in the public-key setting
– Want to use the key many times
– Also want to add authentication…
– Eve may also disrupt the communication in other ways

5 What does `learn' mean?
Whatever knowledge Eve has of m should remain the same:
– Probability of guessing m (the min-entropy of m)
– Probability of guessing whether m is m_0 or m_1
– Probability of computing some function f of m
Ideally: the message sent is independent of the message m
– Implies all of the above
Shannon: achievable only if the entropy of the shared secret is at least as large as the entropy of m
– If there is no special knowledge about m: |m| bits
Achievable with the one-time pad:
– Let r ∈_R {0,1}^n
– Think of r and m as elements in a group
– To encrypt m, send z = r + m
– To decrypt z, compute m = z − r
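As a concrete illustration (not from the slides), here is a minimal Python sketch of the one-time pad, taking the group to be bitwise XOR on byte strings, so that addition and subtraction coincide:

```python
import secrets

def otp_encrypt(m: bytes, r: bytes) -> bytes:
    # One-time pad over the group ({0,1}^n, XOR): z = r + m
    assert len(r) == len(m)
    return bytes(mi ^ ri for mi, ri in zip(m, r))

def otp_decrypt(z: bytes, r: bytes) -> bytes:
    # In this group -r = r, so decryption m = z - r is the same XOR
    return otp_encrypt(z, r)

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # fresh uniform pad, used only once
assert otp_decrypt(otp_encrypt(message, pad), pad) == message
```

Note that the pad must be uniform, as long as the message, and never reused; reusing r leaks z_1 XOR z_2 = m_1 XOR m_2.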

6 Pseudo-random generators
Would like to stretch a short secret (seed) into a long one.
The resulting long string should be usable in any case where a long string is needed
– In particular: as a one-time pad
Important notion: indistinguishability
Two probability distributions that cannot be distinguished
– Statistical indistinguishability: statistical distance between probability distributions
– New notion: computational indistinguishability

7 Pseudo-random generators
Definition: a function g: {0,1}* → {0,1}* is a (cryptographic) pseudo-random generator if
– It is polynomial-time computable
– It stretches the input: |g(x)| > |x|; denote by ℓ(n) the length of the output on inputs of length n
– If the input is random, the output is indistinguishable from random: for any probabilistic polynomial-time adversary A that receives an input y of length ℓ(n) and tries to decide whether y = g(x) or y is a random string from {0,1}^ℓ(n), for any polynomial p(n) and sufficiently large n
|Prob[A=`rand' | y=g(x)] − Prob[A=`rand' | y ∈_R {0,1}^ℓ(n)]| < 1/p(n)
Important issues:
– Why is the adversary bounded by polynomial time?
– Why is the indistinguishability not perfect?
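To make the definition concrete, here is a hypothetical sketch of the distinguishing experiment; the candidate generator g and adversary A are stand-in parameters, and byte strings replace bit strings for simplicity:

```python
import secrets
from typing import Callable

def advantage(g: Callable[[bytes], bytes], A: Callable[[bytes], bool],
              n: int, ell: int, trials: int = 10_000) -> float:
    # Empirical estimate of
    # |Pr[A says 'rand' | y = g(x)] - Pr[A says 'rand' | y uniform]|,
    # with seeds of n bytes and outputs of ell bytes.
    on_prg = sum(A(g(secrets.token_bytes(n))) for _ in range(trials))
    on_uni = sum(A(secrets.token_bytes(ell)) for _ in range(trials))
    return abs(on_prg - on_uni) / trials
```

g is pseudo-random exactly when every efficient A has advantage smaller than any inverse polynomial.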

8 Construction of pseudo-random generators
Idea: a one-way function hides a hard decision problem.
– If the answer is balanced enough, it looks random.
Such a problem is called a hardcore predicate.
Possibilities:
– Last bit
– First bit
– Inner product

9 Homework
Assume one-way functions exist.
– Show that the last bit/first bit are not necessarily hardcore predicates.
– Generalization: show that for any fixed function h: {0,1}* → {0,1} there is a one-way function f: {0,1}* → {0,1}* such that h is not a hardcore predicate of f.
– Show a one-way function f such that, given y = f(x), each input bit of x can be guessed with probability at least 3/4.

10 Hardcore predicate
Definition: let f: {0,1}* → {0,1}* be a function. We say that h: {0,1}* → {0,1} is a hardcore predicate for f if
– It is polynomial-time computable
– For any probabilistic polynomial-time adversary A that receives input y = f(x) and tries to compute h(x), for any polynomial p(n) and sufficiently large n
|Prob[A(y)=h(x)] − 1/2| < 1/p(n)
where the probability is over the choice of x (and hence y) and the random coins of A
Sources of hardcoreness:
– Not enough information about x: not of interest for generating pseudo-randomness
– Enough information about x, but it is computationally hard to extract
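A sketch of the corresponding guessing experiment; f, h and A are assumed callables supplied by the caller, not anything defined in the lecture:

```python
import secrets

def hardcore_advantage(f, h, A, n: int, trials: int = 10_000) -> float:
    # Estimate |Pr[A(f(x)) = h(x)] - 1/2| over random x (n bytes)
    # and over A's own coins.
    hits = 0
    for _ in range(trials):
        x = secrets.token_bytes(n)
        hits += (A(f(x)) == h(x))
    return abs(hits / trials - 0.5)
```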

11 Single-bit expansion
Let f: {0,1}^n → {0,1}^n be a one-way permutation and h: {0,1}^n → {0,1} a hardcore predicate for f.
Consider g: {0,1}^n → {0,1}^{n+1} where g(x) = (f(x), h(x)).
Claim: g is a pseudo-random generator.
Proof idea: a distinguisher for g can be used to guess h(x), since it must behave differently on (f(x), h(x)) and (f(x), 1−h(x)).
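In code the construction is one line; f and h are assumed parameters, a one-way permutation and its hardcore predicate supplied from elsewhere:

```python
def expand_by_one_bit(x: bytes, f, h) -> tuple[bytes, int]:
    # Single-bit expansion: n bits in, n + 1 bits out.
    # f: assumed one-way permutation on {0,1}^n; h: hardcore predicate for f.
    return f(x), h(x)
```

Since f is a permutation, f(x) is uniform on a uniform seed; all the pseudo-randomness of the extra bit comes from the hardness of h.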

12 Hardcore predicate with public information
Definition: let f: {0,1}* → {0,1}* be a function. We say that h: {0,1}* × {0,1}* → {0,1} is a hardcore predicate for f if
– h(x,r) is polynomial-time computable
– For any probabilistic polynomial-time adversary A that receives input y = f(x) and public randomness r and tries to compute h(x,r), for any polynomial p(n) and sufficiently large n
|Prob[A(y,r)=h(x,r)] − 1/2| < 1/p(n)
where the probability is over the choice of y and r and the random coins of A
Alternative view: think of the public randomness as modifying the one-way function: f′(x,r) = (f(x), r).

13 Example: a weak hardcore predicate
Let h(x,i) = x_i, i.e., h selects the i-th bit of x.
For any one-way function f, no polynomial-time algorithm A(y,i) can compute h(x,i) with success probability better than 1 − 1/2n.
Homework: let c: {0,1}* → {0,1}* be a good error-correcting code:
– |c(x)| is O(|x|)
– the distance between any two codewords c(x) and c(x′) is a constant fraction of |c(x)|
– it is possible to correct, in polynomial time, errors in a constant fraction of the positions of c(x)
Show that for h(x,i) = c(x)_i and any one-way function f, no polynomial-time algorithm A(y,i) can compute h(x,i) with success probability better than some constant.

14 Inner-product hardcore bit
The inner-product bit: choose r ∈_R {0,1}^n and let h(x,r) = r ∙ x = ∑ x_i r_i mod 2.
Theorem [Goldreich-Levin]: for any one-way function, the inner product is a hardcore predicate.
Proof structure:
– Take an algorithm A that guesses h(x,r) correctly with probability ½+ε over the r's, and output a list of candidates for x. This is the main step! It makes no use of the y info; note that there are many x's for which A returns a correct answer on ½+ε of the r's.
– Choose from the list the/an x such that f(x) = y.
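The predicate itself is trivial to compute; a sketch over byte strings (the parity of the popcount of x AND r):

```python
def inner_product_bit(x: bytes, r: bytes) -> int:
    # h(x, r) = <x, r> mod 2: parity of the number of 1s in x AND r.
    acc = 0
    for xb, rb in zip(x, r):
        acc ^= xb & rb        # XOR of bytes preserves popcount parity
    return bin(acc).count("1") & 1
```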

15 Why a list? Cannot have a unique answer!
Suppose A has two candidates x and x′
– On query r it returns at `random' either r ∙ x or r ∙ x′
Then Prob[A(y,r) = r ∙ x] = ½ + ½·Prob[r ∙ x = r ∙ x′] = ¾, since r ∙ x = r ∙ x′ with probability ½ for x ≠ x′.

16 Warm-up (1): A returns a correct answer on a 1 − 1/2n fraction of the r's
– Choose r_1, r_2, … r_n ∈_R {0,1}^n
– Run A(y,r_1), A(y,r_2), … A(y,r_n); denote the responses z_1, z_2, … z_n
– If r_1, r_2, … r_n are linearly independent, there is a unique x satisfying r_i ∙ x = z_i for all 1 ≤ i ≤ n
– Prob[z_i = A(y,r_i) = r_i ∙ x] ≥ 1 − 1/2n, so the probability that all the z_i's are correct is at least ½
Do we need complete independence of the r_i's? `One-wise' independence is sufficient:
– Choose r ∈_R {0,1}^n and set r_i = r + e_i, where e_i = 0^{i−1} 1 0^{n−i}
– All the r_i's are linearly independent, and each one is uniform in {0,1}^n
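A sketch of this warm-up under the stated assumption, with A a hypothetical oracle that answers correctly on a 1 − 1/2n fraction of queries; it uses fully random r_i's and solves the linear system over GF(2) by Gaussian elimination, reporting failure if the r_i's happen to be dependent:

```python
import random

def solve_gf2(rows, z):
    # Gaussian elimination over GF(2): solve R x = z for invertible R.
    # rows: n bit-vectors (lists of 0/1); z: list of 0/1.
    n = len(rows)
    M = [row[:] + [zi] for row, zi in zip(rows, z)]   # augmented matrix
    for col in range(n):
        piv = next((i for i in range(col, n) if M[i][col]), None)
        if piv is None:
            return None                 # the r_i's were linearly dependent
        M[col], M[piv] = M[piv], M[col]
        for i in range(n):
            if i != col and M[i][col]:
                M[i] = [a ^ b for a, b in zip(M[i], M[col])]
    return [M[i][n] for i in range(n)]

def warmup1(A, y, n):
    # With probability >= 1/2 every z_i equals r_i . x, and then x is
    # the unique solution of the linear system; re-run on failure.
    rs = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
    zs = [A(y, r) for r in rs]
    return solve_gf2(rs, zs)
```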

17 Warm-up (2): A returns a correct answer on a 3/4+ε fraction of the r's
Can amplify the probability of success! Given any r ∈ {0,1}^n, procedure A′(y,r):
– Repeat for j = 1, 2, …: choose r′ ∈_R {0,1}^n, run A(y,r+r′) and A(y,r′), and let z_j be the sum of the responses
– Output the majority of the z_j's
Analysis:
– Pr[z_j = r ∙ x] ≥ Pr[A(y,r′) = r′ ∙ x ∧ A(y,r+r′) = (r+r′) ∙ x] ≥ ½+2ε
– This does not work for a ½+ε success rate, since success on r′ and r+r′ is not independent
– Each of the events `z_j = r ∙ x' is independent of the others, so by taking sufficiently many j's we can amplify to a value as close to 1 as we wish; roughly 1/ε² samples are needed
Idea for improvement: fix a few of the r′'s.
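A sketch of the self-correction procedure; A, y, r and ε are assumed inputs, and bit vectors are lists of 0/1:

```python
import random

def amplified_guess(A, y, r, eps):
    # Self-correction for a (3/4 + eps)-correct oracle A:
    # z_j = A(y, r') XOR A(y, r + r') equals r.x w.p. >= 1/2 + 2*eps,
    # and distinct j's are independent, so a majority vote amplifies.
    n = len(r)
    t = max(1, round(4 / eps ** 2))     # on the order of 1/eps^2 samples
    votes = 0
    for _ in range(t):
        rp = [random.randrange(2) for _ in range(n)]
        votes += A(y, rp) ^ A(y, [a ^ b for a, b in zip(r, rp)])
    return int(votes > t / 2)
```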

18 The real thing
– Choose r_1, r_2, … r_k ∈_R {0,1}^n
– Guess the values z_j = r_j ∙ x for j = 1, 2, … k: go over all 2^k possibilities
– For each nonempty subset S ⊆ {1,…,k}, let r_S = ∑_{j∈S} r_j; the implied guess is z_S = ∑_{j∈S} z_j
– For each position x_i: for each S ⊆ {1,…,k} run A(y, e_i − r_S), and output the majority value of {z_S + A(y, e_i − r_S)}
Analysis:
– Each of the vectors e_i − r_S is uniformly distributed, so A(y, e_i − r_S) is correct with probability at least ½+ε
– Claim: for every pair of nonempty subsets S ≠ T ⊆ {1,…,k}, the vectors r_S and r_T are pairwise independent
– Therefore the variance is as in completely independent trials: if I is the number of correct A(y, e_i − r_S), then VAR(I) ≤ 2^k(½+ε)
– Use Chebyshev's inequality: Pr[|I − E(I)| ≥ λ√VAR(I)] ≤ 1/λ²
– Need 2^k = n/ε² to bring the probability of error down to 1/n, so the process is good simultaneously for all positions x_i, i ∈ {1,…,n}
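A sketch of the whole list-decoding procedure; A is again a hypothetical ½+ε oracle, and the returned list should then be filtered by checking f(x) = y:

```python
import itertools
import random

def candidate_list(A, y, n, k):
    # Goldreich-Levin list decoding: for each of the 2^k guesses of
    # z_1..z_k, reconstruct a candidate x bit by bit via a majority
    # vote over the 2^k - 1 subset queries A(y, e_i + r_S).
    rs = [[random.randrange(2) for _ in range(n)] for _ in range(k)]
    subsets = [S for m in range(1, k + 1)
               for S in itertools.combinations(range(k), m)]
    candidates = []
    for zs in itertools.product([0, 1], repeat=k):
        x = []
        for i in range(n):
            votes = 0
            for S in subsets:
                r_S, z_S = [0] * n, 0
                for j in S:
                    r_S = [a ^ b for a, b in zip(r_S, rs[j])]
                    z_S ^= zs[j]
                q = r_S[:]
                q[i] ^= 1                 # e_i - r_S = e_i + r_S over GF(2)
                votes += z_S ^ A(y, q)
            x.append(int(votes > len(subsets) / 2))
        candidates.append(x)
    return candidates                     # keep the x's with f(x) = y
```

The query count is exactly the 2^k ∙ n ∙ (2^k − 1) of the next slide.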

19 Analysis
Number of invocations of A:
2^k (guesses) ∙ n (positions) ∙ (2^k − 1) (subsets) = poly(n, 1/ε) ≈ n³/ε⁴
Size of the resulting list of candidates for x: for each guess of z_1, z_2, … z_k there is a unique x, so
2^k = poly(n, 1/ε) ≈ n/ε² candidates

20 Reducing the size of the list of candidates
Idea: bootstrap. Given any r ∈ {0,1}^n, procedure A′(y,r):
– Choose r_1, r_2, … r_k ∈_R {0,1}^n
– Guess the values z_j = r_j ∙ x for j = 1, 2, … k: go over all 2^k possibilities
– For each nonempty subset S ⊆ {1,…,k}, let r_S = ∑_{j∈S} r_j; the implied guess is z_S = ∑_{j∈S} z_j
– For each S ⊆ {1,…,k} run A(y, r−r_S), and output the majority value of {z_S + A(y, r−r_S)}
For 2^k = 1/ε² the probability of error is, say, 1/8.
– Fix the same r_1, r_2, … r_k for subsequent executions; they are good for 7/8 of the r's
– Run warm-up (2)
Size of the resulting list of candidates for x is ≈ 1/ε²

21 Application: Diffie-Hellman
The Diffie-Hellman assumption: let G be a group and g an element of G. Given g, a = g^x and b = g^y for random x and y, it is hard to find c = g^{xy}; i.e., the probability that a poly-time machine outputs g^{xy} is negligible.
– To be accurate: the assumption is stated for a sequence of groups
– We don't know how to verify, given c′, whether it is equal to g^{xy}
Homework: show that under the DH assumption, given a = g^x, b = g^y and r ∈ {0,1}^n, no polynomial-time machine can guess r ∙ g^{xy} with advantage 1/poly, for random x, y and r.
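A toy sketch of the exchange itself; the parameters are illustrative and NOT secure (2^127 − 1 is a Mersenne prime and g = 3 is an arbitrary base chosen for the example):

```python
import secrets

# Toy Diffie-Hellman exchange over Z_p^* (illustration only).
p = 2 ** 127 - 1        # a Mersenne prime; real deployments use
g = 3                   # standardized groups and vetted generators

x = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
y = secrets.randbelow(p - 2) + 1     # Bob's secret exponent
a = pow(g, x, p)                     # Alice publishes a = g^x
b = pow(g, y, p)                     # Bob publishes b = g^y

assert pow(b, x, p) == pow(a, y, p)  # both sides derive the shared g^{xy}
```

The homework above says that even though g^{xy} itself may be hard to verify, its inner product with a public random r is already unpredictable under the DH assumption.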

