
1 The Bright Side of Hardness Relating Computational Complexity and Cryptography Oded Goldreich Weizmann Institute of Science

2 Background: The P versus NP Question Solving problems is harder than checking solutions; proving theorems is harder than verifying proofs. On one hand, useful problems are infeasible to solve; on the other, systems that are easy to use but hard to abuse ⇒ CRYPTOGRAPHY.

3 One-Way Functions The good/bad news depends on typical (average-case) hardness. This leads to the definition of one-way functions (OWF): creating images for which it is hard to find a preimage. DEF: f : {0,1}* → {0,1}* is a one-way function if (1) f is easy to evaluate; (2) f is hard to invert in an average-case sense; that is, for every efficient algorithm A, Pr_x[A(f(x)) ∈ f⁻¹(f(x))] is negligible. E.g., MULT: 6783 × 8471 = 57458793, whereas ???? × ???? = 45812601 is hard to solve.
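The MULT example can be made concrete with a small Python sketch (function names are illustrative, not from the talk): evaluating the product is one fast operation, while the naive inverter must search the space of four-digit factors, a cost that grows exponentially with the number of digits.

```python
def mult(a: int, b: int) -> int:
    """The easy direction: evaluating f(a, b) = a * b."""
    return a * b

def invert_by_search(y: int, digits: int = 4):
    """The hard direction: find a factorization of y into two `digits`-digit
    numbers by brute force -- time exponential in the input length."""
    lo, hi = 10 ** (digits - 1), 10 ** digits
    for a in range(lo, hi):
        if y % a == 0 and lo <= y // a < hi:
            return a, y // a
    return None
```

The search is only feasible here because the numbers are tiny; for cryptographically sized inputs, no efficient inverter is known.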

4 Applications to Cryptography Using any OWF (e.g., assuming intractability of factoring): Private-Key Encryption and Message Authentication ; Signature Schemes Commitment Schemes and more… Using any Trapdoor Permutation (e.g., assuming intractability of factoring): Public-Key Encryption; General Secure Multi-Party Computation and more…

5 Amplifying Hardness Is some “part of the preimage” of a OWF extremely hard to predict from the image? DEF: b : {0,1}* → {0,1} is a hardcore of f if (1) b is easy to evaluate; (2) b(x) is hard to predict from f(x) in an average-case sense; i.e., for every efficient algorithm A, Pr_x[A(f(x)) = b(x)] − 1/2 is negligible. (Picture: x → f(x) is easy; f(x) → b(x) is HARD.) Suppose that f is 1-1. Then, if f has a hardcore, f is hard to invert. Recall: by definition it is hard to retrieve the entire preimage.

6 The Existence of a Hardcore THM: For every OWF f, the predicate b(x,r) = Σ_{i∈[n]} x_i r_i mod 2 is a hardcore of f’(x,r) = (f(x),r). Warm-up: show that b is moderately hard to predict, in the sense that for every efficient algorithm A, Pr[A(f(x),r) = b(x,r)] < 0.76. Otherwise, obtain each (i.e., the i-th) bit of x as follows. On input f(x), repeat: select a random r ∈ {0,1}^n and guess x_i as A(f(x), r+e^(i)) + A(f(x), r). With probability 0.76 each invocation answers correctly, in which case the guess equals (b(x,r)+x_i) + b(x,r) = b(x,e^(i)) = x_i. COR (to the proof): If y → H(y) ∈ {0,1}^m is hard to effect, then (y,S) → Σ_{i∈S} H(y)_i mod 2 is hard to predict (better than 50:50).
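The predicate itself is trivial to evaluate, and the identity exploited by the self-correction argument, b(x, r+e^(i)) + b(x, r) = x_i, is easy to check in code. A minimal Python sketch (names are illustrative):

```python
def b(x: list, r: list) -> int:
    """Hardcore predicate b(x, r) = sum_i x_i * r_i mod 2 (inner product mod 2)."""
    return sum(xi & ri for xi, ri in zip(x, r)) & 1

def flip(r: list, i: int) -> list:
    """r + e^(i): flip the i-th coordinate of r (mod-2 addition of a unit vector)."""
    return [ri ^ (1 if j == i else 0) for j, ri in enumerate(r)]
```

For every x, r and i we have b(x, flip(r, i)) ^ b(x, r) == x[i], which is exactly why two correct predictions by A reveal the i-th bit of x.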

7 1st Application: Pseudorandom Generators Deterministic programs (algorithms) that stretch short random seeds into long(er) pseudorandom sequences. (Mental experiment: can an observer distinguish the output G(seed) from a totally random string?) THM: ∃ PRG if and only if ∃ OWF.

8 Pseudorandom Generators (formal) DEF: G is a pseudorandom generator (PRG) if (1) it is efficiently computable: s → G(s) is easy; (2) it stretches: |G(s)| > |s| (or better, |G(s)| ≫ |s|); (3) its output is computationally indistinguishable from random; that is, for every efficient algorithm D, |Pr_s[D(G(s))=1] − Pr_u[D(u)=1]| is negligible, where u is uniformly distributed over strings of length |G(s)|. THM: ∃ PRG if and only if ∃ OWF.

9 PRG iff OWF (“Hardness vs Randomness”) THM: ∃ PRG if and only if ∃ OWF. PRG implies OWF: Let G be a PRG (with doubling stretch). Define f(x)=G(x). Note: inverting f yields distinguishing G’s output from random, since random 2|x|-bit strings are unlikely to have a preimage under f. OWF implies PRG (special case): Let f be a 1-1 OWF with hardcore b. Then G(s)=f(s)b(s) is a PRG (with minimal stretch, of a single bit). Recall (THM): PRGs with minimal stretch (of a single bit) imply PRGs with maximal stretch (i.e., allowing an efficient map 1^|s| → 1^|G(s)|).
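The stretch-amplification step can be sketched by iterating a single-bit-stretch generator: output the hardcore bit, feed the rest back in as the next state. In the toy Python below, SHA-256 serves as a stand-in for (f(s), b(s)); this is an assumption made purely for runnability, since SHA-256 is not a proven OWF, let alone a 1-1 one.

```python
import hashlib

def next_state(s: bytes):
    """Toy stand-in for G(s) = (f(s), b(s)): maps a state to a new state
    plus one output bit.  SHA-256 here is illustrative only."""
    h = hashlib.sha256(s).digest()
    return h[:16], h[16] & 1  # (new 128-bit state, one output bit)

def stretch(seed: bytes, t: int) -> list:
    """Amplify a single-bit-stretch generator to t output bits:
    keep the 'f(s)' part as the next state, emit the 'b(s)' bit."""
    s, out = seed, []
    for _ in range(t):
        s, bit = next_state(s)
        out.append(bit)
    return out
```

The same seed always yields the same output sequence, as a deterministic generator must.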

10 Pseudorandom Functions (PRF) THM: PRGs imply PRFs. DEF: {f_s : {0,1}^|s| → {0,1}^|s|} is a pseudorandom function (PRF) if (1) it is easy to evaluate: (s,x) → f_s(x) is easy; (2) it passes the “Turing Test (of randomness)”: no efficient distinguisher, asking queries q_1,…,q_t and receiving answers a_1,…,a_t, can tell whether it is querying f_s (for a random s) or a totally random function. (See the historical notes re Turing on the last slide.)

11 Cryptography: private-key encryption (based on PRF) Both parties share the same key k. Encryption: E_k(msg) = (r, f_k(r) ⊕ msg), where r ∈_R {0,1}^n is fresh randomness; the ciphertext is the pair (r, y). Decryption: D_k(r, y) = y ⊕ f_k(r).
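A hedged Python sketch of this scheme, with HMAC-SHA256 standing in for the PRF f_k (an assumption made for runnability; the slide's f_k is the theoretical PRG-based construction):

```python
import hashlib
import hmac
import os

def prf(key: bytes, x: bytes) -> bytes:
    """Stand-in for f_k: HMAC-SHA256, used here only for illustration."""
    return hmac.new(key, x, hashlib.sha256).digest()

def encrypt(key: bytes, msg: bytes):
    """E_k(msg) = (r, f_k(r) XOR msg) for fresh random r (msg up to 32 bytes)."""
    r = os.urandom(16)
    pad = prf(key, r)
    return r, bytes(p ^ m for p, m in zip(pad, msg))

def decrypt(key: bytes, r: bytes, ct: bytes) -> bytes:
    """D_k(r, y) = y XOR f_k(r)."""
    pad = prf(key, r)
    return bytes(p ^ c for p, c in zip(pad, ct))
```

Note the role of the fresh r: reusing it would reuse the pad f_k(r), which is why each encryption draws new randomness.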

12 Cryptography: message authentication (based on PRF) Both parties share the same key k; the sender transmits msg together with a tag. Signing: S_k(msg) = f_k(msg) = tag. Verification: V_k(msg, tag) = 1 iff tag = f_k(msg).
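The same stand-in PRF yields a sketch of the authentication scheme: the tag is f_k(msg) and verification simply recomputes it (HMAC-SHA256 is again only an illustrative substitute for the theoretical PRF):

```python
import hashlib
import hmac

def tag(key: bytes, msg: bytes) -> bytes:
    """S_k(msg) = f_k(msg), with HMAC-SHA256 standing in for f_k."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, t: bytes) -> bool:
    """V_k(msg, tag) = 1 iff tag = f_k(msg); constant-time comparison."""
    return hmac.compare_digest(t, tag(key, msg))
```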

13 More Cryptography: Sign and Commit Signature scheme ≈ message authentication, except that it allows universal verification (by parties not holding the signing key). THM: OWFs imply signature schemes. Commitment scheme = commit phase + reveal phase such that: Hiding — the commit phase hides the value being committed; Binding — the commit phase determines a single value that can later be revealed. THM: PRGs imply commitment schemes.
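A toy illustration of the commit/reveal interface (a salted-hash sketch chosen for brevity; the theorem on this slide refers to a different, PRG-based construction):

```python
import hashlib
import os

def commit(value: bytes):
    """Commit phase: publish c = SHA256(salt || value); keep (salt, value) secret.
    The salt provides hiding; collision resistance provides binding."""
    salt = os.urandom(16)
    c = hashlib.sha256(salt + value).digest()
    return c, (salt, value)

def reveal(c: bytes, opening) -> bool:
    """Reveal phase: check the claimed opening against the published commitment."""
    salt, value = opening
    return hashlib.sha256(salt + value).digest() == c
```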

14 A generic cryptographic task: forcing parties to follow prescribed instructions A Party holds a private input x; the public information is y. Prescribed instruction: for a predetermined f, send z = f(x,y). Does there exist x such that z = f(x,y)? The Party can prove the correctness of z by revealing x, but this would violate the privacy of x. Instead, the Party proves in “zero-knowledge” that such an x exists (without revealing anything else). THM: Commitment schemes imply ZK proofs as needed.

15 Zero-Knowledge proof systems E.g., for Graph 3-Colorability (which is NP-complete): prove that a graph G=(V,E) is 3-colorable without revealing anything else (beyond what follows easily from this fact). THM: OWFs imply ZK proofs for 3-Colorability. The protocol repeats the following steps sufficiently many (i.e., |E|²) times: 1. The Prover commits to a random relabeling of a (fixed) 3-coloring (i.e., commits to the color of each vertex separately). 2. The Verifier requests to reveal the colors of the endpoints of a random edge. 3. The Prover reveals the corresponding colors. 4. The Verifier checks that the colors are legal and different.
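One round of the protocol above can be simulated directly; the commitments below are a toy salted-hash stand-in for the scheme the slide assumes:

```python
import hashlib
import os
import random

def commit(color: int):
    """Toy commitment to one color: c = SHA256(salt || color)."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + bytes([color])).digest(), salt

def zk_round(edges, coloring) -> bool:
    """One round: commit to a relabeled coloring, then open one random edge."""
    # 1. Prover: random relabeling of the 3 colors, then per-vertex commitments.
    perm = random.sample(range(3), 3)
    relabeled = {v: perm[c] for v, c in coloring.items()}
    comms = {v: commit(c) for v, c in relabeled.items()}
    # 2. Verifier: challenge a uniformly random edge.
    u, v = random.choice(edges)
    # 3 + 4. Prover opens the two endpoint commitments; Verifier checks that
    # the openings match the commitments and the revealed colors differ.
    for w in (u, v):
        digest, salt = comms[w]
        if hashlib.sha256(salt + bytes([relabeled[w]])).digest() != digest:
            return False
    return relabeled[u] != relabeled[v]
```

With an honest prover holding a valid 3-coloring, every round accepts; a prover without one is caught on some edge with probability at least 1/|E| per round, which the |E|² repetitions amplify.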

16 Universal Results: general secure multi-party computations Any desired multi-party functionality can be implemented securely. (This represents a variety of THMs in various models, some relying on computational assumptions, e.g., OWF.) Parties P_1, P_2, …, P_m hold local inputs x = (x_1, x_2, …, x_m) and wish to obtain the desired local outputs f_1(x), f_2(x), …, f_m(x), for predetermined f_i’s. That is, the effect of a trusted party, which receives x_1, …, x_m and hands f_i(x) back to P_i, can be securely emulated by the distrustful parties themselves.

17 The End Note: the slides of this talk are available at http://www.wisdom.weizmann.ac.il/~oded/T/ecm08.ppt. Material on the Foundations of Cryptography (e.g., surveys and a two-volume book) is available at http://www.wisdom.weizmann.ac.il/~oded/foc.html.

18 Historical notes relating PRFs to Turing 1. The term “Turing Test of Randomness” is analogous to the famous “Turing Test of Intelligence”, which refers to distinguishing a machine from a human via interaction. 2. Even more related is the following quote from Turing’s work (1950): “I have set up on a Manchester computer a small programme using only 1000 units of storage, whereby the machine supplied with one sixteen figure number replies with another within two seconds. I would defy anyone to learn from these replies sufficient about the programme to be able to predict any replies to untried values.” (Back to the PRF slide.)

