Foundations of Cryptography Lecture 2: One-way functions are essential for identification. Amplification: from weak to strong one-way function Lecturer: Moni Naor Weizmann Institute of Science

Recap Key idea of cryptography: use the intractability of some problems to create secure systems. Covered so far: the identification problem; Shannon and min entropies; one-way functions; a solution to the identification problem with multiple verifiers; cryptographic reductions – using one adversary to create another.

Rigorous Specification of Security To define the security of a system one must specify: 1. What constitutes a failure of the system 2. The power of the adversary – computational power – access to the system – what it means to break the system. Why is this more relevant in cryptography than in other realms?

Functions and inversions We say that a function f is hard to invert if given y = f(x) it is hard to find x' such that y = f(x') – x' need not be equal to x – we will use f^{-1}(y) to denote the set of preimages of y. To discuss hardness we must specify a computational model; we use two flavors: – concrete – asymptotic

Computational Models Asymptotic: Turing machines with a random tape (slide figure: a machine with an input tape and a random tape) – for classical models the precise model does not matter, up to polynomial factors. Both the algorithm for evaluating f and the adversary are modeled by probabilistic Turing machines (PTMs).

One-way functions – asymptotic A function f: {0,1}* → {0,1}* is called a one-way function if: f is a polynomial-time computable function – also a polynomial relationship between input and output length – and for every probabilistic polynomial-time algorithm A, every positive polynomial p(·), and all sufficiently large n, Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ 1/p(n), where x is chosen uniformly in {0,1}^n and the probability is also over the internal coin flips of A.
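To make the definition concrete, here is a minimal Python sketch of the inversion experiment; toy_f and trivial_adversary are hypothetical stand-ins for illustration only, not a real one-way function or a real attack.

```python
import secrets

def toy_f(x: bytes) -> bytes:
    """Hypothetical placeholder for a candidate one-way function (NOT secure)."""
    return bytes(b ^ 0x5A for b in x)   # trivially invertible, for illustration only

def inversion_experiment(f, adversary, n: int) -> bool:
    """One run of the security game: does the adversary find SOME preimage of f(x)?"""
    x = secrets.token_bytes(n)          # x chosen uniformly (n bytes standing in for n bits)
    y = f(x)
    x_prime = adversary(y, n)           # the adversary sees only y = f(x)
    return x_prime is not None and f(x_prime) == y    # success: x' is in f^{-1}(y)

def trivial_adversary(y: bytes, n: int):
    """Guesses a random preimage; for a real one-way function this succeeds only negligibly."""
    return secrets.token_bytes(n)

# Empirically estimate the adversary's success probability for a small parameter.
trials = 1000
wins = sum(inversion_experiment(toy_f, trivial_adversary, 4) for _ in range(trials))
print(f"estimated success probability: {wins / trials:.3f}")
```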

Computational Models Concrete: Boolean circuits (slide figure: an example circuit with labeled input and output wires) – the precise model makes a difference – time = circuit size.

One-way functions – concrete version A function f: {0,1}^n → {0,1}^n is called a (t,ε) one-way function if: f is a polynomial-time computable function (independent of t), and for every t-time algorithm A (e.g., a Boolean circuit of size t), Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ ε, where x is chosen uniformly in {0,1}^n and the probability is also over the internal coin flips of A. Can either think of t and ε as fixed or as functions t(n), ε(n).

The two guards problem (slide figure: Alice, Eve, Bob and Charlie.) Y: if Alice wants to approve and Eve does not interfere, Bob moves to state Y. N: if Alice does not approve, then for any behavior of Eve and Charlie, Bob stays in state N. Similarly if Bob and Charlie are switched.

Are one-way functions essential to the two guards password problem? Precise definition: for every probabilistic polynomial-time algorithm A controlling Eve and Charlie, every polynomial p(·), and all sufficiently large n, Prob[Bob moves to Y | Alice does not approve] ≤ 1/p(n). Recall the observation: what Bob and Charlie received in the setup phase might as well be public. Claim: we can get rid of interaction – given an interactive identification protocol it is possible to construct a noninteractive one. In the new protocol: Alice′ sends Bob′ the random bits Alice used to generate the setup information; Bob′ simulates the conversation between Alice and Bob in the original protocol and accepts only if the simulated Bob accepts. The probability of cheating is the same.

One-way functions are essential to the two guards password problem Are we done? Given a noninteractive identification protocol we want to define a one-way function. Define f(r): the setup-phase mapping between the random bits r of Alice and the information y given to Bob and Charlie. Problem: the function f(r) is not necessarily one-way… – There can be unlikely ways to generate it, which can be exploited to invert. – Example: Alice chooses x, x' ∈ {0,1}^n; if x' = 0^n set y = x, otherwise set y = f(x). – The protocol is still secure, but with probability 1/2^n it is not complete. – The resulting function f(x, x') is easy to invert: given y ∈ {0,1}^n, set the inverse to (y, 0^n).
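A minimal Python sketch of this counterexample; setup_f is a hypothetical stand-in for the honest setup mapping. The padded function is trivially invertible even if setup_f itself is hard to invert.

```python
import hashlib, secrets

N = 16  # stands in for the security parameter n (in bytes)

def setup_f(x: bytes) -> bytes:
    """Hypothetical honest setup mapping from Alice's randomness to Bob/Charlie's info."""
    return hashlib.sha256(x).digest()[:N]      # placeholder, assumed hard to invert

def padded_f(x: bytes, x_prime: bytes) -> bytes:
    """The contrived setup: with probability 2^{-n}, output x in the clear."""
    return x if x_prime == bytes(N) else setup_f(x)

def invert_padded_f(y: bytes):
    """(y, 0^n) is always a valid preimage, since padded_f(y, 0^n) = y."""
    return y, bytes(N)

y = padded_f(secrets.token_bytes(N), secrets.token_bytes(N))
x1, x2 = invert_padded_f(y)
assert padded_f(x1, x2) == y                   # inversion always succeeds
```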

One-way functions are essential to the two guards password problem… However: it is possible to estimate the probability that Bob accepts on a given string from Alice – it should be close to 1 for most r. Second attempt: define the function f(r) as – the mapping that Alice applies in the setup phase between her random bits r and the information given to Bob and Charlie, – plus a bit indicating whether the probability that Bob accepts given r is greater than 2/3 (an arbitrary threshold). Theorem: the two guards password problem has a solution if and only if one-way functions exist.

Examples of One-way functions Examples of hard problems: subset sum, discrete log, factoring (numbers, polynomials) into prime components. How do we get a one-way function out of them? The forward direction should be an easy problem.

Subset Sum Subset sum problem: given – n numbers 0 ≤ a_1, a_2, …, a_n ≤ 2^m – a target sum T – find a subset S ⊆ {1,…,n} with ∑_{i∈S} a_i = T. (n,m)-subset sum assumption: for uniformly chosen a_1, a_2, …, a_n ∈_R {0,…,2^m − 1} and S ⊆ {1,…,n}, for any probabilistic polynomial-time algorithm, the probability of finding S' ⊆ {1,…,n} such that ∑_{i∈S} a_i = ∑_{i∈S'} a_i is negligible, where the probability is over the random choice of the a_i's, of S, and the inner coin flips of the algorithm. – Not true for very small or very large m; the most difficult case is m = n. Subset sum one-way function f: {0,1}^{mn+n} → {0,1}^{mn+m}: f(a_1, a_2, …, a_n, b_1, b_2, …, b_n) = (a_1, a_2, …, a_n, ∑_{i=1}^{n} b_i a_i mod 2^m).
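A minimal Python sketch of this candidate; the parameters below are for illustration only (the assumption needs m close to n and much larger values to be meaningful).

```python
import secrets

def subset_sum_owf(n: int, m: int):
    """Candidate OWF: f(a_1..a_n, b_1..b_n) = (a_1..a_n, sum_{i: b_i=1} a_i mod 2^m)."""
    a = [secrets.randbelow(2 ** m) for _ in range(n)]   # random weights a_i in {0,...,2^m - 1}
    b = [secrets.randbelow(2) for _ in range(n)]        # secret subset indicator bits b_i
    t = sum(ai for ai, bi in zip(a, b) if bi) % (2 ** m)
    return (a, t), b     # public image (a_1..a_n, T); b is the hard-to-recover part

# Inverting means: given (a, T), find ANY subset of the a_i summing to T mod 2^m.
(a, t), secret_b = subset_sum_owf(n=32, m=32)
print("target T =", t)
```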

Exercise Show a function f such that – if f is polynomial time invertible on all inputs, then P=NP – f is not one-way

Discrete Log Problem Let G be a group and g an element of G. Let y = g^z, and let x be the minimal non-negative integer satisfying y = g^x; x is called the discrete log of y to base g. Example: y = g^x mod p in the multiplicative group of Z_p. In general: it is easy to exponentiate via repeated squaring – consider the binary representation of the exponent. What about the discrete log? – If it is difficult, f(g, x) = (g, g^x) is a one-way function (it is polynomial-time computable precisely because of repeated squaring).
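A short Python sketch of the easy direction (modular exponentiation by repeated squaring) and of the candidate f(g, x) = (g, g^x mod p); the modulus and generator below are illustrative choices only.

```python
def square_and_multiply(g: int, x: int, p: int) -> int:
    """Compute g^x mod p by scanning the binary representation of x (repeated squaring)."""
    result, base = 1, g % p
    while x > 0:
        if x & 1:                     # current bit of x is 1: multiply the result in
            result = (result * base) % p
        base = (base * base) % p      # square for the next bit
        x >>= 1
    return result

# Candidate one-way function: f(g, x) = (g, g^x mod p) for a fixed public prime p.
p = 2 ** 127 - 1                      # a Mersenne prime, used here only as an example modulus
g, x = 3, 123456789
y = square_and_multiply(g, x, p)
assert y == pow(g, x, p)              # matches Python's built-in modular exponentiation
print("f(g, x) =", (g, y))
```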

Integer Factoring Consider f(x, y) = x·y. Easy to compute. Is it one-way? – No: if f(x, y) is even, one can set the inverse to (f(x, y)/2, 2). If factoring a number into prime factors is hard – specifically, given N = P·Q, the product of two random large (n-bit) primes, it is hard to factor – then f is somewhat hard: there is a non-negligible fraction of such numbers, about 1/n^2 by the density of primes. Hence f is a weak one-way function. Alternatively: – let g(r) be a function mapping random bits into random primes; – the function f(r_1, r_2) = g(r_1)·g(r_2) is one-way.
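A minimal sketch of the multiplication candidate together with the trivial inverter for even products; since random n-bit integers are rarely both prime, inversion is usually easy, which is why this is at best a weak one-way function.

```python
import secrets

def mult_owf(x: int, y: int) -> int:
    """Candidate (weak) one-way function: plain integer multiplication."""
    return x * y

def easy_partial_inverter(z: int):
    """Succeeds whenever z is even: (z // 2, 2) is then a valid preimage."""
    if z % 2 == 0 and z >= 2:
        return z // 2, 2
    return None          # gives up on the (roughly 1/4 of) inputs where both x and y are odd

n = 64
x = secrets.randbits(n) | 1 << (n - 1)    # random n-bit integers
y = secrets.randbits(n) | 1 << (n - 1)
z = mult_owf(x, y)
pre = easy_partial_inverter(z)
if pre is not None:
    assert mult_owf(*pre) == z
print("inverted trivially:", pre is not None)
```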

Weak One-way function A function f: {0,1}^n → {0,1}^n is called a weak one-way function if: f is a polynomial-time computable function, and there exists a polynomial p(·) such that for every probabilistic polynomial-time algorithm A and all sufficiently large n, Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ 1 − 1/p(n), where x is chosen uniformly in {0,1}^n and the probability is also over the internal coin flips of A.

Exercise: weak exists if strong exists Show that if strong one-way functions exist, then there exists a function which is a weak one-way function but not a strong one.

What about the other direction? Given a function f that is guaranteed to be weakly one-way – let p(n) be such that Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ 1 − 1/p(n) – can we construct a function g that is (strongly) one-way? This is an instance of a hardness amplification problem. Simple idea: repetition. For some polynomial q(n) define g(x_1, x_2, …, x_{q(n)}) = f(x_1), f(x_2), …, f(x_{q(n)}). To invert g one needs to succeed in inverting f in all q(n) places. – If q(n) = p^2(n) this seems unlikely: (1 − 1/p(n))^{p^2(n)} ≈ e^{−p(n)}. – But how do we show it? The sequential-repetition intuition is not a proof.
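A minimal Python sketch of the repetition (direct product) construction, with a hypothetical weak_f standing in for the weak one-way function:

```python
import hashlib, secrets

def weak_f(x: bytes) -> bytes:
    """Hypothetical stand-in for a weak one-way function on fixed-length inputs."""
    return hashlib.sha256(x).digest()

def g(xs):
    """Repetition construction: g(x_1, ..., x_q) = (f(x_1), ..., f(x_q)).
    An inverter for g must invert f independently at all q positions."""
    return [weak_f(x) for x in xs]

q, n_bytes = 8, 16                               # q(n) repetitions of n-bit inputs
xs = [secrets.token_bytes(n_bytes) for _ in range(q)]
ys = g(xs)
print(len(ys), "blocks; inverting g requires a preimage for every block")
```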

Want: inverting g with low probability implies inverting f with high probability. Given a machine B that inverts g, we want a machine B′ that – operates in similar time bounds – inverts f with high probability. Idea: given y = f(x), plug it into some place in g and generate the rest of the locations at random: z = (y, f(x_2), …, f(x_{q(n)})). Ask machine B to invert g at the point z. The probability of success should be at least (in fact exactly) B's probability of inverting g at a random point. Once is not enough – how to amplify? – Repeat while keeping y fixed. – Put y at a random position (or sort the inputs to g).

Proof of Amplification for Repetition of Two Concentrate on a two-fold repetition: g(x_1, x_2) = f(x_1), f(x_2). Goal: show that the probability of inverting g is roughly the square of the probability of inverting f, just as it would be for sequential repetition. Claim: Let α(n) be a function that for some p(n) satisfies 1/p(n) ≤ α(n) ≤ 1 − 1/p(n), and let ε(n) be any inverse-polynomial function. Suppose that for every polynomial-time A and sufficiently large n, Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ α(n). Then for every polynomial-time B and sufficiently large n, Prob[B(g(x_1, x_2)) ∈ g^{-1}(g(x_1, x_2))] ≤ α^2(n) + ε(n).

Proof of Amplification for Two-fold Repetition Given an algorithm B that inverts g with probability better than α^2 + ε, construct the following inversion algorithm B′(y) for f: – Repeat t times (the inner loop): choose x' at random and compute y' = f(x'); run B(y, y') and check the result; if correct, halt with success. – Otherwise output failure. (Being able to check the result is what makes the algorithm constructive.)
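A minimal Python sketch of the reduction B′; the function f and the g-inverter invert_g are passed in as parameters and are assumptions, not real attacks.

```python
import secrets

def b_prime(y: bytes, f, invert_g, n_bytes: int, t: int):
    """Reduction B'(y): invert f at y using an inverter B for g(x1, x2) = (f(x1), f(x2)).
    Repeats t times, planting y next to a freshly generated second coordinate."""
    for _ in range(t):
        x2 = secrets.token_bytes(n_bytes)      # generate the other location at random
        y2 = f(x2)
        result = invert_g(y, y2)               # ask B to invert g at the point (y, y2)
        if result is not None:
            x1_candidate, _ = result
            if f(x1_candidate) == y:           # check the result before declaring success
                return x1_candidate
    return None                                # failure after t attempts
```

Taking t = n/β attempts, as in the analysis on the next slides, makes B′ succeed almost surely on every y in the good set S.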

Probability of Success Define S = {y = f(x) | Prob[inner loop successful | y] > β}. Since the choices of x' are independent, Prob[B′ succeeds | y ∈ S] > 1 − (1 − β)^t. Taking t = n/β means that when y ∈ S, B′ will almost surely invert it. Hence we want to show that Prob[y ∈ S] > α(n) – the probability is over the choice of x.
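Spelling out the standard estimate behind "almost surely" (a routine calculation in the same notation):

```latex
\Pr[\,B' \text{ succeeds} \mid y \in S\,] \;>\; 1-(1-\beta)^{t}
\;=\; 1-(1-\beta)^{n/\beta}
\;\ge\; 1-e^{-n},
\qquad \text{using } (1-\beta)^{1/\beta} \le e^{-1} \text{ for } 0<\beta<1.
```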

The success of B Fix the random bits of B. Define P = {(y_1, y_2) | B succeeds on (y_1, y_2)}. (Slide figure: the set P drawn inside the y_1 × y_2 square.) Then P = (P ⋂ {(y_1, y_2) | y_1, y_2 ∈ S}) ⋃ (P ⋂ {(y_1, y_2) | y_1 ∉ S}) ⋃ (P ⋂ {(y_1, y_2) | y_2 ∉ S}). The first term is the well-behaved part; we want to bound its probability by a square.

S is the only success... But Prob[B(y_1, y_2) ∈ g^{-1}(y_1, y_2) | y_1 ∉ S] ≤ β, and similarly Prob[B(y_1, y_2) ∈ g^{-1}(y_1, y_2) | y_2 ∉ S] ≤ β, so Prob[(y_1, y_2) ∈ P and y_1, y_2 ∈ S] ≥ Prob[(y_1, y_2) ∈ P] − 2β ≥ α^2 + ε − 2β. Setting β = ε/3 we have Prob[(y_1, y_2) ∈ P and y_1, y_2 ∈ S] ≥ α^2 + ε/3.

Contradiction But Prob[(y_1, y_2) ∈ P and y_1, y_2 ∈ S] ≤ Prob[y_1 ∈ S]·Prob[y_2 ∈ S] = Prob^2[y ∈ S]. So Prob[y ∈ S] ≥ √(α^2 + ε/3) > α.

Is there an ultimate one-way function? (aka a `universal' one-way function) Short of showing that P ≠ NP implies the existence of one-way functions, can we exhibit a specific function f so that if some one-way function exists, then f is one-way? If f_1: {0,1}* → {0,1}* and f_2: {0,1}* → {0,1}* are guaranteed to – be polynomial-time computable – have at least one of them one-way, then we can construct a function g: {0,1}* → {0,1}* which is one-way: g(x_1, x_2) = (f_1(x_1), f_2(x_2)). This is a robust combiner; it generalizes to a 1-out-of-m combiner.
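A minimal Python sketch of the two-candidate combiner; the two candidates here are hash-based placeholders only to make the snippet runnable.

```python
import hashlib

def f1(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()        # placeholder candidate 1

def f2(x: bytes) -> bytes:
    return hashlib.blake2b(x).digest()       # placeholder candidate 2

def combiner(x1: bytes, x2: bytes):
    """Robust combiner: g(x1, x2) = (f1(x1), f2(x2)).
    Inverting g forces inverting BOTH candidates on independent inputs,
    so g is one-way as long as at least one candidate is."""
    return f1(x1), f2(x2)

print(combiner(b"independent input one", b"independent input two"))
```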

The Construction If a 5n^2-time one-way function is guaranteed to exist, we can construct an O(n^2 log n)-time one-way function g: – Idea: enumerate Turing machines and make sure each runs at most 5n^2 steps: g(x_1, x_2, …, x_{log n}) = M_1(x_1), M_2(x_2), …, M_{log n}(x_{log n}). – Eventually the enumeration reaches the TM of the one-way function. If a one-way function is guaranteed to exist, then there exists a 5n^2-time one-way function: – Idea: concentrate on a prefix of length n', where n = p(n'), and ignore the rest.

Ultimate one-way function conclusions The original proof is due to L. Levin. Be careful what you wish for. Problems with the resulting one-way function: – Cannot learn about its behavior on large inputs from small inputs. – The whole rationale of considering asymptotic results is eroded! The construction does not work for non-uniform one-way functions. The notion of a robust combiner seems fundamental – see “Robust Combiners for Oblivious Transfer and Other Primitives” by Harnik, Kilian, Naor, Reingold and Rosen, Eurocrypt 2005.

Distributionally One-Way Functions A function f: {0,1}* → {0,1}* is one-way if: – it is computable in poly-time – the probability of successfully finding an inverse in poly-time is negligible (on a random input). A function f: {0,1}* → {0,1}* is distributionally one-way if: – it is computable in poly-time – no poly-time algorithm can successfully find a random inverse (on a random input): the distribution produced by any inverting algorithm is far from uniform on the preimages. Theorem [Impagliazzo Luby 89]: distributionally one-way functions exist iff one-way functions exist. Example: the function from the two guards problem.