Foundations of Cryptography Lecture 9 Lecturer: Moni Naor

Recap of lecture 8
- Tree signature scheme
- Proof of security of tree signature schemes
- Encryption

The encryption problem: Alice wants to send a message m ∈ {0,1}^n to Bob
- A set-up phase, kept secret from Eve, is run beforehand
- They want to prevent Eve from learning anything about the message
[Figure: Alice sends m to Bob while Eve eavesdrops on the channel]

The encryption problem
- Relevant both in the shared-key and in the public-key setting
- Want to use the key many times
- Also want to add authentication...
- Eve may also disrupt the channel in other ways

What does `learn' mean?
- Whatever knowledge Eve has about m beforehand should remain the same:
  – probability of guessing m (the min-entropy of m)
  – probability of guessing whether m is m_0 or m_1
  – probability of computing some function f of m
- Ideally: the message sent is independent of the message m
  – implies all of the above
- Shannon: achievable only if the entropy of the shared secret is at least as large as the entropy of the message m
  – if Eve has no special knowledge about m, that means |m| bits
- Achievable: the one-time pad
  – let r ∈_R {0,1}^n
  – think of r and m as elements in a group
  – to encrypt m, send r+m
  – to decrypt z, compute m = z−r
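A minimal sketch of the one-time pad, taking the group to be ({0,1}^n, XOR) so that encryption and decryption coincide; the message here is illustrative.

import secrets

def keygen(n: int) -> bytes:
    # r chosen uniformly at random; must be as long as the message
    return secrets.token_bytes(n)

def encrypt(key: bytes, m: bytes) -> bytes:
    # "r + m" in the group ({0,1}^n, XOR)
    assert len(key) == len(m)
    return bytes(k ^ b for k, b in zip(key, m))

def decrypt(key: bytes, z: bytes) -> bytes:
    # XOR is its own inverse, so m = z - r is again an XOR
    return encrypt(key, z)

m = b"attack at dawn"
r = keygen(len(m))
assert decrypt(r, encrypt(r, m)) == m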

Pseudo-random generators
- Would like to stretch a short secret (the seed) into a long one
- The resulting long string should be usable wherever a long random string is needed
  – in particular: as a one-time pad
- Important notion: indistinguishability
- Two probability distributions that cannot be distinguished
  – statistical indistinguishability: distance between probability distributions
  – new notion: computational indistinguishability

Pseudo-random generators
Definition: a function g:{0,1}* → {0,1}* is said to be a (cryptographic) pseudo-random generator if
- it is polynomial-time computable
- it stretches the input: |g(x)| > |x|
  – denote by ℓ(n) the length of the output on inputs of length n
- if the input is random, the output is indistinguishable from random:
  for any probabilistic polynomial-time adversary A that receives an input y of length ℓ(n) and tries to decide whether y = g(x) or y is a random string from {0,1}^ℓ(n), for any polynomial p(n) and sufficiently large n
  |Prob[A=`rand' | y=g(x)] − Prob[A=`rand' | y ∈_R {0,1}^ℓ(n)]| < 1/p(n)
Important issues:
- Why is the adversary bounded by polynomial time?
- Why is the indistinguishability not perfect?
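The definition is a game, and it can be instructive to run it. The sketch below estimates a distinguisher's advantage empirically against a deliberately bad "generator" that appends the parity of the seed; both g and A here are toy assumptions, chosen so that the bias is visible.

import random

n, ell = 16, 17  # seed length and output length (toy sizes)

def g(x):
    # bad "generator": appends the parity of the seed (insecure on purpose)
    return x + [sum(x) % 2]

def A(y):
    # distinguisher: says `rand' iff the last bit is NOT the parity of the rest
    return 'rand' if y[-1] != sum(y[:-1]) % 2 else 'pseudo'

trials = 10000
p_pseudo = sum(A(g([random.randint(0, 1) for _ in range(n)])) == 'rand'
               for _ in range(trials)) / trials
p_rand = sum(A([random.randint(0, 1) for _ in range(ell)]) == 'rand'
             for _ in range(trials)) / trials
print(abs(p_pseudo - p_rand))  # ~ 1/2, far above 1/p(n): this g is not a PRG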

Construction of pseudo-random generators
- Idea: given a one-way function, there is a hard decision problem hidden in it
- If that problem is balanced enough, its answer looks random
- Such a problem is a hardcore predicate
- Possibilities:
  – last bit
  – first bit
  – inner product

Homework
- Assume one-way functions exist. Show that the last bit/first bit are not necessarily hardcore predicates
- Generalization: show that for any fixed function h:{0,1}* → {0,1} there is a one-way function f:{0,1}* → {0,1}* such that h is not a hardcore predicate of f
- Show a one-way function f such that given y=f(x), each input bit of x can be guessed with probability at least 3/4

Hardcore predicate
Definition: let f:{0,1}* → {0,1}* be a function. We say that h:{0,1}* → {0,1} is a hardcore predicate for f if
- it is polynomial-time computable
- for any probabilistic polynomial-time adversary A that receives input y=f(x) and tries to compute h(x), for any polynomial p(n) and sufficiently large n
  |Prob[A(y)=h(x)] − 1/2| < 1/p(n)
  where the probability is over the choice of y and the random coins of A
Sources of hardcoreness:
- not enough information about x: not of interest for generating pseudo-randomness
- enough information about x, but it is hard to compute

Single bit expansion
- Let f:{0,1}^n → {0,1}^n be a one-way permutation
- Let h:{0,1}^n → {0,1} be a hardcore predicate for f
- Consider g:{0,1}^n → {0,1}^{n+1} where g(x) = (f(x), h(x))
Claim: g is a pseudo-random generator
Proof idea: a distinguisher between (f(x), h(x)) and (f(x), 1−h(x)) can be used to guess h(x)
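A sketch of the single-bit expansion, instantiated (as an assumption, not part of the slide) with modular exponentiation as the conjectured one-way permutation and its "upper half" bit as the predicate, in the style of Blum-Micali; the modulus 101 is a toy, far too small for any security.

P, GEN = 101, 2  # 2 generates Z_101^*, so f permutes {1, ..., 100}

def f(x: int) -> int:
    # conjectured one-way permutation (discrete log is believed hard at
    # proper parameter sizes; this toy size is trivially invertible)
    return pow(GEN, x, P)

def h(x: int) -> int:
    # the "is x in the upper half of the range" bit
    return 1 if x > (P - 1) // 2 else 0

def g(x: int) -> tuple:
    # one extra pseudo-random bit: |output| = |input| + 1
    return f(x), h(x)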

Hardcore predicate with public information
Definition: let f:{0,1}* → {0,1}* be a function. We say that h:{0,1}* x {0,1}* → {0,1} is a hardcore predicate for f if
- h(x,r) is polynomial-time computable
- for any probabilistic polynomial-time adversary A that receives input y=f(x) and public randomness r and tries to compute h(x,r), for any polynomial p(n) and sufficiently large n
  |Prob[A(y,r)=h(x,r)] − 1/2| < 1/p(n)
  where the probability is over the choice of y and r and the random coins of A
Alternative view: can think of the public randomness as modifying the one-way function f: f'(x,r) = (f(x), r).

Example: a weak hardcore predicate
- Let h(x,i) = x_i, i.e. h selects the i-th bit of x
- For any one-way function f, no polynomial-time algorithm A(y,i) can compute h(x,i) with probability of success better than 1 − 1/(2n)
Homework: let c:{0,1}* → {0,1}* be a good error-correcting code:
- |c(x)| is O(|x|)
- the distance between any two codewords c(x) and c(x') is a constant fraction of |c(x)|
- it is possible to correct errors in a constant fraction of the |c(x)| positions in polynomial time
Show that for h(x,i) = c(x)_i and any one-way function f, no polynomial-time algorithm A(y,i) can compute h(x,i) with probability of success better than some constant

Inner product hardcore bit
The inner product bit: choose r ∈_R {0,1}^n and let h(x,r) = r∙x = ∑ x_i r_i mod 2
Theorem [Goldreich-Levin]: for any one-way function, the inner product is a hardcore predicate
Proof structure:
- The main step: take an algorithm A that guesses h(x,r) correctly with probability ½+ε over the r's, and output a list of candidates for x
  – there may be many x's for which A returns a correct answer on ½+ε of the r's
  – this step makes no use of the y information
- Choose from the list the/an x such that f(x)=y
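In code the predicate itself is a one-liner; the convention of bit vectors as Python lists below is an assumption reused by the sketches after the following slides.

def ip(x: list, r: list) -> int:
    # h(x, r) = r . x = sum_i x_i r_i mod 2
    return sum(xi & ri for xi, ri in zip(x, r)) & 1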

Why a list? Cannot have a unique answer!
Suppose A has two candidates x and x'
- On query r it returns at `random' either r∙x or r∙x'
Then Prob[A(y,r) = r∙x] = ½ + ½∙Prob[r∙x = r∙x'] = ¾

Warm-up (1): A returns a correct answer on a 1 − 1/(2n) fraction of the r's
- Choose r_1, r_2, …, r_n ∈_R {0,1}^n
- Run A(y,r_1), A(y,r_2), …, A(y,r_n); denote the responses z_1, z_2, …, z_n
- If r_1, r_2, …, r_n are linearly independent, then there is a unique x satisfying r_i∙x = z_i for all 1 ≤ i ≤ n
- Prob[z_i = A(y,r_i) = r_i∙x] ≥ 1 − 1/(2n)
  – therefore the probability that all the z_i's are correct is at least ½
- Do we need complete independence of the r_i's? `One-wise' independence is sufficient:
  – can choose r ∈_R {0,1}^n and set r_i = r + e_i, where e_i = 0^{i−1} 1 0^{n−i}
  – all the r_i's are linearly independent
  – each one is uniform in {0,1}^n
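A sketch of the r_i = r + e_i variant: since (r+e_i)∙x = r∙x + x_i and the single unknown bit b = r∙x takes only two values, we can try both and check against f. Here A and f are assumed oracles over bit-vector lists.

import random

def recover_x(A, f, y, n):
    # Warm-up (1): A(y, r) errs on at most a 1/2n fraction of the r's
    r = [random.randint(0, 1) for _ in range(n)]
    z = []
    for i in range(n):
        ri = r.copy()
        ri[i] ^= 1                      # r_i = r + e_i, uniform on its own
        z.append(A(y, ri))              # should equal r.x + x_i
    for b in (0, 1):                    # the one remaining unknown: b = r.x
        x = [zi ^ b for zi in z]
        if f(x) == y:
            return x
    return None                         # some query was wrong; retry with fresh r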

Warm-up (2): A returns a correct answer on a 3/4+ε fraction of the r's
Can amplify the probability of success! Given any r ∈ {0,1}^n:
Procedure A'(y,r): repeat for j = 1, 2, …
- choose r' ∈_R {0,1}^n
- run A(y,r+r') and A(y,r'); denote the sum of the responses by z_j
Output the majority of the z_j's
Analysis:
- Pr[z_j = r∙x] ≥ Pr[A(y,r')=r'∙x ∧ A(y,r+r')=(r+r')∙x] ≥ ½+2ε (union bound: each call errs with probability at most ¼−ε)
  – this does not work for a ½+ε success rate, since success on r' and r+r' is not independent
- Each one of the events `z_j = r∙x' is independent of the others
- Therefore, by taking sufficiently many j's, we can amplify to a value as close to 1 as we wish
  – need roughly 1/ε² samples
Idea for improvement: fix a few of the r' vectors and reuse them
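A sketch of the self-correction procedure A'; the sample count ~1/ε² follows the analysis above, and A is again an assumed oracle.

import random

def A_prime(A, y, r, eps):
    # majority vote over pairs (r + r', r'), assuming A is correct
    # on a 3/4 + eps fraction of its inputs
    t = int(4 / (eps * eps)) + 1
    votes = 0
    for _ in range(t):
        rp = [random.randint(0, 1) for _ in range(len(r))]
        r_xor_rp = [a ^ b for a, b in zip(r, rp)]
        votes += A(y, r_xor_rp) ^ A(y, rp)   # = r.x when both answers are right
    return 1 if 2 * votes > t else 0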

The real thing
- Choose r_1, r_2, …, r_k ∈_R {0,1}^n
- Guess for j = 1, 2, …, k the value z_j = r_j∙x
  – go over all 2^k possibilities
- For each nonempty subset S ⊆ {1,…,k}:
  – let r_S = ∑_{j∈S} r_j
  – the implied guess is z_S = ∑_{j∈S} z_j
- For each position x_i:
  – for each nonempty S ⊆ {1,…,k}, run A(y, e_i − r_S)
  – output the majority value of {z_S + A(y, e_i − r_S)}
Analysis:
- Each one of the vectors e_i − r_S is uniformly distributed
  – so A(y, e_i − r_S) is correct with probability at least ½+ε
- Claim: for every pair of nonempty subsets S ≠ T ⊆ {1,…,k}, the vectors r_S and r_T are pairwise independent
- Therefore the variance is as in completely independent trials
  – let I be the number of correct answers A(y, e_i − r_S); then VAR(I) ≤ 2^k(½+ε)
- Use Chebyshev's inequality: Pr[|I − E(I)| ≥ λ√VAR(I)] ≤ 1/λ²
- Need 2^k = n/ε² to get the probability of error down to 1/n
  – so the process is good simultaneously for all positions x_i, i ∈ {1,…,n}
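A direct transcription of the procedure into Python; over GF(2), e_i − r_S is just e_i XOR r_S. A and f are assumed oracles, and the parameters match the analysis on the next slide (so the running time blows up quickly; this is purely illustrative).

import math
import random
from itertools import product

def goldreich_levin(A, f, y, n, eps):
    # A(y, r) guesses r.x correctly on a 1/2 + eps fraction of the r's;
    # f is used only to filter the final candidate list
    k = max(1, math.ceil(math.log2(n / eps ** 2)))   # 2^k ~ n / eps^2
    rs = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
    subsets = [[j for j in range(k) if (m >> j) & 1] for m in range(1, 2 ** k)]
    candidates = []
    for guess in product((0, 1), repeat=k):          # guess z_j = r_j . x
        x = []
        for i in range(n):
            votes = 0
            for S in subsets:
                rS, zS = [0] * n, 0
                for j in S:                          # r_S = sum r_j, z_S = sum z_j
                    rS = [a ^ b for a, b in zip(rS, rs[j])]
                    zS ^= guess[j]
                rS[i] ^= 1                           # query point e_i XOR r_S
                votes += zS ^ A(y, rS)               # a vote for x_i
            x.append(1 if 2 * votes > len(subsets) else 0)
        if f(x) == y:                                # keep consistent candidates
            candidates.append(x)
    return candidates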

Analysis
Number of invocations of A:
  2^k (guesses) ∙ n (positions) ∙ (2^k − 1) (subsets) = poly(n, 1/ε) ≈ n³/ε⁴
Size of the resulting list of candidates for x:
  for each guess of z_1, z_2, …, z_k there is a unique x, so 2^k = poly(n, 1/ε) ≈ n/ε² candidates

Reducing the size of the list of candidates
Idea: bootstrap. Given any r ∈ {0,1}^n:
Procedure A'(y,r):
- Choose r_1, r_2, …, r_k ∈_R {0,1}^n
- Guess for j = 1, 2, …, k the value z_j = r_j∙x; go over all 2^k possibilities
- For each nonempty subset S ⊆ {1,…,k}:
  – let r_S = ∑_{j∈S} r_j
  – the implied guess is z_S = ∑_{j∈S} z_j
  – run A(y, r − r_S)
- Output the majority value of {z_S + A(y, r − r_S)}
For 2^k = 1/ε² the probability of error is, say, 1/8
Fix the same r_1, r_2, …, r_k for subsequent executions: they are good for 7/8 of the r's
Now run warm-up (2); the size of the resulting list of candidates for x is ≈ 1/ε²

Application: Diffie-Hellman
The Diffie-Hellman assumption: let G be a group and g an element in G. Given g, a=g^x and b=g^y for random x and y, it is hard to find c=g^{xy}; that is, the probability of a poly-time machine outputting g^{xy} is negligible
- To be accurate: stated for a sequence of groups
- Note: we don't know how to verify, given c', whether it is equal to g^{xy}
Homework: show that under the DH assumption, given a=g^x, b=g^y and r ∈ {0,1}^n, no polynomial-time machine can guess r∙g^{xy} with advantage 1/poly, for random x, y and r
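To fix notation, a toy sketch of the exchange together with the inner-product bit from the homework; the group parameters (P = 101, generator 2) are illustrative assumptions with no security whatsoever.

import random

P, GEN = 101, 2                             # toy group Z_101^*, generator 2
n = P.bit_length()

def ip_bit(value: int, r: int) -> int:
    # inner product of the bit representations, mod 2
    return bin(value & r).count("1") % 2

x, y = random.randrange(1, P - 1), random.randrange(1, P - 1)
a, b = pow(GEN, x, P), pow(GEN, y, P)       # public: a = g^x, b = g^y
r = random.getrandbits(n)                   # public randomness
shared = pow(b, x, P)                       # Alice computes g^{xy}
assert shared == pow(a, y, P)               # ... and so does Bob
key_bit = ip_bit(shared, r)                 # the hard-to-guess bit r . g^{xy}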