1 Provable Security
Sebastian Faust, Ruhr-Universität Bochum, Germany

2 Cryptography: past and present
Cryptography in the past ≈ securing communication: Alice and Bob agree on a secret key k, Alice encrypts (C = Enc(k, m)), Bob decrypts (m = Dec(k, C)), and the adversary who intercepts the ciphertext learns nothing about m.
Modern cryptography is much more than encryption: public-key cryptography, signature schemes, key agreement, zero-knowledge, multiparty computation, e-cash, electronic voting, electronic auctions, mental poker, ... (from the seventies until now).

3 How to analyze security?
One approach: analyze security with respect to one concrete attack. Cryptoscheme 1 is secure against attack 1; the adversary may find a new attack; we fix the scheme and obtain Cryptoscheme 2, secure against the new attack; and so on. This resembles a cat-and-mouse game.
Goal of modern cryptography: hopefully stop the cat-and-mouse game by showing security against broad classes of adversaries. One important tool: security proofs.

4 Why security proofs?
In many areas of computer science, "proofs" are not essential: instead of proving that an algorithm is efficient, one can simply simulate its behavior on "typical" inputs. In cryptography this is not true. Why? Because the notion of a "typical adversary" makes little sense. Proofs are useful! How does it work?

5 Provable Security: 1. Security definition
What security property shall the scheme achieve? Example: the ciphertext Enc(K, message) shall "hide" the message.

6 Provable Security: definition, assumption, proof
1. Security definition: what security property shall the scheme achieve?
2. Assumptions: what assumptions are needed for security?
3. Proof: prove that the scheme satisfies the definition if the assumption holds.
This shows: if the assumption holds and the attack is within the model, then the crypto scheme is secure. In other words, the only way to break the scheme is to break the assumption; the scheme is secure against any attack within the model.

7 Why definitions?
We need to know what we want in order to achieve it. Definitions allow us to compare schemes: some definitions are stronger than others. Definitions allow for proofs: a security proof is only meaningful with respect to a definition. Coming up with the right definition is non-trivial. Next: an example for public-key encryption.

8 Public-key encryption (PKE)
A public-key encryption (PKE) scheme is a triple (Gen, Enc, Dec):
Gen is a randomized key-generation algorithm that takes as input a security parameter 1^n and outputs a key pair (pk, sk).
Enc is an encryption algorithm that takes as input the public key pk and a message m, and outputs a ciphertext c.
Dec is a decryption algorithm that takes as input the private key sk and a ciphertext c, and outputs a message m'.
Usage: Bob holds sk and publishes pk; Alice sends c := Enc(pk, m); Bob computes Dec(sk, c).
Correctness: Dec(sk, Enc(pk, m)) = m.
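A minimal sketch of this interface in Python (the class and function names are illustrative, not from any library; textbook RSA from slide 18 would be one instantiation):

```python
from typing import Tuple

class PKE:
    """Abstract public-key encryption scheme (Gen, Enc, Dec); a sketch, not a real API."""

    def gen(self, n: int) -> Tuple[object, object]:
        """Key generation: takes the security parameter n (think 1^n), returns (pk, sk)."""
        raise NotImplementedError

    def enc(self, pk, m):
        """Encryption: takes the public key and a message, returns a ciphertext."""
        raise NotImplementedError

    def dec(self, sk, c):
        """Decryption: takes the secret key and a ciphertext, returns a message."""
        raise NotImplementedError

def correctness_holds(scheme: PKE, n: int, m) -> bool:
    """Correctness requirement: Dec(sk, Enc(pk, m)) == m."""
    pk, sk = scheme.gen(n)
    return scheme.dec(sk, scheme.enc(pk, m)) == m
```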

9 How to define security?
1. The threat model: describes what the adversary can see and do. Here the adversary knows pk and sees the ciphertext c := Enc(pk, m) sent from Alice to Bob, but has no knowledge about sk.
2. The security goal: what does it mean to break the scheme?

10 What is the security goal?
Informal: the adversary does not learn m from pk and c := Enc(pk, m).
Attempt 1: the adversary cannot compute m.
Q: Is this sufficient? A: No! A scheme could leak half of the message bits (m_1 ... m_(|m|/2)); the adversary does not learn the entire m, but would you consider such a scheme secure? This is too weak a security guarantee.

11 What is the security goal?
Attempt 2: the adversary learns nothing about m.
But the adversary may already know something about m without seeing any ciphertext, e.g., that m = "I love you" with probability 0.5 and m = "I don't love you" with probability 0.5; it is not even necessary to learn "something" from c for such prior knowledge to exist. This is too strong a security guarantee: it is unachievable.

12 What is the security goal?
Attempt 3: the adversary learns nothing new about m.
Before seeing the ciphertext, the adversary knows that m = "I love you" with probability 0.5 and m = "I don't love you" with probability 0.5; after seeing pk and c := Enc(pk, m), the adversary still knows exactly this and nothing more. This makes sense. How do we formalize it?

13 The semantic security game
Both the challenger and the adversary get the security parameter 1^n.
1. The challenger generates the challenge keys (pk, sk) = Gen(1^n) and sends pk to the adversary.
2. The adversary chooses two messages m0 and m1 and sends them to the challenger.
3. The challenger flips a challenge bit b in {0,1} and sends c = Enc(pk, m_b).
Before seeing c, the adversary knows only that b = 0 with probability 0.5 and b = 1 with probability 0.5.

14 The semantic security game
Same game as before: the challenger generates (pk, sk) = Gen(1^n) and sends pk, the adversary chooses m0 and m1, the challenger flips b and returns c = Enc(pk, m_b). We want: even after seeing c, the adversary still cannot guess the bit b. How do we formalize this?

15 The semantic security game
As before, but with a final step: 4. The adversary outputs a bit b'. Note that the adversary can always guess correctly with probability 0.5, so we bound its advantage ε over guessing: we want Pr[b = b'] ≤ 1/2 + ε, where the advantage ε must be "very small".

16 A subtlety of the definition…
Consider the following adversary: it chooses messages m0 and m1 of different length. In case b = 0 it receives c = Enc(pk, m0), in case b = 1 it receives c = Enc(pk, m1); since the ciphertext length typically depends on the message length, the adversary can tell the two cases apart and output the correct b', winning with Pr[b = b'] = 1. We therefore need to require |m0| = |m1|.

17 The semantic security game
The full game:
1. The challenger generates (pk, sk) = Gen(1^n) and sends pk.
2. The adversary chooses m0, m1 with |m0| = |m1|.
3. The challenger flips b in {0,1} and sends c = Enc(pk, m_b).
4. The adversary outputs a bit b'.
The informal goal "the adversary learns nothing new from c about m except its length" then "means": Pr[b = b'] ≤ 1/2 + small.
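As a sketch, the game can be written as a small experiment run against a candidate scheme and an adversary (hypothetical interfaces; `scheme` is the PKE sketch from slide 8, and `adversary.choose` / `adversary.guess` are assumed methods):

```python
import secrets

def semantic_security_game(scheme, adversary, n: int) -> bool:
    """One run of the semantic security experiment; returns True iff the adversary guesses b."""
    pk, sk = scheme.gen(n)                  # 1. generate challenge keys
    m0, m1, state = adversary.choose(pk)    # 2. adversary picks two messages
    assert len(m0) == len(m1)               #    of equal length
    b = secrets.randbits(1)                 # 3. flip the challenge bit
    c = scheme.enc(pk, m1 if b else m0)     #    and encrypt m_b
    b_guess = adversary.guess(state, c)     # 4. adversary outputs its guess b'
    return b_guess == b

# Security requirement: for every efficient adversary,
#   Pr[semantic_security_game(scheme, adversary, n)] <= 1/2 + negl(n).
```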

18 Example: Textbook RSA
RSA = (Gen, Enc, Dec):
Key generation Gen(1^n) → (pk, sk): choose N = pq, where p, q are primes with |p| = |q| = n; choose e coprime to φ(N) = (p-1)(q-1); compute d such that ed = 1 (mod φ(N)). Set pk = (N, e) and sk = (N, d).
Encryption Enc_pk(m) for m in Z_N*: c := m^e mod N.
Decryption Dec_sk(c): m' := c^d mod N.
Correctness: c^d mod N = m^(ed) mod N = m^(ed mod φ(N)) mod N = m mod N.
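A toy implementation of textbook RSA in Python, for illustration only (the primes and message below are tiny fixed values chosen for the demo; a real Gen would sample random n-bit primes, and textbook RSA must never be used in practice anyway):

```python
from math import gcd

def textbook_rsa_gen(p: int, q: int, e: int):
    """Toy key generation from given primes p, q."""
    N = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1, "e must be coprime to phi(N)"
    d = pow(e, -1, phi)            # d = e^{-1} mod phi(N)  (needs Python 3.8+)
    return (N, e), (N, d)          # pk, sk

def textbook_rsa_enc(pk, m: int) -> int:
    N, e = pk
    return pow(m, e, N)            # c = m^e mod N

def textbook_rsa_dec(sk, c: int) -> int:
    N, d = sk
    return pow(c, d, N)            # m = c^d mod N

# Correctness check with toy parameters (p = 61, q = 53, e = 17, m = 65):
pk, sk = textbook_rsa_gen(61, 53, 17)
assert textbook_rsa_dec(sk, textbook_rsa_enc(pk, 65)) == 65
```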

19 Is textbook RSA semantically secure?
Play the game with (pk, sk) = ((N, e), (N, d)): the adversary chooses m0, m1 in Z_N*, the challenger flips b and returns c = (m_b)^e mod N, and the adversary outputs b'. How can the adversary win the game? It simply picks any two distinct m0, m1 and computes c0 = (m0)^e mod N and c1 = (m1)^e mod N itself. If c = c0 it outputs b' = 0, otherwise b' = 1. The adversary wins with Pr[b = b'] = 1. What is the problem? Encryption is deterministic! Take-home message: encryption has to be randomized.
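The attack as code, continuing the toy implementation above (a sketch; the candidate messages are arbitrary elements of Z_N*):

```python
def break_textbook_rsa(pk, c: int, m0: int, m1: int) -> int:
    """Distinguishing attack on any deterministic PKE: re-encrypt a candidate
    message under pk and compare with the challenge ciphertext c."""
    return 0 if c == textbook_rsa_enc(pk, m0) else 1

# Playing the game by hand with the toy key from above: the adversary always wins.
m0, m1 = 65, 66
c = textbook_rsa_enc(pk, m1)               # the challenger chose b = 1
assert break_textbook_rsa(pk, c, m0, m1) == 1
```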

20 Randomized RSA encoding
Idea: before encrypting a message, first encode it, adding some randomness r: Enc_(N,e)(m; r) := (m||r)^e mod N.
Advantage: encryption becomes non-deterministic, which prevents the previous attack. This idea is used in real life: RSA-OAEP in the PKCS #1 encryption standard.
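A minimal sketch of the idea (this naive concatenation padding is NOT OAEP and is not secure; it only illustrates why re-encrypting a candidate message no longer matches the challenge ciphertext; R_BITS is an illustrative parameter):

```python
import secrets

R_BITS = 32   # amount of appended randomness; real padding schemes are far more careful

def padded_rsa_enc(pk, m: int) -> int:
    """Encrypt the encoding (m || r) instead of m, with fresh randomness r."""
    N, e = pk
    r = secrets.randbits(R_BITS)
    encoded = (m << R_BITS) | r            # m || r
    assert encoded < N, "message plus randomness must fit below the modulus"
    return pow(encoded, e, N)

def padded_rsa_dec(sk, c: int) -> int:
    N, d = sk
    encoded = pow(c, d, N)
    return encoded >> R_BITS               # strip the randomness, recover m

# Two encryptions of the same message now differ (with overwhelming probability),
# so the comparison attack from the previous slide fails.
```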

21 RSA-OAEP: how to encrypt?
Take the message m, compute Encoding(m; r) with fresh randomness r, and apply the RSA function: the ciphertext is RSA(Encoding(m; r)).

22 RSA-OAEP: how to decrypt?
Given a ciphertext y, compute RSA^(-1)(y) = Encoding(m; r), check that the encoding is valid, and output m.

23 Security of RSA-OAEP?
It is randomized and resists the simple adversary above. Hope: this also covers many realistic attacks. But we do not only want resistance against one attack! We want security against a "large class" of adversaries.

24 Semantic security
The game as before: (pk, sk) = Gen(1^n); the adversary chooses m0, m1 with |m0| = |m1|; the challenger flips b and returns c = Enc(pk, m_b); the adversary outputs b'. We say a PKE scheme is semantically secure if for a "large class" of adversaries Pr[b = b'] ≤ 1/2 + "very small". Two questions remain: what is a "large class", and what is "very small"?

25 Large class of adversaries?
"Large class" = all "efficient" adversaries. In other words: the attacker is computationally bounded. What does that mean? First ideas: 1. "The attacker can use at most 1000 Intel i7 processors for at most 100 years..." 2. "The attacker can buy equipment worth 1 million euros and use it for 30 years..." It is hard to reason formally about statements like these. Alternative?

26 Complexity theory
"Efficient computation" = polynomial-time computable by a probabilistic algorithm.
1. What is polynomial-time computable? On input x of length n = |x|, the algorithm computes its output y within T(n) = O(n^c) steps, for a constant c. (What is a step? See the next slide.)
2. What is a probabilistic algorithm? The algorithm has access to random coins in each step (equivalently, it receives additional randomness r as input). This gives the adversary more power.

27 What is a step? The model of computation
Common model: the poly-time Turing machine. Tapes contain values from a finite alphabet; the heads move left and right depending on the content of the tape, the current state, and the instructions. A poly-time Turing machine makes O(n^c) moves. A probabilistic Turing machine has an additional tape filled with random bits.

28 Is this the right approach?
Advantages: Many models of computation (Turing machines, RAMs, circuits, ...) are "equivalent" up to a "polynomial reduction", so we do not need to specify the details of the model. The formulas for running times get much simpler (we use asymptotics).
Disadvantage: Asymptotic results alone don't tell us anything about the security of concrete systems.
However: Usually one can prove the asymptotic result formally and then argue informally that "the constants are reasonable" (and they can be calculated if one really wants).

29 Semantic security against PPT adversaries
The game as before: (pk, sk) = Gen(1^n); the adversary chooses m0, m1 with |m0| = |m1|; the challenger flips b and returns c = Enc(pk, m_b); the adversary outputs b'. We say a PKE scheme is semantically secure if for all probabilistic polynomial-time (PPT) adversaries Pr[b = b'] ≤ 1/2 + "very small". It remains to say what "very small" means.

30 What does "very small" mean?
"Very small" = "negligible": a function that approaches 0 faster than the inverse of any polynomial. Formally, a function µ : N → R is negligible in n if for every positive integer c there exists an integer N_0 such that for all n > N_0 we have |µ(n)| < 1/n^c. We write negl(n) for such a function.

31 Negligible or not?
f(n) := n^(-2): No; the inverse polynomial n^(-3) is always smaller (for n > 1), so n^(-2) never drops below it.
f(n) := 2^(-n/2): Yes; for sufficiently large n it drops below every inverse polynomial.
f(n) := n^(-1000): No; n^(-1001) is always smaller, so n^(-1000) never drops below it.
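A quick numerical illustration of the quiz (the sample points and the exponent 10 are chosen only for demonstration):

```python
# 2^(-n/2) eventually drops below any fixed inverse polynomial, here n^(-10) ...
for n in (50, 100, 200, 400):
    print(n, 2.0 ** (-n / 2), n ** -10.0)
# ... at n = 50 and 100 it is still larger, at n = 200 and 400 it is (far) smaller,
# and it stays smaller from then on.

# In contrast, n^(-2) >= n^(-3) for every n >= 1, so n^(-2) is not negligible.
assert all(n ** -2.0 >= n ** -3.0 for n in range(1, 1000))
```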

32 Semantic security (final definition)
The game as before: (pk, sk) = Gen(1^n); the adversary chooses m0, m1 with |m0| = |m1|; the challenger flips b and returns c = Enc(pk, m_b); the adversary outputs b'. We say a PKE scheme is semantically secure if for all PPT adversaries Pr[b = b'] ≤ 1/2 + negl(n). Conversely, a successful break is a PPT adversary whose advantage is at least n^(-c) for some constant c, i.e., non-negligible.

33 How can we use the definition?
What counts as a successful break? Let the security parameter n be the length of the secret key sk, and suppose sk is a random element of {0,1}^n. Consider an adversary that guesses sk: it is correct only with probability 2^(-n), which is negligible. Consider an adversary that enumerates all possible keys (a "brute force attack"): this takes time 2^n, which is exponential, so this adversary is not PPT. How can we use the definition?

34 Provable Security
1. Security definition: what security property shall the scheme achieve?
2. Assumptions: what assumptions are needed for security?
3. Proof: prove that the scheme is secure against all PPT adversaries.

35 How to reason about all PPT adversaries?
We want: for all PPT adversaries, Pr[b = b'] ≤ 1/2 + negl(n).
First attempt: enumerate over all possible PPT adversaries. Not possible: there are too many!
Second attempt: base security on an assumption. The assumption is required to hold for all PPT adversaries, and the proof shows that this implies the scheme is secure against all PPT adversaries.

36 Provable security is about relations between assumptions and the security of cryptoschemes
Statement: if some "computational assumption A" holds (this we have to "believe"), then scheme X is secure (this we prove). Examples of A: "factoring is hard", the "RSA assumption". Examples of X: "semantic security" of an encryption scheme.

37 Assumptions: properties & example
An assumption should be simple and universal, well-understood and easy to analyze.
Example: "factoring is hard". The oracle takes the security parameter 1^n, chooses N = pq where p and q are random primes with |p| = |q| = n, and sends N to the adversary. Assumption: every PPT algorithm computes p and q with at most negligible probability negl(n). Factoring has been studied for centuries!

38 Is factoring necessary for RSA?
Yes: if factoring were easy, we could invert (and hence decrypt). In other words, "RSA semantically secure" implies "factoring must be hard".
Proof by reduction: given an algorithm that factors large integers in PPT, we build an adversary that breaks semantic security in PPT. The adversary receives pk = (e, N = pq), factors N into p and q, computes φ(N) = (p-1)(q-1) and d = e^(-1) mod φ(N), chooses any m0, m1, receives the challenge c = Enc(pk, m_b), decrypts m' = Dec_(d,N)(c), and outputs 0 if m' = m0, else 1. Then Pr[b = b'] = Pr[the factoring algorithm succeeds], and if the factoring algorithm runs in PPT, then so does the adversary.
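A sketch of this reduction in Python, where `factor` stands for the assumed PPT factoring algorithm (a hypothetical callback; the point is only that everything after factoring is easy):

```python
def break_semantic_security_with_factoring(pk, c: int, m0: int, m1: int, factor) -> int:
    """Given an oracle factor(N) -> (p, q), decrypt the challenge ciphertext and win the game."""
    N, e = pk
    p, q = factor(N)                       # the assumed factoring algorithm
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)                    # recover the secret exponent
    m = pow(c, d, N)                       # decrypt the challenge ciphertext
    return 0 if m == m0 else 1             # output the guess b'
```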

39 Is hardness of factoring sufficient?
We have: "RSA-OAEP semantically secure" implies "factoring is hard". Does the converse hold, i.e., does "factoring is hard" imply "RSA-OAEP semantically secure"? More generally: can we use the RSA function to build semantically secure encryption?

40 Rest of the talk
Goal: build semantically secure encryption based on the RSA assumption. Steps: (1) the RSA assumption and hardcore bits; (2) hardcore bits ⇒ semantic security; (3) RSA assumption ⇒ existence of hardcore bits. Together: RSA assumption ⇒ semantic security.

41 RSA assumption (Game 1)
The oracle takes the security parameter 1^k and chooses N = pq where p and q are random primes with |p| = |q| = k, y a random element of Z_N*, and e a random element of Z_φ(N)*. It sends (y, e, N) to the adversary, who outputs x. The adversary wins if x = RSA^(-1)_(e,N)(y) = y^d mod N.
RSA assumption: all PPT adversaries win the above game with at most negligible probability.
Note: factoring is at least as hard as inverting RSA, so the RSA assumption is a stronger assumption than the hardness of factoring.

42 Hardcore bits of RSA
The RSA assumption says: given (N, e, y), it is hard to compute x := y^d. But maybe it is easy to compute some predicate f(x) of x? Indeed, some predicates are easy: for example, the Jacobi symbol, since Jacobi(x) = Jacobi(y). Hardcore bits are the "bits that are hardest to compute". A hardcore bit of RSA is the least significant bit of x, in other words LSB(x) = x mod 2.

43 Hardcore bit: Game 2
The oracle takes the security parameter 1^k and chooses N = pq where p and q are random primes with |p| = |q| = k, y a random element of Z_N*, and e a random element of Z_φ(N)*. It sends (y, e, N) to the adversary, who outputs a bit b. The adversary wins if b is the least significant bit of x = RSA^(-1)_(e,N)(y) mod N. We say that LSB is a hardcore bit of the RSA function if for all PPT adversaries Pr[b = LSB(x)] ≤ 1/2 + negl(k).

44 Rest of the talk
Goal: build semantically secure encryption based on the RSA assumption. Steps: (1) the RSA assumption and hardcore bits; (2) hardcore bits ⇒ semantic security; (3) RSA assumption ⇒ existence of hardcore bits. Next: step (2).

45 Why are hardcore bits useful?
1-bit encryption from the RSA hardcore bit, with public key (N, e) and private key (N, d):
Enc1_(N,e)(b) = x^e mod N, where x ∈ Z_N* is random such that LSB(x) = b.
Dec1_(N,d)(y) = LSB(y^d mod N).
Note the large ciphertext blow-up: to encrypt a single bit we need a full element of Z_N*.
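A sketch of this 1-bit scheme on top of the toy RSA key from slide 18 (illustration only; the toy modulus offers no security whatsoever):

```python
import secrets
from math import gcd

def enc1(pk, bit: int) -> int:
    """Encrypt one bit: sample a random x in Z_N* with LSB(x) = bit, output x^e mod N."""
    N, e = pk
    while True:
        x = secrets.randbelow(N - 1) + 1
        if gcd(x, N) == 1 and x % 2 == bit:
            return pow(x, e, N)

def dec1(sk, y: int) -> int:
    """Decrypt: recover x = y^d mod N and output its least significant bit."""
    N, d = sk
    return pow(y, d, N) % 2

assert dec1(sk, enc1(pk, 0)) == 0 and dec1(sk, enc1(pk, 1)) == 1
```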

46 LSB is hardcore ⇒ semantically secure
Claim: if LSB is a hardcore bit for the RSA function, then Enc1 is semantically secure.
Proof by reduction: given an adversary (Charlie) that breaks the semantic security of Enc1 in PPT, we build an adversary (Carol) that wins Game 2 in PPT. Carol receives (y, e, N) with y = x^e and simulates Charlie's environment: she sends him the public key (e, N), lets him choose the messages 0 and 1, and hands him y as the challenge ciphertext. Charlie outputs a bit b', and Carol outputs the same b'. If Charlie wins, i.e., b' is correct, then LSB(x) = b', so Carol wins Game 2.

47 Rest of the talk
Goal: build semantically secure encryption based on the RSA assumption. Steps: (1) the RSA assumption and hardcore bits; (2) hardcore bits ⇒ semantic security; (3) RSA assumption ⇒ existence of hardcore bits. Next: step (3).

48 RSA assumption ⇒ hardcore bit
Theorem: suppose the RSA assumption holds. Then the LSB of the RSA function is a hardcore bit.
Proof by reduction: suppose we are given a PPT adversary that extracts LSB(x) from y = x^e. We build a PPT adversary that inverts the RSA function, contradicting the RSA assumption. For simplicity, suppose the LSB adversary succeeds with probability 1 (not just non-negligibly better than guessing). The question: how do we recover all bits of x from an oracle that returns only one bit?

49 Outline of the reduction
Carol plays Game 1 (inverting RSA): she receives (y, e, N) and must output x = y^d. Charlie plays Game 2 (the LSB oracle): on input (y_i, e, N) he returns LSB(x_i), where x_i = (y_i)^d. Carol queries Charlie on related inputs (y_1, e, N), (y_2, e, N), ..., (y_t, e, N), collects LSB(x_1), ..., LSB(x_t), and reconstructs x = y^d from these bits.

50 First observation
Charlie can be used to compute the LSB of x := y^d mod N. Can he also be used to compute the LSB of c · x mod N = c · y^d (for some c chosen by Carol)? Yes: Carol queries Charlie on (c^e · y, e, N), and Charlie outputs b' = LSB((c^e · y)^d) = LSB(c^(ed) · y^d) = LSB(c · y^d) = LSB(c · x). This works because c^e · y is still a random value. How can Carol use this observation?
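The identity behind this observation, checked against the toy RSA key from slide 18 (x and c are arbitrary illustrative elements of Z_N*; a real Carol of course knows neither x nor d):

```python
N, e = pk
_, d = sk
x, c = 42, 7                               # arbitrary elements of Z_N*
y = pow(x, e, N)                           # the challenge y = x^e mod N
query = (pow(c, e, N) * y) % N             # Carol sends c^e * y to Charlie
assert pow(query, d, N) == (c * x) % N     # so Charlie's LSB answer equals LSB(c * x mod N)
```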

51 Outline of the reduction
Carol receives (y, e, N) with x = y^d and queries Charlie on (2^e · y, e, N), (4^e · y, e, N), (8^e · y, e, N), ..., obtaining LSB(2x), LSB(4x), LSB(8x), ... (all mod N), since for example ((2^e) · y)^d = 2^(ed) · y^d = 2x mod N. Why is this useful?

52 How is it useful? Querying 2^e · y
Remember that N = pq is odd and x ∈ {1, ..., N-1}. If x ≤ (N-1)/2, then 2x < N, so 2x mod N = 2x, which is even. If x > (N-1)/2, then 2x mod N = 2x - N, which is odd. So LSB(2x mod N) reveals which half of [1, N-1] contains x. Moral: x ∈ [1, ..., (N-1)/2] iff 2x mod N is even.

53 How is it useful? Querying 4^e · y
Depending on which quarter of [1, N-1] contains x, we have 4x mod N = 4x, 4x - N, 4x - 2N, or 4x - 3N, and these values are alternately even, odd, even, odd. Moral: x ∈ [1, ..., (N-1)/4] ∪ [(N-1)/2 + 1, ..., 3(N-1)/4] iff 4x mod N is even. Combined with LSB(2x mod N), this pins x down to one quarter of the interval.

54 How is it useful? Querying 8^e · y
Similarly, depending on which eighth of [1, N-1] contains x, we have 8x mod N = 8x - jN for j = 0, ..., 7, and these values are alternately even and odd. Moral: x ∈ [1, ..., (N-1)/8] ∪ [2(N-1)/8 + 1, ..., 3(N-1)/8] ∪ [4(N-1)/8 + 1, ..., 5(N-1)/8] ∪ [6(N-1)/8 + 1, ..., 7(N-1)/8] iff 8x mod N is even. Each further query halves the candidate set again.

55 So we can use bisection
Start with the interval [1, N-1]. Calculate LSB((2^e·y)^d) = LSB(2x mod N) to learn which half contains x; calculate LSB((4^e·y)^d) = LSB(4x mod N) to learn which quarter; then LSB(8x mod N), LSB(16x mod N), and so on. After about log2(N) queries the remaining interval contains a single integer: x is recovered.
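A sketch of the whole reduction as code, assuming a perfect LSB oracle. The callback `lsb_of_cx` stands in for Charlie: given c, it returns LSB(c·x mod N), which Carol would obtain by querying Charlie on (c^e · y, e, N). Exact interval arithmetic via Fraction avoids rounding issues:

```python
from fractions import Fraction
import math

def invert_rsa_with_lsb_oracle(N: int, lsb_of_cx) -> int:
    """Recover x = y^d mod N by bisection, given an oracle for LSB(c * x mod N).
    Assumes a perfect oracle; a noisy oracle needs extra work, as in the full proof."""
    lo, hi = Fraction(0), Fraction(N)      # invariant: lo <= x < hi
    c = 1
    while hi - lo >= 1:
        c = (2 * c) % N                    # c = 2^i mod N in iteration i
        bit = lsb_of_cx(c)                 # Charlie's answer: LSB(2^i * x mod N)
        mid = (lo + hi) / 2
        if bit == 0:
            hi = mid                       # even: x lies in the lower half of [lo, hi)
        else:
            lo = mid                       # odd: x lies in the upper half
    return math.ceil(lo)                   # the unique integer left in [lo, hi)

# Toy demonstration with the key from slide 18 and a "cheating" Charlie who knows x.
x = 42
charlie = lambda c: (c * x) % pk[0] % 2    # stands in for querying Charlie on (c^e * y, e, N)
assert invert_rsa_with_lsb_oracle(pk[0], charlie) == x
```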

56 Putting things together
Hardness of the RSA problem (the RSA assumption) ⇒ existence of hardcore bits ⇒ semantic security of the encryption scheme.

57 Conclusions
Provable security is a large area of research: more powerful threat models (active adversaries), many other primitives (signatures, symmetric crypto), many nice techniques.
Is provable security useful in practice? Partly yes: it helps to gain confidence in security (e.g., some standards are proven secure) and it helps to reason about attacks at design time.
Are provably secure schemes unbreakable?

58 No! Crypto implementations get broken
Example: acoustic cryptanalysis (CRYPTO 2014). Computers emit noise due to the vibration of their components, and if a computer computes with a secret key, the noise pattern depends on the key. An attacker sends an encrypted message s, the victim decrypts s with the secret key, and the attacker records the noise and extracts the secret key from the noise pattern. What is wrong? The proof relies on an idealized trust model.

59 Model does not cover all real-world attacks!
(Figure: a gate as idealized in the model.)

60 Model does not cover all real-world attacks!
(Figure: the same gate in reality.)

61 Conclusions: are provably secure schemes unbreakable?
It depends on the threat model! Thanks to Stefan Dziembowski for providing some of the slides of this talk.

