
1 Dov Gordon & Jonathan Katz, University of Maryland

2 What is Fairness? Before the days of secure computation (way back in 1980), it meant a fair exchange: of two signatures, of two secret keys, of two bits, or of certified mail. Over time, the notion developed to include general computation: F(x,y): X × Y → Z^(1) × Z^(2).

3 Exchanging Signatures [Even-Yacobi80]. Impossible if we require both players to receive the signature at the same time. [Cartoon: the parties exchange candidate signatures, repeatedly asking "Does that verify? No." until one party finally receives a signature that verifies and walks away, leaving the other with nothing.] Impossible more generally: later, in 1986, Cleve would show that fairly exchanging even two bits is impossible!

4 Gradual Release: Reveal the output bit by bit (each revealed bit halves the brute-force time). Prove each bit is correct and not junk. Assume that the resulting partial problem is still (relatively) hard. Notion of fairness: almost equal time to recover the output after an early abort. [Blum83, Even81, Goldreich83, EGL83, Yao86, GHY87, D95, BN00, P03, GMPY06]
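
As a toy illustration (not from the slides), here is a minimal Python sketch of the gradual-release idea: each revealed bit of a k-bit secret halves the remaining brute-force search space, so an abort part-way through leaves both parties with roughly comparable work. The function names and the 16-bit "secret" are my own.

```python
import secrets

def gradual_release(secret: int, bit_length: int):
    """Yield the bits of `secret` from most to least significant,
    together with the size of the remaining brute-force search space."""
    for i in range(bit_length):
        bit = (secret >> (bit_length - 1 - i)) & 1
        remaining = 2 ** (bit_length - 1 - i)  # values still consistent with the revealed prefix
        yield bit, remaining

# Example: a 16-bit "key" released bit by bit.
secret = secrets.randbelow(2 ** 16)
for round_no, (bit, remaining) in enumerate(gradual_release(secret, 16), start=1):
    print(f"round {round_no}: revealed bit {bit}, brute-force space now {remaining}")
```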

5 Gradual Convergence: Reduce the noise, increase the confidence (the probability of correctness increases over time). E.g., result_i = output ⊕ c_i, where c_i → 0 with increasing i. Removes assumptions about resources. Notion of fairness: almost equal confidence at the time of an early abort. [LMR83, VV83, BG89, GL90]
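
A hypothetical sketch of the gradual-convergence idea (the noise schedule and function names are my own, not from the slides): in round i a party receives the true output XORed with a noise bit whose probability of being 1 shrinks as i grows, so confidence in the received value increases over time.

```python
import random

def noisy_output(true_output: int, i: int, r: int) -> int:
    """Round-i result: the true output XORed with a noise bit c_i,
    where Pr[c_i = 1] = (1/2) * (1 - i/r) shrinks to 0 as i approaches r."""
    p_noise = 0.5 * (1 - i / r)
    c_i = 1 if random.random() < p_noise else 0
    return true_output ^ c_i

r, true_out = 20, 1
for i in range(1, r + 1):
    print(f"round {i:2d}: result = {noisy_output(true_out, i, r)}")
```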

6 Drawbacks (of release and convergence): Key decisions are external to the protocol: should a player brute-force the output? Should a player trust the output? If the adversary knows how these decisions are made, it can violate fairness. Fairness can also be violated by an adversary who is willing to run slightly longer than the honest parties are willing to run, or to accept slightly less confidence in the output. There is no a priori bound on the honest parties' running time, and the approach assumes known computational resources for each party. If the adversary has prior knowledge, it will receive useful output first.

7 Our Results We demonstrate a new framework for partial fairness. We place the problem in the real/ideal paradigm. We demonstrate feasibility for a large class of functions. We show that our feasibility result is tight.

8 Defining Security (2 parties). [Diagram: in the real world, the parties run the protocol on inputs x and y, each obtaining a view and an output; in the ideal world, the parties send x and y to a trusted party, which returns F_1(x, y) and F_2(x, y).]

9 Defining Security (2 parties). [Diagram: the real-world view and output must be indistinguishable from the ideal-world view and F_1(x, y).] This is security with complete fairness.

10 The Standard Relaxation. [Diagram: the real world is as before; in the ideal world, the adversary receives F_1(x, y) first and then tells the trusted party whether to continue (deliver the honest party's output) or abort.]

11 The Standard Relaxation. [Diagram: the real-world view and output must be indistinguishable from the ideal-world view and output (F_1(x, y) or abort).] This is security with abort. Note: no fairness at all!

12 Our Relaxation: Stick with the real/ideal paradigm, but require only that the real world and a relaxed ideal world are ε-indistinguishable*. (* I.e., for all PPT A, |Pr[A(real)=1] − Pr[A(ideal)=1]| < ε(n) + negl. Similar to [GL01], [Katz07].) This ε-security sits between full security, which offers complete fairness but can only be achieved for a limited set of functions, and security with abort, which can be achieved for any poly-time function but offers no fairness!

13 Protocol 1. To compute F(x,y): X × Y → Z^(1) × Z^(2): a functionality ShareGen takes x and y, generates values a_1, …, a_r and b_1, …, b_r, and secret-shares each value between the two parties as a_i^(1), a_i^(2) and b_i^(1), b_i^(2), with a_i^(1) ⊕ a_i^(2) = a_i and b_i^(1) ⊕ b_i^(2) = b_i. Here a_i is the output of Alice if Bob aborts in round i+1, and b_i is the output of Bob if Alice aborts in round i+1.
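
A minimal sketch of the XOR secret sharing used for the a_i and b_i values (illustration only; in the protocol ShareGen is computed securely rather than by a local routine, and the 4-bit value is my own example):

```python
import secrets

def xor_share(value: int, bit_length: int):
    """Split `value` into two XOR shares such that share1 ^ share2 == value."""
    share1 = secrets.randbits(bit_length)
    share2 = share1 ^ value
    return share1, share2

a_i = 0b1011
s1, s2 = xor_share(a_i, 4)
assert s1 ^ s2 == a_i
print(f"a_i = {a_i:04b}, share1 = {s1:04b}, share2 = {s2:04b}")
```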

14 Protocol 1 (similar to: [GHKL08], [MNS09]). [Diagram: in each round i = 1, …, r, the parties exchange their shares so that Alice reconstructs a_i and Bob reconstructs b_i.]

15 Protocol 1. [Diagram: if a party aborts in round i, the other party outputs its most recently reconstructed backup value; e.g., if Alice aborts after learning a_i, Bob outputs b_{i−1}.]

16 Protocol 1. How are the a_i and b_i chosen? Choose a round i* uniformly at random. For i ≥ i*: a_i = F_1(x,y) and b_i = F_2(x,y). For i < i*: a_i = F_1(x,Y) where Y is uniform, and b_i = F_2(X,y) where X is uniform. How do we choose r?
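
The following Python sketch (my own illustration, with a toy function F over small domains; not the authors' code) shows how the a_i and b_i sequences described above could be generated: a round i* is chosen uniformly, earlier rounds carry outputs computed on a random counterpart input, and rounds from i* onward carry the true outputs.

```python
import random

def make_backup_values(F, x, y, X_domain, Y_domain, r):
    """Generate the backup sequences (a_1..a_r, b_1..b_r) for Protocol 1."""
    i_star = random.randint(1, r)                    # round i*, chosen uniformly
    a, b = [], []
    for i in range(1, r + 1):
        if i < i_star:
            a.append(F(x, random.choice(Y_domain))[0])   # F_1(x, Y) for uniform Y
            b.append(F(random.choice(X_domain), y)[1])   # F_2(X, y) for uniform X
        else:
            out = F(x, y)
            a.append(out[0])                             # a_i = F_1(x, y)
            b.append(out[1])                             # b_i = F_2(x, y)
    return a, b, i_star

# Toy example: both parties learn whether x == y.
F = lambda x, y: (int(x == y), int(x == y))
a, b, i_star = make_backup_values(F, 3, 5, list(range(8)), list(range(8)), r=10)
print(f"i* = {i_star}, a = {a}, b = {b}")
```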

17 Protocol 1: analysis. What are the odds that Alice aborts exactly in round i*? If she knows nothing about F_1(x, y), it is at most 1/r. But this is not a reasonable assumption! Suppose Alice knows the output is either z or z′. The probability that F_1(x, Y) = z or F_1(x, Y) = z′ may be small, so identifying F_1(x, y) in round i* may be simple. [Figure: among the early values a_1, a_2, a_3, …, the first value equal to z or z′ stands out and reveals round i*.]

18 A Key Lemma. Consider the following game, parameterized by α ∈ (0,1] and r ≥ 1. Fix distributions D_1 and D_2 such that for every z, Pr[D_1 = z] ≥ α · Pr[D_2 = z]. The challenger chooses i* uniformly from {1, …, r}; for i < i* it chooses a_i according to D_1, and for i ≥ i* it chooses a_i according to D_2. For i = 1 to r, it gives a_i to the adversary in iteration i. The adversary wins if it stops the game in iteration i*. Lemma: Pr[Win] ≤ 1/αr.
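
A quick empirical check of the lemma (a sketch with my own example distributions, not a proof): take D_1 uniform over {0, …, m−1} and D_2 a point mass at 0, so α = 1/m, and let the adversary follow the natural strategy of stopping at the first sample that looks like it came from D_2. Its win rate stays within the 1/(αr) = m/r bound.

```python
import random

def play_game(r: int, m: int) -> bool:
    """One run of the game with D1 = Uniform{0..m-1}, D2 = point mass at 0 (alpha = 1/m).
    Adversary strategy: stop at the first sample equal to 0."""
    i_star = random.randint(1, r)
    for i in range(1, r + 1):
        a_i = random.randrange(m) if i < i_star else 0
        if a_i == 0:                 # adversary guesses "this is round i*"
            return i == i_star       # wins only if it stopped exactly at i*
    return False

r, m, trials = 100, 10, 200_000
wins = sum(play_game(r, m) for _ in range(trials))
print(f"empirical win rate {wins / trials:.4f}  vs  bound 1/(alpha*r) = {m / r:.4f}")
```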

19 Protocol 1: analysis. Take D_1 = F_1(x, Y) for uniform Y, and D_2 = F_1(x, y). Then Pr[D_1 = F_1(x, y)] ≥ Pr[Y = y] = 1/|Y|, so α = 1/|Y|. The probability that P_1 aborts in iteration i* is at most |Y|/r. Setting r = |Y| · ε^{-1} gives ε-security. Need |Y| to have polynomial size, and need ε to be 1/poly.
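
A short worked example with hypothetical numbers (the concrete |Y| and ε are my own): plugging α = 1/|Y| into the lemma and choosing r = |Y|/ε makes the abort advantage exactly ε.

```python
# Hypothetical parameters: |Y| = 1000 possible inputs for the other party, target epsilon = 0.01.
Y_size, eps = 1000, 0.01
alpha = 1 / Y_size                 # Pr[D1 = F1(x,y)] >= Pr[Y = y] = 1/|Y|
r = int(Y_size / eps)              # r = |Y| * eps^{-1}
abort_advantage = 1 / (alpha * r)  # lemma bound = |Y|/r = eps
print(f"alpha = {alpha}, rounds r = {r}, abort advantage <= {abort_advantage}")
```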

20 Protocol 1: summary. Theorem: Fix a function F and ε = 1/poly. If F has a poly-size domain (for at least one player), then there is an ε-secure protocol computing F (under standard assumptions). The protocol is private, and it is also secure-with-abort (after a small tweak).

21 Handling large domains. With the previous approach, α = 1/|Y| becomes negligibly small, which causes r to become exponentially large. Solution (when the range of Alice's function is poly-size): with probability 1−ε, choose a_i as before (a_i = F_1(x, Y)); with probability ε, choose a_i ∈ Z^(1) uniformly. Now r is polynomial again! Even if Alice knows the output is z or z′, Pr[a_i = z] ≥ ε/|Z^(1)|, so α = ε/|Z^(1)|.
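
A sketch (my own toy illustration) of the modified way of choosing a_i for rounds i < i* when only the range is small: with probability ε the value is drawn uniformly from the range Z^(1), which guarantees Pr[a_i = z] ≥ ε/|Z^(1)| for every z even when the domain Y is huge.

```python
import random

def choose_a_i(F1, x, Y_domain, Z1_range, eps: float):
    """Backup value for a round i < i* when the domain is large but the range is small."""
    if random.random() < eps:
        return random.choice(Z1_range)          # uniform over the range Z^(1)
    return F1(x, random.choice(Y_domain))       # as before: F_1(x, Y) for uniform Y

# Toy example: F_1 outputs a single bit, so Z^(1) = {0, 1}, while Y is huge.
F1 = lambda x, y: (x * y) & 1
print([choose_a_i(F1, 3, range(10**6), [0, 1], eps=0.1) for _ in range(10)])
```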

22 Protocol 2: summary. Theorem: Fix a function F and ε = 1/poly. If F has a poly-size range (for at least one player), then there is an ε-secure protocol computing F (under standard assumptions). The protocol is private, but it is no longer secure-with-abort.

23 Our Results are Tight (w.r.t. I/O size). Theorem: There exists a function with super-polynomial size domain and range that cannot be efficiently computed with ε-security. Theorem: There exists a function with super-polynomial size domain and poly-size range that cannot be computed with ε-security and security-with-abort simultaneously.

24 Summary. We suggest a clean notion of partial fairness, based on the real/ideal paradigm, in which parties have well-defined outputs at all times. We show feasibility for functions with poly-size domain/range, and infeasibility for certain functions outside that class. Open: can we find a definition of partial fairness that has the above properties and can be achieved for all functions?

25 Thank You!

26 Gradual Convergence: equality. Let F(x,y) = 1 if x = y and 0 if x ≠ y, and suppose b = F(x,y) = 0 w.h.p. Alice can bias Bob toward outputting 1: she aborts early, hoping she gets lucky, after Bob has seen only noisy values such as b ⊕ c_1 = 0, b ⊕ c_2 = 1, b ⊕ c_3 = 1. For small i, c_i has a lot of entropy, so Bob's output is (almost) random and he can't trust it. Accordingly, [BG89] instructs Bob to always respond to such an abort by aborting as well. But what if Alice runs until the last round?!
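
A self-contained sketch of the bias attack described above, reusing the earlier hypothetical noise schedule (all parameters are my own): if Alice aborts in an early round when the true output is 0 and Bob trusts his noisy round value anyway, he outputs 1 nearly half the time.

```python
import random

def bob_round_value(true_output: int, i: int, r: int) -> int:
    """Bob's round-i value: the true output XOR a noise bit c_i with Pr[c_i = 1] = (1/2)*(1 - i/r)."""
    return true_output ^ (1 if random.random() < 0.5 * (1 - i / r) else 0)

r, true_output, abort_round, trials = 50, 0, 2, 100_000
biased = sum(bob_round_value(true_output, abort_round, r) for _ in range(trials))
print(f"if Bob trusts the round-{abort_round} value, he outputs 1 in "
      f"{biased / trials:.2%} of runs even though F(x,y) = 0")
```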

27 Gradual Convergence: drawbacks. If parties always trust their output, the adversary can induce a bias. The decision of whether an honest party should trust the output is external to the protocol: if it is made explicit, the adversary can abort just at that point. If the adversary is happy with less confidence, he can receive useful output alone; if he has higher confidence a priori, he will receive useful output first.

