1 Rational Cryptography Some Recent Results Jonathan Katz University of Maryland

2 Rational cryptography “Applying cryptography to game theory”: when can a cryptographic protocol be used to implement a game involving a trusted party? [B92, DHR00, LMS05, ILM05, ADGH06, …] “Applying game theory to cryptography”: how do we deal with rational, computationally bounded parties in cryptographic protocols? [HT04, GK06, LT06, KN08, ACH11, …]

3 The dream? We want protocols that are resilient to malicious behavior. We believe that (most) parties act rationally, i.e., in their own self-interest. Can we get “better” cryptographic protocols by focusing on rational adversaries rather than arbitrary adversaries?

4 The dream? Can we construct more efficient protocols if we assume a rational adversary (with known utilities)? Can we circumvent impossibility results if we assume a rational adversary (with known utilities)? YES!

5 Two examples Fairness Two-party setting [Groce-K (Eurocrypt ’12)] The multi-party setting, and other extensions [Beimel-Groce-K-Orlov ‘12] Byzantine agreement / broadcast [Groce-Thiruvengadam-K-Zikas (ICALP ’12)]

6 Fairness

7 Two parties computing a function f using some protocol (Intuitively) the protocol is fair if either both parties learn the output, or neither party does Note: fairness is non-trivial even without privacy, and even in the fail-stop setting

8 The challenge? [Figure: two parties holding inputs x and y run a protocol to compute f(x, y)]

9 Impossibility of fairness [Cleve ’86]: Fair computation of boolean XOR is impossible

10 Dealing with impossibility Fairness for specific functions [GHKL08] Limited positive results known Partial fairness [BG89, GL90, GMPY06, MNS09, GK10, …] Physical assumptions [LMPS04, LMS05, IML05, ILM08] Here: what can be done if we assume rational behavior?

11 Rational fairness Fairness in a rational setting [ACH11] Look at a specific function/utilities/setting Main goal is to explore and compare various definitions of rational fairness Main result is “pessimistic”: boolean XOR can be computed in a rationally fair way, but only with probability of correctness at most ½

12 Consider the following game… Parties run a protocol to compute some function f: 1. Receive inputs x0, x1 from a known distribution D 2. Run the protocol… 3. Output an answer 4. Utilities depend on both parties’ outputs, and the true answer f(x0, x1)

13 Utilities Each party prefers to learn the correct answer, and otherwise prefers that the other party output an incorrect answer. This generalizes the setting of rational secret sharing [HT04, GK06, LT06, ADGH06, KN08, FKN10, …]
Payoffs (Player 0, Player 1); rows: Player 0’s output, columns: Player 1’s output:
                P1 correct   P1 incorrect
P0 correct      (a0, a1)     (b0, c1)
P0 incorrect    (c0, b1)     (d0, d1)
where b0 > a0 ≥ d0 ≥ c0 and b1 > a1 ≥ d1 ≥ c1
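As a quick illustration, the payoff matrix and ordering constraints above can be encoded directly. This is a minimal sketch; the specific numbers (a0 = 1, b0 = 2, …) and the name `payoff` are my own illustrative choices satisfying the slide's ordering, not values from the talk:

```python
# Illustrative numeric utilities satisfying b_j > a_j >= d_j >= c_j.
A0, B0, C0, D0 = 1, 2, -1, 0   # Player 0's a0, b0, c0, d0
A1, B1, C1, D1 = 1, 2, -1, 0   # Player 1's a1, b1, c1, d1

PAYOFF = {
    # (P0 correct?, P1 correct?) -> (utility to P0, utility to P1)
    (True,  True):  (A0, A1),
    (True,  False): (B0, C1),
    (False, True):  (C0, B1),
    (False, False): (D0, D1),
}

def payoff(out0, out1, truth):
    """Utility pair given both parties' outputs and the true answer."""
    return PAYOFF[(out0 == truth, out1 == truth)]

# The ordering required by the slide holds for these numbers.
assert B0 > A0 >= D0 >= C0 and B1 > A1 >= D1 >= C1
```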

14 Deviations? Two settings: Fail-stop: parties can (only) choose to abort the protocol, at any point Byzantine: parties can arbitrarily deviate from the protocol (including changing their input) Parties are computationally bounded

15 Definition Fix f, a distribution D, and utilities for the parties. A protocol π computing f is rationally fair (for f, D, and these utilities) if running π is a (Bayesian) computational Nash equilibrium. Note: stronger equilibrium notions have been considered in other work; we leave these for future work

16 Question For which settings of f, D, and the parties’ utilities do rationally fair protocols exist?

17 Consider the following game… Parties have access to a trusted party computing f: 1. Receive inputs x0, x1 from a known distribution D 2. Send input (or ⊥) to the trusted party; get back the result (or ⊥) 3. Output an answer 4. Utilities depend on both parties’ outputs, and the true answer f(x0, x1)

18 Revisiting [ACH11] The setting of [ACH11]: f = boolean XOR; D = independent, uniform inputs; utilities:
                P1 correct   P1 incorrect
P0 correct      (0, 0)       (1, -1)
P0 incorrect    (-1, 1)      (0, 0)
Evaluating f with a trusted party gives both parties utility 0. They can get the same expected utility by random guessing! The parties have no incentive to run any protocol computing f: running the ideal-world protocol is a Nash equilibrium, but not strict Nash
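A short sanity check of the claim on this slide, under the stated setting (f = XOR, independent uniform inputs, the matrix above): both using the trusted party and both outputting uniform guesses give P0 the same expected utility, 0. The function names here are mine, chosen for illustration:

```python
from fractions import Fraction
from itertools import product

# P0's utilities from the [ACH11] matrix, keyed by
# (P0 correct?, P1 correct?).
U0 = {(True, True): 0, (True, False): 1, (False, True): -1, (False, False): 0}

def trusted_party_utility():
    """P0's expected utility when both parties learn f(x0, x1) = x0 XOR x1."""
    return sum(Fraction(1, 4) * U0[(True, True)]
               for _ in product([0, 1], repeat=2))

def random_guessing_utility():
    """P0's expected utility when both parties skip the protocol and
    output independent uniform guesses."""
    total = Fraction(0)
    for x0, x1, g0, g1 in product([0, 1], repeat=4):
        truth = x0 ^ x1
        total += Fraction(1, 16) * U0[(g0 == truth, g1 == truth)]
    return total
```

Both quantities come out to 0, matching the slide's point that running a protocol offers no strict advantage.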

19 Back to the ideal world To fully define a protocol for the ideal world, we need to specify what a party should output when it receives ⊥ from the trusted party. (cooperate, W0): if a party receives ⊥, it generates its output according to the distribution W0(x0)

20 Definition Fix f, a distribution D, and utilities for the parties. These are incentive compatible if there exist W0, W1 such that ((cooperate, W0), (cooperate, W1)) is a Bayesian strict Nash equilibrium (actually, strictness is only needed for one party)

21 Main result If computing f in the ideal world is a strict Nash equilibrium, then there is a real-world protocol π computing f such that following the protocol is a computational Nash equilibrium. That is: if f, a distribution D, and the utilities are incentive compatible, then there is a protocol π computing f that is rationally fair (for f, D, and the same utilities)

22 The protocol I Use ideas from [GHKL08, MNS09, GK10]. ShareGen: Choose i* from a geometric distribution with parameter p. For each i ≤ n, create values r_{i,0} and r_{i,1}: if i ≥ i*, set r_{i,0} = r_{i,1} = f(x0, x1); if i < i*, choose r_{i,0} and r_{i,1} according to the distributions W0(x0) and W1(x1), respectively. Secret-share each r_{i,j} between P0 and P1
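A minimal Python sketch of ShareGen as described above. The interface is assumed: the names `share` and `share_gen`, the XOR-based sharing, and the truncation of the geometric distribution at n are my simplifications, not details from the talk:

```python
import random

def share(bit):
    """XOR-based 2-out-of-2 secret sharing of a single bit."""
    s0 = random.randrange(2)
    return s0, s0 ^ bit

def share_gen(x0, x1, f, W0, W1, n, p):
    """Sketch of ShareGen.  Samples i* from a geometric distribution
    with parameter p (truncated at n here, a simplification), builds
    per-round values r_{i,0}, r_{i,1}, and secret-shares each between
    P0 and P1.  i* is returned only for testing; in the real protocol
    it stays hidden inside the shares."""
    i_star = 1
    while random.random() >= p and i_star < n:
        i_star += 1
    shares_p0, shares_p1 = [], []
    for i in range(1, n + 1):
        if i >= i_star:
            r0 = r1 = f(x0, x1)        # real output from round i* on
        else:
            r0, r1 = W0(x0), W1(x1)    # independent "dummy" answers
        sa, sb = share(r0), share(r1)
        shares_p0.append((sa[0], sb[0]))  # P0's shares of r_{i,0}, r_{i,1}
        shares_p1.append((sa[1], sb[1]))  # P1's shares of r_{i,0}, r_{i,1}
    return i_star, shares_p0, shares_p1
```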

23 The protocol II Compute ShareGen (unfairly). In each round i, the parties exchange shares: P0 learns r_{i,0} and then P1 learns r_{i,1}. If the other party aborts, output the last value learned. If the protocol finishes, output r_{n,0} and r_{n,1}. Note: correctness holds with all but negligible probability; the protocol can be modified so correctness holds with probability 1
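The share-exchange phase might be sketched as follows, under a simple XOR-based 2-out-of-2 sharing; `run_exchange` and its `abort_round` parameter are hypothetical names for illustration:

```python
def run_exchange(shares_p0, shares_p1, n, abort_round=None):
    """Sketch of the reveal phase.  shares_pX[i-1] holds party X's
    XOR-shares of (r_{i,0}, r_{i,1}).  In round i, P0 reconstructs
    r_{i,0} first, then P1 reconstructs r_{i,1}.  If P0 aborts in
    round i (after learning r_{i,0}), P1 is left with the last value
    it learned, mirroring the rule "output the last value learned"."""
    out0 = out1 = None
    for i in range(1, n + 1):
        out0 = shares_p0[i - 1][0] ^ shares_p1[i - 1][0]  # P0 learns r_{i,0}
        if abort_round == i:
            return out0, out1
        out1 = shares_p0[i - 1][1] ^ shares_p1[i - 1][1]  # P1 learns r_{i,1}
    return out0, out1
```

Note the asymmetry: an abort in round i always leaves P0 one value ahead of P1, which is exactly why the analysis focuses on whether P0 gains by aborting.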

24 Will P0 abort early? Assume P0 is told when i* has passed; aborting afterward cannot increase its utility. Consider round i ≤ i*: If P0 does not abort → utility a0. If P0 aborts: i = i* → utility b0; i < i* → utility strictly less than a0 (because the ideal-world computation is strict Nash)

25 Will P0 abort early? Use W0, W1 with full support (always possible), so aborting before round i* yields expected utility at most a0 − δ for some δ > 0. Expected utility if abort = Pr[i = i*] · b0 + Pr[i < i*] · (a0 − δ). Set p to a small enough constant that this is strictly less than a0
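Plugging in illustrative numbers shows how choosing p works; b0, a0, and delta below are arbitrary values consistent with the constraints (b0 > a0, delta > 0), not values from the talk:

```python
from fractions import Fraction

# Illustrative numbers: b0 > a0, and full-support W0, W1 give some
# delta > 0 gap below a0 when aborting before round i*.
b0, a0, delta = Fraction(2), Fraction(1), Fraction(1, 4)

def expected_abort_utility(p):
    """Pr[i = i*] * b0 + Pr[i < i*] * (a0 - delta), where the current
    round equals i* with probability p."""
    return p * b0 + (1 - p) * (a0 - delta)

# For a small enough constant p, aborting is strictly worse than the
# utility a0 from following the protocol.
p = Fraction(1, 10)
assert expected_abort_utility(p) < a0
```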

26 Summary By setting p = O(1) small enough, we get a protocol π computing f for which following π is a computational Nash equilibrium. Everything extends to the Byzantine case as well, with suitable changes to the protocol

27 Recent extensions [BGKO] More general classes of utility functions Arbitrary functions over the parties’ inputs and outputs Randomized functions Extension to the multi-party setting, with coalitions of arbitrary size

28 Open questions Does a converse hold? I.e., in any non-trivial setting*, does the existence of a rationally fair protocol imply that the ideal-world computation is strict Nash for one party? Stronger equilibrium notions. More efficient protocols. Handling f with exponential-size range. (* You get to define “non-trivial”)

29 Byzantine agreement / broadcast

30 Definitions Byzantine agreement: n parties with inputs x1, …, xn run a protocol giving outputs y1, …, yn. Agreement: all honest parties output the same value y. Correctness: if all honest parties hold the same input, then that will be the honest parties’ output. Broadcast: a dealer holds input x; parties run a protocol giving outputs y1, …, yn. Agreement: all honest parties output the same value. Correctness: if the dealer is honest, all honest parties output x

31 Rational BA/broadcast? Definitions require the security properties to hold against arbitrary actions of an adversary controlling up to t parties What if the adversary has some (known) preference on outcomes? E.g., Byzantine generals: Adversary prefers that only some parties attack (disagreement) Else prefers that no parties attack (agree on 0) Least prefers that they all attack (agree on 1)

32 Rational BA/broadcast Consider preferences over {agree on 0, agree on 1, disagreement} (Informally:) A protocol achieves rational BA / broadcast (for a given ordering of the adversary’s preferences) if: When all parties (including the adversary) follow the protocol, agreement and correctness hold The adversary never has any incentive to deviate from the protocol

33 Note A different “rational” setting from what we have seen before Previously: each party is rational Here: some parties honest; adversary rational Though could also model honest parties as rational parties with a specific utility function

34 A surprise(?) Assuming the adversary’s complete preference order is known, rational BA is possible for any t < n(!) with no setup Classical BA impossible for t ≥ n/3 w/o setup (Classical BA undefined for t ≥ n/2)

35 Protocol 1 Assume the adversary’s preferences are agree on b > agree on 1-b > disagreement Protocol Every party sends its input to all other parties If a party receives the same value from everyone, output it; otherwise output 1-b Analysis: If honest parties all hold b, no reason to deviate In any other case, deviation doesn’t change outcome
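Protocol 1's decision rule is simple enough to simulate; the function names and the `sent[i][j]` message layout (including a party "sending" its own input to itself) are my own illustrative choices:

```python
def protocol1_output(received, b):
    """One party's decision rule in Protocol 1: if every value it
    received is identical, output that value; otherwise output 1-b
    (the adversary's less-preferred agreement value)."""
    vals = set(received)
    return vals.pop() if len(vals) == 1 else 1 - b

def run_protocol1(inputs, sent, b):
    """inputs[i] is party i's input; sent[i][j] is what party i sends
    to party j.  Honest parties send their input to everyone."""
    n = len(inputs)
    return [protocol1_output([sent[i][j] for i in range(n)], b)
            for j in range(n)]
```

With preferences agree-on-b > agree-on-(1-b) > disagreement, a deviating sender can only create disagreement or leave the outcome unchanged, so it has no incentive to deviate.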

36 Protocol 2 Assume the adversary’s preferences are disagreement > agree on b > agree on 1-b Protocol All parties broadcast their input using detectable broadcast If a party receives the same value from everyone, output it; otherwise output 1-b Analysis: Adversary has no incentive to interfere with any of the detectable broadcasts Agreement/correctness hold in every case

37 Other results We also show certain conditions where partial knowledge of the adversary’s preferences is sufficient for achieving BA/broadcast for t < n See paper for details

38 Other surprises(?) (Sequential) composition is tricky in the rational setting E.g., classical reduction of BA to broadcast fails Main problem: incentives in the sub-protocol may not match incentives in the larger protocol Some ideas for handling this via different modeling of rational protocols

39 Summary Two settings where game-theoretic analysis allows us to circumvent cryptographic impossibility results: fairness, and Byzantine agreement/broadcast. Other examples? Realistic settings where such game-theoretic modeling makes sense? Auctions? (cf. [MNT09])

40 Thank you!

