Presentation on theme: "Collusion Free Protocols"— Presentation transcript:

1 Collusion Free Protocols
Joël Alwen

2 Parallel Terminology and Goals
Cryptography: Goal: compute a joint functionality. How? A protocol. Players have private inputs and are honest or malicious. A protocol is "good" if it is indistinguishable from using the ideal functionality.
Game Theory: Goal: play a game. How? A strategy. Players have a type (called a game of incomplete information) and are rational. Strategies are "good" if they are stable (i.e. they form an equilibrium because deviations are irrational).

3 Cryptography: Multi-party Computation
Ideal players: get private input, send it to the "ideal functionality" F, receive private output.
Real players: get private input, interact (run the protocol Σ), compute private output.
"Protocol Σ realizes functionality F." F takes input from the players, evaluates the function and hands back the private outputs. It can be probabilistic, and/or reactive with a secret internal state.

4 Many Security Definitions
GMW - computational assumptions. BGW, CCD, BR - physical assumptions. CFGN - adaptive corruptions. C, CLOS - universally composable. GMW87: Ideal ≈ Real ⇒ Secure. "Adversary power preservation": the same security paradigm.

5 (Traditional) Monolithic Adversary
All corrupt real parties are controlled by one evil puppet master; their ideal counterparts are all controlled by one simulator. Σ is secure if for any evil puppet master there is a simulator that outputs a (fake) view such that: {FakeView, Ideal-I/O} ≈ {View, Real-I/O}

6 "Whatever Adv can do, Adv can do too"
Poker example: "Whatever Adv can do in the real world, Adv can do in the ideal world too." More specifically, the two situations we worry about are that the adversary can (1) INFLUENCE the state of the game, or (2) LEARN the state of the game of the honest players. This definition models both. How do we use it? Because the answer is "no" in the ideal world, it is "no" in the real world, which is a good thing... but it is not enough!

7 Power Preservation is too little if the Ideal power is too much
"These are my cards." What does Carter (the dealer) guarantee? He shuffles and deals. But "ideal" is not the right word for this scenario: even if Carter deals, the game is not fair. WE NEED NEW SECURITY!

8 No (undetectable) collusion!
Goal: No (undetectable) collusion! "Every adversary acts on his own!" Evil, but without collusion.

9 What is a Collusion?
Intuitively: some "illegal" correlation between players' actions or knowledge; in other words, a joint computation beyond what is computed by the ideal functionality.
Example: secretly sharing information with each other (a.k.a. steganography), e.g. showing your cards to another player in poker.
Or: coordinating strategies. Example: using pre-arranged randomness in a commitment. It then looks correct to a third party but is not hiding for the colluder.

10 A New Idea: Individual Adversaries
Monolithic adversaries are already perfectly colluding, so the notion is too strong. Idea: corrupt players act separately, and each has their own simulator. The joint "fake views" must still remain indistinguishable:
{ {FakeView}, Ideal-I/O} ≈ { {View}, Real-I/O}
Intuition: anything they can compute together with Σ they can also compute with F.

11 Applications: Practical and Theoretical (Game Theory)

12 Collusions? Who cares? Practically:
Auctions: collusions can minimize the winning bid, which minimizes the revenue of the auction house. Example: the spectrum auctions of the FCC.
Online poker: two players could share information about their hands, thereby giving them a distinct advantage over the others.
Enforcing anonymity?

13 Collusions in Game Theory
Recall: Nash equilibria (NE) mean that all unilateral deviations are irrational. But this is not robust against collusions; in particular, a bilateral deviation might be rational. Equilibria which remain robust against collusions are more desirable (stronger). Much research has gone into "realizing" certain games with "cheap talk" (i.e. a fully connected network), recently also robust against deviations by collusions of bounded size. [Hel05, ADGH06, ADH08]

14 Game Theoretic Applications (1)
Goal: play an extensive-form game. The game is viewed as a state diagram in the form of a tree: each level corresponds to the player whose turn it is, edges correspond to moves, and nodes correspond to the current game state. (Figure: a tree with root S, player 1 choosing move A or B, player 2 then choosing move C, D or E.) Idea: playing the game means traversing the tree to a leaf. So use some cryptographic protocol Σ to compute the state transition function R of the game:
R : Game State × Move → Game State × Outputs
Assume Σ is correct, private and fair. Enough? No.
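As a concrete illustration, here is a minimal Python sketch of such a state-transition function R for a toy two-level game tree; the `State` type, node names and outputs are all hypothetical, chosen only to mirror the tree above.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class State:
    node: str      # current node in the game tree
    to_move: int   # whose turn it is (player 1 or 2)

def R(state: State, move: str) -> Tuple[State, Dict[int, str]]:
    """Toy transition function: R(state, move) -> (new state, per-player outputs)."""
    tree = {
        ("S", "A"): "n1", ("S", "B"): "n2",              # player 1's moves
        ("n1", "C"): "leaf_AC", ("n1", "D"): "leaf_AD",  # player 2's moves
    }
    nxt = tree[(state.node, move)]
    # In the real protocol each player would only receive the output their
    # information set entitles them to; here everyone observes the move.
    outputs = {p: f"move played at {state.node}" for p in (1, 2)}
    return State(nxt, to_move=3 - state.to_move), outputs

new_state, outs = R(State("S", 1), "A")
```

Running a secure protocol for R is still not sufficient, which is the point of the next slides: the protocol must additionally be collusion free.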

15 Σ must also be collusion free (CF)
Bayesian games (roughly speaking: games where players have an initial secret, a.k.a. a type). Poker: your cards. Auction: how much you are willing to pay. If Σ allows collusion then, while computing the first call of R, corrupt players can (steganographically) exchange their types. Thus in Σ they soon have more information than in the ideal game, and so the games are not the same. Σ must also be collusion free (CF).

16 Game of Imperfect Information
Games of imperfect information (i.e. games in which players do not perfectly observe the actions of other players): there is at least one node such that at least one player has some uncertainty about which state the game is currently in. Example: five card draw (trading 0-3 cards secretly). Collusions while running Σ could allow corrupt players to tell each other extra information about their moves.

17 Mediated Games
Goal: play a mediated game (MG) with minimal trust. Idea: remove the mediator and jointly compute its functionality M via a cryptographic protocol. Example: correlated equilibria (CE) for games of incomplete information [Aum87]. Example: playing a NE in a mediated game.

18 OK. Let’s do it.

19 Protocols’ INTRINSIC Private Communication
"Main Problem": STEGANOGRAPHY, the protocol's INTRINSIC private communication. We want to know that there is only one possible message, even though you cannot predict it beforehand. (Osmosis analogy: information transpires.)

20 Entropy ⇒ Steganography
Wanted communication: PROTOCOLS. Unwanted communication: COLLUSIONS. Security ⇒ Entropy [GM]. Entropy ⇒ Steganography (provably! [HLA]). In CS, collaboration is embodied as a protocol, which is essentially a set of instructions for participants to accomplish some goal. However, there are bad guys, and these bad players may themselves coordinate in order to undermine the protocol. Such unwanted collaboration is collusion. This talk is about how to achieve the former without the latter: wanted and unwanted collaboration come together; one is good, the other is its shadow.

21 (Encryption+Broadcast+Envelopes) Collusions PROVABLY Impossible!
"Model": Computationally secure protocols (encryption + broadcast): collusions possible; these are the traditional protocols. Physically secure protocols (perfect channels): collusions possible. Computationally & physically secure protocols (encryption + broadcast + envelopes) [LMPS, LMS]: collusions PROVABLY impossible! Among the two traditional types of secure protocols, preventing collusion is impossible. The first type achieves security by assuming that participants cannot perform an exponential amount of work, and therefore uses encryption and broadcast; one goal of this talk is to demonstrate why collusions are unavoidable in this model. The second type assumes perfect channels in which only the end parties can decipher messages sent along the channel; in this model it is a little easier to see why collusions are unavoidable. Today we introduce a third type, which uses components from both and is able to provably prevent collusions. We are reminded of the Bronze Age: take copper, which is malleable, and tin, which is soft, and make an alloy which is stronger and more durable than either on its own.

22 Approach 1: Forced Action and New Communication Channels
[LMPS04, LMS05, ILM05, ILM08]

23 Verifiable Determinism
"Main Problem": STEGANOGRAPHY. "Main Solution": Verifiable Determinism. We want to know that there is only one possible message, even though you cannot predict it beforehand.

24 Verifiable Determinism
The approach: a pre-processing phase, then the protocol itself, both with verifiable determinism.

25 COLLUSION FREENESS ALWAYS AVAILABLE!
[LMS] THM: Trapdoor permutations & envelopes → all finite protocols can be made collusion-free. Proof (semi-simple): [GMW] + [DMP] + … COLLUSION FREENESS IS ALWAYS AVAILABLE!

26 ZK ushers in Steganography!
A ZK proof that you are following the honest ITMi requires randomness! If my ZK proof has 31 zeros, I have aces. What's the first idea that comes to mind? This really illustrates that the bottleneck is finding a way to make ZK proofs without entropy. But [GM] told us that we need entropy for security? That is certainly true, but perhaps we can incorporate the randomness without introducing a steganographic channel (an old trick). Wanted: steganography-less ZK. "Essentially possible here."

27 PK= Public Commitments
ZK preprocessing, ready for any 3-CNF formula with at most n variables:
1. Commit to n pairs of bits C(b,R): for each variable Xi one left (L) and one right (R) commitment, 2n commitments in total.
2. For all triples of committed bits, commit to their OR: 8n³ commitments, e.g. OR(X1R, X2L, XnR) = 1.
3. Give an interactive ZK proof that all commitments are correct.
PK = the public commitments; SK = the corresponding randomness Rs.
Imagine how a boy scout makes a ZK proof: he might not know which theorem he wants to prove, but that has nothing to do with whether he is prepared to prove something. Thus we pre-process a data structure which allows us to prove any 3-CNF with ≤ n variables.
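A minimal Python sketch of step 1 of this preprocessing, with a toy hash-based commitment standing in for C(b,R). The helper names are hypothetical, the pair is assumed to hold complementary bits (b, 1-b), and steps 2 and 3 (the OR table and the correctness proof) are omitted.

```python
import hashlib, os

def com(bit: int, r: bytes) -> bytes:
    """Toy hash-based commitment; a real scheme would be provably hiding and binding."""
    return hashlib.sha256(bytes([bit]) + r).digest()

n = 4    # number of variables the preprocessed structure supports
sk = {}  # secret openings (the bits and their randomness)
pk = {}  # public commitments: one (L, R) pair per variable
for i in range(n):
    b = os.urandom(1)[0] & 1                 # random bit for this pair
    rL, rR = os.urandom(32), os.urandom(32)
    sk[i] = ((b, rL), (1 - b, rR))           # assumption: the pair is (b, 1-b)
    pk[i] = (com(b, rL), com(1 - b, rR))
```

Because the pair is committed before the formula is known, the prover can later publicly assign "L means True" or "R means True" per variable to match any witness.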

28 Unique ZK Proof (with PK)
Given a PUBLIC formula φ and a PRIVATE witness w, e.g. φ = (X1 ∨ X2 ∨ X12)(X2 ∨ X32 ∨ X112) … (X3 ∨ X5 ∨ X6) with X1 = T, X2 = F, …, X112 = T:
1. Specify L/R for each variable, i.e. assign each variable to one commitment of its pair (e.g. X1 → R, X2 → L, X112 → R).
2. For each clause, retrieve the right OR commitment and open it to 1: each clause corresponds to a triple of commitments, in particular to a row of the table in the PK. Everything opened should be 1.
For each (PK, φ, w), ONLY ONE PROOF IS ACCEPTED! For theorems with one witness, only one proof is accepted.

29 Information Theoretic Realization
Crypto: [ILM05, ILM08]. Communication model: envelopes and a ballot-box. Players sit at a round table and create, (simultaneously) exchange, permute, open and reseal envelopes and super-envelopes. The ballot-box provides perfectly secret random permutations.

30 Verifiable Secure Devices
Culmination: "Verifiable Secure Devices" [ILM08]. Introduces a verifiable "computer". First true ideal game emulation. Avoids "randomness pollution": prevents attacks like signaling via aborts conditioned on the value of the random string, and prevents the introduction of new equilibria (e.g. CE). Can realize any mediated game securely, with robustness against player aborts. Soft setting: the abort is announced but the computation continues. Hard setting: the abort remains secret. In both cases some default input is used for the aborting user.

31 Problems
Major: if players sit facing each other it is impossible to avoid side channels, yet there are no digital analogues of the physical primitives. If players meet in real life, avoiding "randomness pollution" is very difficult.
Minor: the set of simulators depends on the set of adversaries. Alice and Bob could use Σ to compute any functionality which doesn't depend on their views during Σ. Example: use Σ to exchange information unrelated to G. Permitted: simulators "know" who is corrupt and in which way (just not their secret inputs and views). Note this is not a problem for the game-theoretic applications mentioned above, but…
Better: simulators independent of the remaining set of adversaries. This allows further (or more robust) applications. Example: online poker. Players can no longer use Σ to secretly exchange phone numbers (i.e. anonymity can now be preserved in Σ).

32 A New Solution Concept Goal: avoid steganography
If colluders cannot exchange any extra information in Σ beyond what they can exchange in G, then simulation of Σ can be done locally when playing only G. Idea: the channel re-randomizes all communication. Thus the bandwidth of messages in Σ is exactly controlled.

33 Quick Example: Commitments
Let c := Com(m,r) with len(c) ≥ k. This is not CF: too much correlation in the real-world views. (C sends c to the channel, which forwards it; FCOM receives m and announces "committed".) Fix: let the channel rerandomize, Com(c,r') = c'. Then c' hides both m and r, so the views are independent except for the one bit "committed".
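A minimal Python sketch of this rerandomization step, using a toy hash-based commitment in place of Com (the function names and parameter sizes are illustrative, not a real scheme):

```python
import hashlib, os

def com(msg: bytes, r: bytes) -> bytes:
    """Toy commitment Com(msg, r); hiding and binding only heuristically."""
    return hashlib.sha256(msg + r).digest()

# Committer's message to the channel.
m, r = b"my move", os.urandom(32)
c = com(m, r)

# The channel (mediator) re-commits before forwarding, destroying any
# steganographic encoding the committer may have hidden in the bits of c.
r_prime = os.urandom(32)
c_prime = com(c, r_prime)

# The receiver only ever sees c_prime. Opening later reveals (c, r_prime)
# and then (m, r), so correctness is preserved.
assert com(com(m, r), r_prime) == c_prime
```

With this generic construction the mediator must later prove in ZK that c' really contains the original c, which is exactly the role of the ZK proofs in the CF-commitment protocol.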

34 Mediated Model → New Communication Model
The channel (called the mediator) is modeled as a corruptible Turing machine (written PM). F: an uncorruptible (ideal) functionality. Honest parties don't use the direct communication lines (corrupt parties can). Mediator honest ⇒ the ideal players are separated. Mediator corrupt ⇒ the ideal adversary can perfectly coordinate through the mediator, but F is still correct and private!

35 { {FakeView}, Ideal-I/O} ≈ { {View}, Real-I/O}
Security definition: Σ is a collusion-free realization of F ⟺ for all real-world players Pi there exists an ideal-world simulator Si such that for any set A ⊆ {1,..,n} of corrupt parties, for all input vectors (x1,..,xn) and all auxiliary-input vectors (aux1,..,auxn), the set {Si}i∈A of simulators produces fake views of Σ such that: { {FakeView}, Ideal-I/O} ≈ { {View}, Real-I/O}. Σ is called secure if there is additionally a simulator SM for the mediator PM such that for any A ⊆ {1,..,n,M} the above equation holds.

36 Dealing with Aborts
Problem: aborts can be used for signaling information (the round number).
Solution 1: aborts are not allowed. Easy and clean; OK for some applications. Example: games with dominant punishment strategies, where aborting is never rational. Not very robust.
Solution 2: if any player aborts then all players abort. Previously used in the cryptographic literature. Easy and clean, but still not robust.

37 Model Aborts Explicitly
Solution 3: the ideal world has a special abort message ("Pi with code j") sent by Pi or PM and immediately distributed to the rest by F. More complicated, but it makes the security guarantees explicit: PM can force aborts, and aborts have more bandwidth than just one bit. Thus "with code j" corresponds to the real-world round number of the abort. Potentially more robust: we can now design functionalities F which withstand/deal with aborts, and can realize the "soft setting" from ILM08.

38 Authentication
Problem: the mediator can alter an original message or even create fake new ones (man-in-the-middle, hijack attacks, etc.). Solution 1: assume perfectly authenticated channels [GMW87, BGW88]. A strong assumption, but clean; it requires point-to-point connections though.

39 Authentication Solution
Solution 2: verifiable authenticated channels. Each message is associated with a receipt which can later be verified, and messages are prefixed by a session ID and round number (like UC). Example: a signature scheme (Gen, Sig, Ver) implementing FAC. For a message m, tag = Sig(sid || rid || m). The mediator sends m to the recipient and uses a ZKPoK to prove it knows a valid tag with the expected prefix. ZK ⇒ reveals no information about the tag (collusion free). PoK + Sig ⇒ the mediator cannot forge messages. (Diagram: the sender computes t := Sig(sid || rid || m) under SK and sends (m, t); the mediator forwards m := "Tea for two?" together with a ZKPoK of a t such that VerVK(t, m) = 1.)
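A minimal Python sketch of the tagging and verification (not the ZKPoK). HMAC stands in for the signature scheme (Gen, Sig, Ver), so the key here is shared rather than public/private; in the actual protocol the mediator would prove knowledge of the tag in zero knowledge instead of revealing it.

```python
import hmac, hashlib, os

key = os.urandom(32)  # stand-in for the signer's SK; a real scheme has (SK, VK)

def sig(sid: bytes, rid: int, m: bytes) -> bytes:
    """Tag over (session id || round id || message)."""
    return hmac.new(key, sid + rid.to_bytes(4, "big") + m, hashlib.sha256).digest()

def ver(sid: bytes, rid: int, m: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag, sig(sid, rid, m))

sid, rid, m = b"session-7", 3, b"Tea for two?"
tag = sig(sid, rid, m)
assert ver(sid, rid, m, tag)          # the receipt verifies
assert not ver(sid, rid + 1, m, tag)  # replaying it in a later round fails
```

The sid/rid prefix is what blocks the mediator's man-in-the-middle and replay attacks: a tag is only valid for the exact session and round it was produced in.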

40 Setup Phase Goal: Establish signature key file F
Keys should be "honestly" generated to avoid signaling via the bits of VK. Idea: a preprocessing phase constructs F, distributes F, and then Σ begins. Key generation by coin flipping: player i sends Com(r1) = c; the mediator replies with r2; player i computes (VKi, SKi) ← Gen(r1 ⊕ r2), publishes VKi, and gives a ZKPoK that it knows Dec(c) = r1 such that r1 ⊕ r2 generates VKi; finally F.add(i, VKi).
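A minimal Python sketch of this commit-then-combine coin flip: the commitment is a toy hash construction, XOR is assumed as the combiner for r1 and r2, and Gen is replaced by a hash placeholder.

```python
import hashlib, os

def com(msg: bytes, rho: bytes) -> bytes:
    """Toy commitment; a real protocol needs a hiding and binding scheme."""
    return hashlib.sha256(msg + rho).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Player i commits to its share r1 first...
r1, rho = os.urandom(32), os.urandom(32)
c = com(r1, rho)

# ...so the mediator's share r2, sent in the clear, cannot depend on r1.
r2 = os.urandom(32)

# Neither side controls the seed alone, so no bits of the resulting
# verification key can carry a hidden (steganographic) message.
seed = xor(r1, r2)
vk = hashlib.sha256(b"Gen" + seed).hexdigest()  # placeholder for (VK, SK) <- Gen(seed)
# A real run would end with a ZKPoK that vk was derived from the committed r1.
```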

41 Caveat: Distribute F
As stated, there are only point-to-point channels, so PM could distribute a different copy of F to each player! This results in a forking capability for PM in the ideal world, as in [BCLPR]. For simplicity, restrict PM to sending the same (possibly altered) F' to all players. This is the same as assuming the existence of an append-only (by PM) but publicly viewable bulletin board (BBS) where PM publishes F'. [OpenVote, Prêt à Voter, Chaum04, Neff06, …]

42 Still Collusion Free?
Problem: the file F is in the view of all real-world players. How can this be simulated independently? Solution: give all simulators read access to a common random tape R. Individual simulation is only an issue if the mediator is honest (otherwise the simulators can use the communication lines to coordinate). But if the mediator is honest then all PKs are generated with a random string (coin flipping). Interpret R = R1 || .. || Rn and use rewinding to fix the output of the i-th coin flipping to Ri. By soundness, the i-th key of all the separate simulations will be equal. This is minimal coordination between simulators: compare to the previous definition, where each simulator depends on the remaining set of simulators. Much weaker coordination!

43 CF-Commitments. Goal: CF-realize the FCOM functionality.
Idea: construct Σ so that it randomizes ⇒ CF; is extractable ⇒ maintains the real and ideal I/O distributions; is equivocable ⇒ the commitment phase can be simulated without m.
Commit phase: the committer sends Com(m,r) = c with a ZKPoK of m, r for c; the mediator rerandomizes, Com(c,r') = c', and gives a ZK proof that it knows a tag for Dec(c'); FCOM announces "stevo committed".
Decommit phase: the committer sends "decommit" and Dec(c) = (m,r); the mediator forwards m ("From Stevo: m") with a ZK proof that it knows r and r' s.t. c' = Com(Com(m,r), r') and a tag for m.

44 Proof Sketch
Hiding ⇒ ⟨PM, R⟩ don't learn m: Com() is hiding & the 1st ZKPoK is zero knowledge.
Binding ⇒ ⟨C, PM⟩ can only open to m: Com() is binding & the 2nd ZKPoK is sound.
CF ⇒ construct a simulator for C and one for R. SC: extract m from the 1st ZKPoK and send it to FCOM. SR: commit to garbage and simulate the 1st ZKPoK; get m from FCOM and simulate the 2nd ZKPoK.

45 Secure CF-ZKPoK. Goal: securely CF-realize FZKPoK for 3-COL
The prover P holds (G, w), the verifier V holds G, and FZKPoK outputs G together with Accept/Reject. Idea: transform the 3-round public-coin ZKPoK for 3-COL into a secure CF-ZKPoK. Here G = (V,E), E is the edge set of G, w is a 3-colouring of G with colours {R,G,B}, and Pcol is the set of permutations over {R,G,B}. Protocol: the prover samples π ←$ Pcol and sends the commitments {Com(π(w(e)))}e∈E; the verifier challenges e ←$ E; the prover sends Dec(π(w(e))), opening the colours at e; the verifier accepts ⟺ e has differently coloured endpoints.
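A minimal Python sketch of one round of this classic 3-colouring protocol (without the CF layer): toy hash commitments and a tiny hard-coded graph and colouring, all names hypothetical.

```python
import hashlib, os, random

def com(msg: bytes, r: bytes) -> bytes:
    """Toy hash commitment."""
    return hashlib.sha256(msg + r).digest()

# Prover's graph and secret witness: a valid 3-colouring.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
colouring = {0: "R", 1: "G", 2: "B", 3: "R"}

# Prover: pick a random permutation pi of the colours and commit to the
# permuted colour of every vertex.
pi = dict(zip("RGB", random.sample("RGB", 3)))
rand = {v: os.urandom(32) for v in colouring}
commits = {v: com(pi[colouring[v]].encode(), rand[v]) for v in colouring}

# Verifier: challenge a uniformly random edge.
u, v = random.choice(edges)

# Prover: open only the two endpoint commitments.
opened = {w: (pi[colouring[w]], rand[w]) for w in (u, v)}

# Verifier: check the openings and that the endpoint colours differ.
for w, (colour, r) in opened.items():
    assert com(colour.encode(), r) == commits[w]
assert opened[u][0] != opened[v][0]
```

The CF version on the next slide additionally commits to the permutation π itself and has the mediator rerandomize all commitments before forwarding them.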

46 Secure CF-ZKPoK (2)
The prover samples π ←$ Pcol and commits t = Com(π) and v = {Com(π(w(e)))}e∈E; the mediator rerandomizes these to v' = {Com(v)} before forwarding. The verifier challenges e ←$ E, which the mediator forwards as σ(e); the prover opens Dec(π(w(σ(e)))). Finally, a ZKPoK: the prover knows tags & decommitments for an accepting view.

47 Proof Sketch
We need 3 simulators: one each for P, V and PM.
PM honest: SP* is based on the extractor for 3-COL: rewind P* to extract the 3-colouring w of G and send it to FZKPoK. SV*: commit to garbage and use the simulator for the final ZKPoK to "prove" the false theorem. The joint fake views have the correct distribution: V* sees e but not π, and P* sees σ(e); P* sees π and V* sees v' = Com(v); the final ZKPoK is zero knowledge for V.
PM corrupt (at a high level this collapses to the traditional proof, since there is no more isolation): PM extracts the keys of the honest parties in the preprocessing stage. A = {P*, PM, V*}: easy, simulate the complete interaction. A = {PM} or A = {PM, V*}: essentially run SP* against ⟨PM, V*⟩. A = {P*, PM}: run the extractor against ⟨P*, PM⟩. Notice PM can coordinate perfectly, so for arbitrary P* or V* the simulated (fake) views have the correct joint distribution.

48 Future Work
Done: incorporating an environment (GUC-CF); GUC-CF/EUC-CF equivalence (dummy adversary lemma); a composition theorem; a semi-honest MPC protocol.
In the works…? A semi-honest → malicious compiler; the plain mediated model; a sequential composition theorem; an MPC protocol; forking without the "BBS" assumption.

