Foundations of Secure Computation


Foundations of Secure Computation Arpita Patra © Arpita Patra

Ideal World MPC

[Figure: parties P1, P2, P3, P4 hold private inputs x1, x2, x3, x4 and hand them to a trusted party, which can compute any task and returns (y1, y2, y3, y4) = f(x1, x2, x3, x4).]

Ideal World MPC: The Ideal World vs. The Real World

[Figure: two panels. In both worlds the parties hold inputs x1, …, x4 and end up with outputs (y1, y2, y3, y4) = f(x1, x2, x3, x4). In the ideal world a trusted party collects the inputs and computes f; in the real world the parties must compute f by exchanging messages among themselves.]
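A minimal sketch (not from the slides; ideal_world and f_sum are illustrative names) of the ideal world as a single trusted-party call:

```python
# A toy sketch of the ideal world: a trusted party collects all inputs
# and evaluates f for everyone in one shot.

def ideal_world(f, inputs):
    """Trusted party: takes (x1, ..., xn), returns (y1, ..., yn) = f(x1, ..., xn)."""
    return f(*inputs)

# Hypothetical example functionality: every party learns the sum of all inputs.
def f_sum(*xs):
    s = sum(xs)
    return tuple(s for _ in xs)

print(ideal_world(f_sum, (3, 1, 4, 1)))  # -> (9, 9, 9, 9)
```

The whole difficulty of MPC is achieving the same input/output behavior in the real world, where no such trusted party exists.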

How to Compare the Real World with the Ideal World?

Fix the inputs of the parties, say x1, …, xn; call this vector x⃗. The real-world view of the adversary should contain no more information than its ideal-world view.

- Allowed values (from the viewpoint of the adversary): the pair (xi, yi), the input and output of a party Pi on input x⃗.
- Leaked values: View_i^Real(x⃗), the view of a party Pi in the protocol on input x⃗ — a random variable consisting of {xi, yi, ri, protocol transcript}.

Let C be the set of corrupted parties. The real-world view of the adversary is View_C^Real(x⃗) = {View_i^Real(x⃗)}_{Pi in C}, while its ideal-world view is just {(xi, yi)}_{Pi in C}.

A real-world protocol is secure if the leaked values contain no more information than the allowed values.
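The same definitions in display form (an editorial restatement of the slide, with x⃗ = (x1, …, xn) the fixed input vector):

```latex
% Editorial restatement of the slide's definitions (requires amsmath).
\begin{align*}
\mathsf{View}^{\mathsf{Ideal}}_C(\vec{x}) &= \{(x_i, y_i)\}_{P_i \in C}
    && \text{(allowed values)}\\
\mathsf{View}^{\mathsf{Real}}_i(\vec{x}) &= (x_i,\ y_i,\ r_i,\ \text{transcript of } P_i)
    && \text{(view of one party; a random variable)}\\
\mathsf{View}^{\mathsf{Real}}_C(\vec{x}) &= \bigl\{\mathsf{View}^{\mathsf{Real}}_i(\vec{x})\bigr\}_{P_i \in C}
    && \text{(leaked values)}
\end{align*}
```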

Real World (Leaked Values) vs. Ideal World (Allowed Values)

When can we say that the real-world view of the adversary (the leaked values) contains no more information than its ideal-world view (the allowed values)? When the leaked values can be efficiently computed, by some algorithm, from the allowed values alone. Such an algorithm is called a SIMULATOR, denoted SIM: it takes as input {(xi, yi)}_{Pi in C} and simulates the view of the adversary in the real protocol. It is enough for SIM to create a view of the adversary that is "close enough" to the real view, so that the adversary cannot distinguish the simulated view from its real view.
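A toy illustration of the simulation paradigm (not from the slides), assuming a hypothetical two-party sum functionality f(x1, x2) = x1 + x2 in which both parties learn the sum and P2 is the semi-honest corrupted party. The simulator succeeds perfectly here because the transcript is computable from P2's allowed values:

```python
# Minimal sketch of a simulator for the two-party sum functionality.

import random

def real_view_P2(x1, x2):
    """Real protocol: P1 sends x1 to P2 in the clear; both output the sum."""
    y = x1 + x2
    transcript = [x1]              # the one message P2 receives
    return (x2, y, transcript)     # P2's view: input, output, transcript

def sim_view_P2(x2, y):
    """SIM sees only P2's allowed values (x2, y) and recreates the transcript."""
    x1_reconstructed = y - x2      # the output already determines P1's message
    return (x2, y, [x1_reconstructed])

# The simulated view equals the real view on every input: the protocol leaks
# nothing beyond the allowed values, i.e. it is perfectly secure in this toy case.
for _ in range(1000):
    x1, x2 = random.randrange(100), random.randrange(100)
    assert real_view_P2(x1, x2) == sim_view_P2(x2, x1 + x2)
print("simulated view == real view on all sampled inputs")
```

Sending x1 in the clear looks leaky, but it is harmless here precisely because x1 is already computable from P2's input and output; this is the kind of reasoning the simulator makes formal.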

Real World (Leaked Values) vs. Ideal World (Allowed Values): SIM, the Ideal Adversary

SIM interacts with the real-world adversary on behalf of the honest parties and produces the simulated view:

SIM({(xi, yi)}_{Pi in C}) = View_C^Ideal(x⃗) ≈ View_C^Real(x⃗)

View_C^Ideal(x⃗) is a random variable (distribution) over the random coins of SIM and the adversary; View_C^Real(x⃗) is a random variable (distribution) over the random coins of the parties.

Real World vs. Ideal World

- If the two views (distributions) {View_C^Ideal(x⃗)} and {View_C^Real(x⃗)} are perfectly indistinguishable (even to a computationally unbounded distinguisher), we get perfect privacy.
- If the two views (distributions) are statistically indistinguishable (even to a computationally unbounded distinguisher), we get statistical privacy.
- If the two views (distributions) are computationally indistinguishable (to any probabilistic polynomial-time distinguisher), we get computational privacy.
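For reference, the standard formalizations behind these three notions (not spelled out on the slide; k is the security parameter, D a distinguisher):

```latex
% Standard notions of closeness between the two view distributions
% (requires amsmath).
\begin{align*}
\textbf{perfect:} &\quad \{\mathsf{View}^{\mathsf{Ideal}}_C(\vec{x})\} \equiv \{\mathsf{View}^{\mathsf{Real}}_C(\vec{x})\}\\
\textbf{statistical:} &\quad \tfrac{1}{2}\sum_{v}\bigl|\Pr[\mathsf{View}^{\mathsf{Ideal}}_C(\vec{x})=v] - \Pr[\mathsf{View}^{\mathsf{Real}}_C(\vec{x})=v]\bigr| \le \mathsf{negl}(k)\\
\textbf{computational:} &\quad \bigl|\Pr[D(\mathsf{View}^{\mathsf{Ideal}}_C(\vec{x}))=1] - \Pr[D(\mathsf{View}^{\mathsf{Real}}_C(\vec{x}))=1]\bigr| \le \mathsf{negl}(k) \ \ \text{for every PPT } D
\end{align*}
```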

Real World vs. Ideal World: Some Notation

- View_C^Real(x⃗): the real view of the adversary seen in the protocol.
- View_C^Ideal(x⃗) = SIM({(xi, yi)}_{Pi in C}): the simulated view of the adversary produced by SIM, which interacts on behalf of the honest parties.
- Output_H^Real(x⃗): the output of the honest parties on input x⃗ in the real world.
- Output_H^Ideal(x⃗): the output of the honest parties on input x⃗ in the ideal world.

Real World vs. Ideal World: Definition 1 (For Deterministic Functionalities)

Separate conditions for correctness and privacy.

A protocol for computing f is perfectly secure if it satisfies the following conditions:
- Correctness: Output_H^Ideal(x⃗) = Output_H^Real(x⃗)
- Privacy: {View_C^Ideal(x⃗)} = {View_C^Real(x⃗)}

A protocol for computing f is statistically secure if it satisfies the following conditions:
- Correctness: |Output_H^Ideal(x⃗) − Output_H^Real(x⃗)| ≤ negl(k)
- Privacy: {View_C^Ideal(x⃗)} ≈s {View_C^Real(x⃗)}

Real World vs. Ideal World: Definition 1 (For Deterministic Functionalities), continued

A protocol for computing f is computationally secure if it satisfies the following conditions:
- Correctness: |Output_H^Ideal(x⃗) − Output_H^Real(x⃗)| ≤ negl(k)
- Privacy: {View_C^Ideal(x⃗)} ≈c {View_C^Real(x⃗)}

Real World vs. Ideal World: Definition 1 (For Deterministic Functionalities), summary

- Perfect security: Output_H^Ideal(x⃗) = Output_H^Real(x⃗), and {View_C^Ideal(x⃗)} = {View_C^Real(x⃗)}
- Statistical security: |Output_H^Ideal(x⃗) − Output_H^Real(x⃗)| ≤ negl(k), and {View_C^Ideal(x⃗)} ≈s {View_C^Real(x⃗)}
- Computational security: |Output_H^Ideal(x⃗) − Output_H^Real(x⃗)| ≤ negl(k), and {View_C^Ideal(x⃗)} ≈c {View_C^Real(x⃗)}

For deterministic functions, the output is fixed once the inputs are fixed (irrespective of the random coins), so no probability distribution is considered over Output_H^Ideal(x⃗) and Output_H^Real(x⃗).

Making "Very Small / Negligible" Precise: the Asymptotic Approach

n: the security parameter, a tunable parameter that controls how difficult it is to break the cryptosystem. The larger n is, the tougher the adversary's life.

"Very small / negligible in n" means those functions f(n) that grow slower than any inverse polynomial: for every polynomial p(n), there exists some positive integer N such that f(n) < 1/p(n) for all n > N.

- Examples: 1/2^n, 1/2^(n/2).
- How about 1/n^10? Not negligible: for p(n) = n^20 there is NO N such that 1/n^10 < 1/n^20 for all n > N.
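A small numeric sanity check of this definition (illustrative, not from the slide; first_n_below is a hypothetical helper, and it spot-checks a single polynomial rather than quantifying over all of them):

```python
# 2^-n is negligible: it eventually drops below every inverse polynomial.
# 1/n^10 is not: it never drops below 1/n^20.

def first_n_below(f, p, n_max=10_000):
    """Smallest n <= n_max with f(n) < 1/p(n), or None if there is none."""
    for n in range(1, n_max + 1):
        if f(n) < 1 / p(n):
            return n
    return None

print(first_n_below(lambda n: 2.0 ** -n, lambda n: n ** 10))    # 59: from here on, 2^-n < 1/n^10
print(first_n_below(lambda n: 1 / n ** 10, lambda n: n ** 20))  # None: 1/n^10 >= 1/n^20 for all n
```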

Real World vs. Ideal World: Definition 2 (For Randomized Functionalities)

For randomized functions, the output is not fixed even when the inputs are fixed, so we need to consider the probability distribution over Output_H^Ideal(x⃗) and Output_H^Real(x⃗). Correctness and privacy are combined into a single condition on the joint distribution:

- Perfect security: {Output_H^Ideal(x⃗), View_C^Ideal(x⃗)} = {Output_H^Real(x⃗), View_C^Real(x⃗)}
- Statistical security: {Output_H^Ideal(x⃗), View_C^Ideal(x⃗)} ≈s {Output_H^Real(x⃗), View_C^Real(x⃗)}
- Computational security: {Output_H^Ideal(x⃗), View_C^Ideal(x⃗)} ≈c {Output_H^Real(x⃗), View_C^Real(x⃗)}

Real World vs. Ideal World: Definition 2 vs. Definition 1

Definition 2 subsumes Definition 1 (Definition 2 is stronger). Definition 2 captures randomized functions as well.

Randomized Functions and Definition 1

Consider the randomized functionality f(λ, λ) = (r, λ), where r is a random bit: P1 receives a random bit, P2 receives nothing. Real-world protocol: P1 samples a random bit r, outputs it, and also sends it to P2. Ideal world: SIM, interacting with the corrupted P2 on behalf of the honest party, simply samples and sends a random bit r′.

Does this protocol achieve privacy? No! P2 learns P1's output, which the functionality never gives it. Yet the proof under Definition 1 says the protocol achieves privacy:

{View_C^Ideal(x⃗)} = {r′ : r′ random} = {r : r random} = {View_C^Real(x⃗)}
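A quick simulation of this counterexample (a sketch under the reading of the slide above; function names are illustrative), showing that the two view distributions coincide, which is all Definition 1 examines:

```python
# Toy coin-toss functionality: honest P1 samples a bit r, outputs it, and
# sends it to the corrupted P2, whose view is just the received bit.

import random
from collections import Counter

def real_view_P2():
    r = random.getrandbits(1)       # P1 samples r, outputs it, sends it to P2
    return r                        # corrupted P2's real view: the received bit

def sim_view_P2():
    return random.getrandbits(1)    # SIM sends a fresh random r' on P1's behalf

print(Counter(real_view_P2() for _ in range(10_000)))  # ~ {0: 5000, 1: 5000}
print(Counter(sim_view_P2() for _ in range(10_000)))   # ~ {0: 5000, 1: 5000}
```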

Randomized Functions and Definition 2

Same functionality f(λ, λ) = (r, λ) and the same protocol. Is this protocol secure? No! And now the proof agrees: even though {View_C^Ideal(x⃗)} = {View_C^Real(x⃗)}, the joint distributions differ:

{Output_H^Real(x⃗), View_C^Real(x⃗)} = {(r, r) : r random}
  ≠
{Output_H^Ideal(x⃗), View_C^Ideal(x⃗)} = {(r, r′) : r, r′ random and independent}
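The same experiment repeated on the joint distributions that Definition 2 compares (again an illustrative sketch): the real pair is perfectly correlated, the ideal pair is independent, so the distributions are far apart:

```python
# Joint distribution of (honest P1's output, corrupted P2's view) in each world.

import random
from collections import Counter

def real_joint():
    r = random.getrandbits(1)        # P1 samples r, outputs it, sends it to P2
    return (r, r)                    # output and view: fully correlated

def ideal_joint():
    r = random.getrandbits(1)        # trusted party hands a random r to P1
    r_prime = random.getrandbits(1)  # SIM samples P2's simulated view independently
    return (r, r_prime)

print(Counter(real_joint() for _ in range(10_000)))   # only (0, 0) and (1, 1)
print(Counter(ideal_joint() for _ in range(10_000)))  # all four pairs, ~2500 each
```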

Definition 1 Is Enough!

Consider the case n = 2, t = 1 (the argument holds for general n and t). Let g(x, y; r) be a randomized functionality, where P1 and P2 have inputs x and y respectively; the functionality picks uniform randomness r and then computes g(x, y; r).

Can we replace the functionality g by a deterministic functionality, where the randomness r is "contributed" by P1 and P2 (apart from their usual inputs x and y)? Yes: define f((x, r1), (y, r2)) ≝ g(x, y; r1 + r2).

- Party P1 inputs (x, r1); party P2 inputs (y, r2).
- The role of r is played by r1 + r2.
- If Pi is honest, then ri is uniformly random, and hence so is r.

For the rest of the course, we will consider only deterministic functionalities.
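A sketch of this derandomization (illustrative: g is a made-up randomized functionality, and XOR stands in for the addition r1 + r2 on bits):

```python
# Derandomizing g(x, y; r) into a deterministic f whose randomness is
# contributed by the parties themselves.

import random

def g(x, y, r):
    return (x & y) ^ r               # hypothetical randomized functionality

def f(input1, input2):
    (x, r1), (y, r2) = input1, input2
    return g(x, y, r1 ^ r2)          # r1 XOR r2 plays the role of r

# If at least one party is honest, its share is uniform, so r = r1 ^ r2 is
# uniform no matter how the corrupted party picks its share.
r2_fixed = 1                         # corrupted P2 always contributes 1
outs = [f((1, random.getrandbits(1)), (1, r2_fixed)) for _ in range(10_000)]
print(sum(outs) / len(outs))         # ~0.5: the output is still a fair coin
```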

The Definition Applies For:

- Dimension 2 (Networks): complete, synchronous
- Dimension 3 (Distrust): centralized
- Dimension 4 (Adversary): threshold/non-threshold; polynomially bounded and unboundedly powerful; semi-honest; static