
Secure Computation (Lecture 5) Arpita Patra

Recap >> Scope of MPC > models of computation > network models > modelling distrust (centralized/decentralized adversary) > modelling the adversary > various parameters/questions asked in MPC >> Defining security of MPC > ideal world & real world > indistinguishability of the adversary's view in the real and ideal worlds (with the help of a simulator) > indistinguishability of the joint distribution of the honest parties' outputs and the adversary's view in the real and ideal worlds (with the help of a simulator).

Ideal World MPC: the parties hold inputs x1, x2, x3, x4 and wish to compute any task (y1, y2, y3, y4) = f(x1, x2, x3, x4).

The Ideal World vs. The Real World: in both worlds the parties hold inputs x1, x2, x3, x4 and obtain outputs (y1, y2, y3, y4) = f(x1, x2, x3, x4).

How do you compare the real world with the ideal world? >> Fix the inputs of the parties, say x1, …, xn. >> The real-world view of the adversary should contain no more information than its ideal-world view. View^Real_i: the view of P_i on input (x1, …, xn) in the real world - the leaked values, e.g. {x3, y3, r3, protocol transcript}. View^Ideal_i: the view of P_i on input (x1, …, xn) in the ideal world - the allowed values, e.g. {x3, y3}. Our protocol is secure if the leaked values {View^Real_i} Pi in C contain no more information than the allowed values {View^Ideal_i} Pi in C.

Real world (leaked values, e.g. {x3, y3, r3, protocol transcript}) vs. ideal world (allowed values, e.g. {x3, y3}). >> The protocol is secure if the leaked values can be efficiently computed from the allowed values. >> The algorithm that does so is called the SIMULATOR (it simulates the view of the adversary in the real protocol). >> It is enough if SIM creates a view of the adversary that is "close enough" to the real view, so that the adversary cannot distinguish it from its real view.

Definition 1: View indistinguishability of the adversary in the real and ideal worlds. In the real world, {View^Real_i} Pi in C (e.g. {x3, y3, r3, protocol transcript}) is a random variable/distribution over the random coins of the parties. In the ideal world, SIM - the ideal adversary - receives only the allowed values (e.g. {x3, y3}) and interacts with the adversary on behalf of the honest parties; {View^Ideal_i} Pi in C is a random variable/distribution over the random coins of SIM and the adversary. The requirement: {View^Real_i} Pi in C ≈ {View^Ideal_i} Pi in C.

Definition 2: Indistinguishability of the joint distributions of output and view. >> The joint distribution of the corrupted parties' view and the honest parties' output cannot be told apart in the two worlds. Output^Real_i: the output of P_i on input (x1, …, xn) when P_i is honest; View^Real_i: as defined before, when P_i is corrupted (Output^Ideal_i and View^Ideal_i are defined analogously in the ideal world). The requirement: [ {View^Real_i} Pi in C, {Output^Real_i} Pi in H ] ≈ [ {View^Ideal_i} Pi in C, {Output^Ideal_i} Pi in H ]. >> Note that this definition subsumes the previous one (it is stronger). >> It captures randomized functions as well.

Randomized functions and Definition 1. Consider f(-, -) = (r, -), where r is a random bit: P1 should receive a uniformly random bit and P2 should receive nothing. Protocol: P1 samples r randomly, outputs r, and sends r to P2. Is this protocol secure? No! (P2 learns P1's output, which the ideal function never gives it.) Yet the proof under Definition 1 says the protocol is secure: SIM, interacting on behalf of the honest party, samples and sends a random r', so the corrupted party's real view (r, a random bit) and ideal view (r', a random bit) are identically distributed: {View^Real_i} Pi in C ≈ {View^Ideal_i} Pi in C.

Randomized functions and Definition 2. For the same f(-, -) = (r, -) and the same protocol, Definition 2 correctly says the protocol is insecure. The views alone are still indistinguishable ({View^Real_i} Pi in C ≈ {View^Ideal_i} Pi in C), but the joint distributions are not: [ {View^Real_i} Pi in C, {Output^Real_i} Pi in H ] = [{r, r} | r random], whereas [ {View^Ideal_i} Pi in C, {Output^Ideal_i} Pi in H ] = [{r', r} | r, r' random and independent]. These two joint distributions are easily told apart.
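To see the gap concretely, here is a small illustrative simulation of the two worlds for this example (Python, added for illustration; the party roles follow the reading of the slide above and are not part of the original):

```python
import random
from collections import Counter

def real_world():
    # Honest P1 samples r, outputs it, and sends it to the corrupted P2.
    r = random.randint(0, 1)
    adv_view = r        # what the corrupted P2 receives
    honest_out = r      # what the honest P1 outputs
    return adv_view, honest_out

def ideal_world():
    # The functionality hands a fresh random r to P1 only; the simulator,
    # knowing nothing about it, shows the corrupted P2 an independent r'.
    r = random.randint(0, 1)        # honest P1's output
    r_prime = random.randint(0, 1)  # simulated message seen by P2
    return r_prime, r

N = 100_000
real = [real_world() for _ in range(N)]
ideal = [ideal_world() for _ in range(N)]

# Definition 1: the adversary's view alone is a uniform bit in both worlds.
print("real views :", Counter(v for v, _ in real))
print("ideal views:", Counter(v for v, _ in ideal))

# Definition 2: the joint (view, honest output) distributions differ.
# Real mass sits only on (0,0) and (1,1); ideal mass spreads over all four pairs.
print("real joint :", Counter(real))
print("ideal joint:", Counter(ideal))
```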

Definition 1 is enough for deterministic functions! For them, { {View^Real_i} Pi in C }_{x1,…,xn,k} ≈ { {View^Ideal_i} Pi in C }_{x1,…,xn,k} already implies { {View^Real_i} Pi in C, {Output^Real_i} Pi in H }_{x1,…,xn,k} ≈ { {View^Ideal_i} Pi in C, {Output^Ideal_i} Pi in H }_{x1,…,xn,k}. >> For deterministic functions: > the view of the adversary and the honest parties' outputs are not correlated, > so the two distributions can be considered separately, > and the outputs of the honest parties are fixed by the inputs. >> For randomized functions: > we can view the computation as a deterministic function in which the parties input randomness apart from their usual inputs: compute f((x1, r1), (x2, r2)) in order to compute g(x1, x2; r), where r1 + r2 acts as r (if the honest party's r1 is uniformly random, then r = r1 + r2 is uniformly random no matter how r2 is chosen).

Making indistinguishability precise. Notations: o security parameter k (a natural number); o we wish security to hold for all inputs of all lengths, as long as k is large enough. Definition (the function μ is negligible): for every polynomial p(·) there exists an N such that for all k > N we have μ(k) < 1/p(k). Definition (probability ensemble X = {X(a,k)}): o an infinite series, indexed by a string a and a natural number k; o each X(a,k) is a random variable. In our context: o X(x1,…,xn, k) = { {View^Real_i} Pi in C }_{x1,…,xn,k} (probability space: the randomness of the parties); o Y(x1,…,xn, k) = { {View^Ideal_i} Pi in C }_{x1,…,xn,k} (probability space: the randomness of the corrupt parties and the simulator).
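For example, μ(k) = 2^-k is negligible: every polynomial p(k) is eventually smaller than 2^k, so 2^-k < 1/p(k) for all large enough k. On the other hand, μ(k) = 1/k^2 is not negligible: for p(k) = k^3 we have 1/k^2 > 1/k^3 for every k > 1, so no suitable N exists.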

Computational indistinguishability. With X(x1,…,xn, k) and Y(x1,…,xn, k) as above, Definition (computational indistinguishability, X = {X(a,k)} ≈_c Y = {Y(a,k)}): for every polynomial-time distinguisher* D there exists a negligible function μ such that for every a and all large enough k: |Pr[D(X(a,k)) = 1] - Pr[D(Y(a,k)) = 1]| < μ(k). In our case a = (x1,…,xn). Alternative formulation: Adv_D(X,Y), the probability that D guesses the correct distribution, satisfies Adv_D(X,Y) < 1/2 + μ(k). (*The distinguisher D plays the role of the real adversary A.)

Statistical indistinguishability. With the same X and Y, Definition (statistical indistinguishability, X = {X(a,k)} ≈_s Y = {Y(a,k)}): for every* distinguisher D there exists a negligible function μ such that for every a and all large enough k: |Pr[D(X(a,k)) = 1] - Pr[D(Y(a,k)) = 1]| < μ(k). In our case a = (x1,…,xn). Alternative formulation: Adv_D(X,Y), the probability that D guesses the correct distribution, satisfies Adv_D(X,Y) < 1/2 + μ(k). (*D may have unbounded computational power.)

Perfect indistinguishability. With the same X and Y, Definition (perfect indistinguishability, X = {X(a,k)} ≈_P Y = {Y(a,k)}): for every* distinguisher D, for every a and for all k: |Pr[D(X(a,k)) = 1] - Pr[D(Y(a,k)) = 1]| = 0. In our case a = (x1,…,xn). Alternative formulation: Adv_D(X,Y), the probability that D guesses the correct distribution, equals exactly 1/2. (*D may have unbounded computational power.)

The definition applies across the settings introduced so far: Dimension 2 (Networks): complete, synchronous; Dimension 3 (Distrust): centralized; Dimension 4 (Adversary): threshold/non-threshold, polynomially bounded or unbounded powerful, semi-honest, static.

What is so great about the definition paradigm? >> One definition for all: > Sum: (x1 + x2 + … + xn) = f(x1, x2, …, xn); > OT: (-, xb) = f((x1, x2), b); > BA: (y, y, …, y) = f(x1, x2, …, xn), where y = majority(x1, x2, …, xn) or a default value. >> Easy to tweak the ideal world and weaken/strengthen security; the real-world protocol achieves whatever the ideal world achieves. >> Coming up with the right ideal world is tricky and requires skill > we will have fun with it in the malicious world!

Information-theoretic MPC with a semi-honest adversary and honest majority [BGW88]. Dimension 1 (Models of Computation): arithmetic; Dimension 2 (Networks): complete, synchronous; Dimension 3 (Distrust): centralized; Dimension 4 (Adversary): threshold (t), unbounded powerful, semi-honest, static. Michael Ben-Or, Shafi Goldwasser, Avi Wigderson: Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation (Extended Abstract). STOC 1988.

(n, t)-Secret Sharing Scheme [Shamir 1979, Blakley 1979]. Sharing phase: a dealer holding a secret s distributes shares v1, v2, v3, …, vn to the n parties. Reconstruction phase: any t+1 or more parties can reconstruct the secret s, while fewer than t+1 parties have no information about the secret.

Shamir sharing: (n, t)-secret sharing for semi-honest adversaries. A secret x is Shamir-shared if there is a random polynomial f(·) of degree t over F_p (p > n) with f(0) = x, and each party P_i holds the share x_i = f(i).
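A minimal sketch of the sharing phase (illustrative Python; the prime p and the parameters n, t are arbitrary choices, not fixed by the slides):

```python
import random

P, n, t = 101, 5, 2   # illustrative choices: prime p > n, threshold t

def share(secret, n, t):
    """Shamir-share `secret`: pick a random degree-t polynomial f over F_P with
    f(0) = secret, and give party P_i the share f(i)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

print(share(42, n, t))   # one possible share vector (x_1, ..., x_n)
```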

Reconstruction of Shamir sharing ((n, t)-secret sharing for semi-honest adversaries): every party P_j sends its share x_j = f(j) to P_i, who recovers x = f(0) by Lagrange's interpolation; the same is done for all P_i.
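A matching sketch of reconstruction via Lagrange interpolation at x = 0 (illustrative Python; it re-creates shares so the snippet runs on its own, and uses Python 3.8+ modular inverses):

```python
import random

P, n, t = 101, 5, 2   # same illustrative parameters as above

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from any t+1 points (i, f(i)) over F_P."""
    secret = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (0 - j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(42)
print(reconstruct(shares[:t + 1]))      # any t+1 = 3 shares give back 42
print(reconstruct(shares[-(t + 1):]))   # a different subset also gives 42
```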

Shamir-sharing properties. Property 1: any (t+1) parties have 'complete' information about the secret. Property 2: any t parties have 'no' information about the secret: Pr[secret = s before secret sharing] - Pr[secret = s after secret sharing] = 0. >> Both proofs can be derived from Lagrange's interpolation.

Lagrange's interpolation. >> Assume that h(x) is a polynomial of degree at most t and C is a subset of F_p of size t+1; for simplicity assume C = {1, …, t+1}. Theorem: h(x) can be written as h(x) = Σ_{i in C} h(i)·δ_i(x), where δ_i(x) = Π_{j in C, j ≠ i} (x - j)/(i - j). >> Each δ_i(x) is a polynomial of degree t; at i it evaluates to 1, and at every other point of C it gives 0. Proof: consider LHS - RHS. Both LHS and RHS evaluate to h(i) for every i in C, and both have degree at most t, so LHS - RHS evaluates to 0 at every i in C and has degree at most t: more zeros (roots) than degree, hence the zero polynomial! So LHS = RHS.

Lagrange's interpolation (continued). With C = {1, …, t+1} and h(x) = Σ_{i in C} h(i)·δ_i(x): >> the δ_i(x) are public polynomials, >> so the δ_i(0) are public values, denoted r_i, >> and h(0) can be written as a linear combination of the h(i)'s: h(0) = Σ_{i in C} r_i·h(i). The combiners (r_1, …, r_{t+1}) form the recombination vector. This gives Property 1: any (t+1) parties have 'complete' information about the secret.
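The recombination vector can be computed publicly; a small illustrative sketch (Python, with arbitrarily chosen field and polynomial):

```python
P = 101   # illustrative prime field

def recombination_vector(C):
    """Public values r_i = delta_i(0), so that h(0) = sum_{i in C} r_i * h(i)
    for every polynomial h of degree at most t = |C| - 1."""
    r = {}
    for i in C:
        num = den = 1
        for j in C:
            if j != i:
                num = num * (0 - j) % P
                den = den * (i - j) % P
        r[i] = num * pow(den, -1, P) % P
    return r

# Check on h(x) = 42 + 7x + 3x^2 (degree t = 2) with C = {1, 2, 3}:
h = lambda x: (42 + 7 * x + 3 * x * x) % P
C = [1, 2, 3]
r = recombination_vector(C)
print(sum(r[i] * h(i) for i in C) % P)   # -> 42 = h(0)
```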

Lagrange's interpolation and Property 2: any t parties have 'no' information about the secret, i.e. Pr[secret = s before secret sharing] - Pr[secret = s after secret sharing] = 0. Proof: for any secret s from F_p, sample f(x) of degree at most t randomly such that f(0) = s, and consider the distribution of ( {f(i)} i in C ) for any C that is a subset of F_p \ {0} of size t. For a fixed s, the map f_s : F_p^t -> F_p^t from the t random coefficients to the t shares {f(i)} i in C is a bijection (if it were not, two different sets of coefficients would define the same polynomial, i.e. two different polynomials of degree at most t would agree on t+1 values). Since the coefficients are uniform in F_p^t, so are the t shares. Hence for every s the distribution of the shares seen by any t parties is uniform and independent of the distribution of s.
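The bijection argument can be checked by brute force over a toy field (illustrative Python; the tiny parameters are chosen only so that full enumeration is feasible):

```python
from itertools import product
from collections import Counter

P, t = 5, 2              # toy field F_5 and threshold t = 2, small enough to enumerate
corrupted = (1, 2)       # the evaluation points held by the t corrupted parties

def corrupted_view(secret, coeffs):
    # Shares f(1), f(2) of f(x) = secret + c1*x + c2*x^2 over F_5.
    return tuple((secret + sum(c * pow(i, k + 1, P) for k, c in enumerate(coeffs))) % P
                 for i in corrupted)

for secret in range(P):
    views = Counter(corrupted_view(secret, coeffs)
                    for coeffs in product(range(P), repeat=t))
    # The map coefficients -> t shares is a bijection: every pair in F_5 x F_5
    # appears exactly once, so the view is uniform and independent of the secret.
    assert len(views) == P ** t and set(views.values()) == {1}

print("t shares are uniform over F_p^t for every secret")
```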

(n, t) secret sharing for MPC: linear (n, t) secret sharing. Notation: [s] denotes an (n, t)-secret sharing of the secret s. Linearity: from [s1] and [s2], and for a public constant c, the parties can locally compute [s1 + s2] and [c·s] from their own shares.
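A sketch of these local operations on Shamir shares (illustrative Python; the helper functions are simplified re-implementations for this lecture, not a library API):

```python
import random

P, n, t = 101, 5, 2   # illustrative parameters

def share(s):
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, n + 1)]

def open_from(shares, idxs):
    # Lagrange interpolation at 0 from the shares held by the parties in idxs.
    s = 0
    for i in idxs:
        num = den = 1
        for j in idxs:
            if j != i:
                num = num * (0 - j) % P
                den = den * (i - j) % P
        s = (s + shares[i - 1] * num * pow(den, -1, P)) % P
    return s

a, b, c = 10, 20, 7
sa, sb = share(a), share(b)

# Each party operates only on its own shares -- no interaction.
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]   # sharing of a + b
const_shares = [c * x % P for x in sa]               # sharing of c * a

print(open_from(sum_shares, [1, 2, 3]))     # -> 30
print(open_from(const_shares, [3, 4, 5]))   # -> 70
```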

Linearity of (n, t) Shamir secret sharing: suppose a is shared by a1, a2, a3 and b by b1, b2, b3. Each party locally computes c_i = a_i + b_i; the resulting shares c1, c2, c3 form a sharing of a + b. Addition is absolutely free (no interaction).

Likewise, for a publicly known constant c, each party locally computes d_i = c·a_i; the shares d1, d2, d3 form a sharing of c·a. Multiplication by public constants is absolutely free.

Non-linearity of (n, t) Shamir secret sharing: if each party locally multiplies its shares, d_i = a_i·b_i, then d1, d2, d3 do determine a·b, but they lie on a polynomial of degree 2t rather than t (and that polynomial is not uniformly random). Multiplication of shared secrets is not free.
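A sketch showing why local multiplication of shares is not enough (illustrative Python, same simplified helpers as before): the product shares lie on a degree-2t polynomial, so t+1 of them no longer determine a·b, while 2t+1 of them do.

```python
import random

P, n, t = 101, 5, 2   # honest-majority setting: n >= 2t + 1

def share(s):
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, n + 1)]

def interpolate_at_zero(points):
    s = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (0 - j) % P
                den = den * (i - j) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

a, b = 6, 7
prod_shares = [x * y % P for x, y in zip(share(a), share(b))]   # local multiplication
pts = list(enumerate(prod_shares, 1))

print(interpolate_at_zero(pts[:t + 1]))       # usually NOT 42: the product polynomial has degree 2t
print(interpolate_at_zero(pts[:2 * t + 1]))   # 42, but this needs 2t+1 shares, not t+1
```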

Secure Circuit Evaluation. The function is given as an arithmetic circuit over the inputs x1, x2, x3, x4, built from linear gates, multiplication gates and public constants (such as c), and producing an output y.


The protocol evaluates the circuit on shares: 1. (n, t)-secret-share each input; 2. find an (n, t)-sharing of each intermediate value, proceeding gate by gate.

Linear gates: handled by the linearity of Shamir sharing - non-interactive.

Non-linear (multiplication) gates: require a degree-reduction technique - interactive.
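A sketch of the degree-reduction idea for one multiplication gate (illustrative Python; it follows the standard re-share-and-recombine approach in simplified form and is not taken from these slides):

```python
import random

P, n, t = 101, 5, 2                 # n >= 2t + 1, so degree-2t polynomials are fixed by all n shares

def share(s):
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, n + 1)]

def recombination_vector(idxs):
    r = []
    for i in idxs:
        num = den = 1
        for j in idxs:
            if j != i:
                num = num * (0 - j) % P
                den = den * (i - j) % P
        r.append(num * pow(den, -1, P) % P)
    return r

def interpolate_at_zero(points):
    r = recombination_vector([i for i, _ in points])
    return sum(ri * yi for ri, (_, yi) in zip(r, points)) % P

a, b = 6, 7
da = share(a)
db = share(b)
d = [x * y % P for x, y in zip(da, db)]        # degree-2t sharing of a*b

# Degree reduction (one interactive round): P_i re-shares its product share d_i
# with a fresh degree-t polynomial ...
resharings = [share(di) for di in d]           # resharings[i][j] = share of d_i held by P_{j+1}

# ... and every P_j locally combines what it received, using the public
# recombination vector for the indices 1..n.
r = recombination_vector(list(range(1, n + 1)))
new_shares = [sum(r[i] * resharings[i][j] for i in range(n)) % P for j in range(n)]

# new_shares is a degree-t sharing of a*b: any t+1 of them reconstruct 42.
print(interpolate_at_zero(list(enumerate(new_shares, 1))[:t + 1]))
```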

Privacy follows (intuitively) because: 1. no inputs of the honest parties are leaked, and 2. no intermediate value is leaked.