
1 The Complexity of Information-Theoretic Secure Computation Yuval Ishai Technion 2014 European School of Information Theory

2 Information-Theoretic Cryptography
– Any question in cryptography that makes sense even if everyone is computationally unbounded
– Typically: unconditional security proofs
– Focus of this talk: secure multiparty computation (MPC)

3 Talk Outline
– Gentle introduction to MPC
– Communication complexity of MPC: PIR, LDC, and related problems
– Open problems

4 How much do we earn?
Goal: compute Σ x_i without revealing anything else.
[Figure: six parties in a ring, each holding a private input x_i]

5 A better way?
Assumption: Σ x_i < M (say, M = 10^10); + and − are carried out modulo M.
Party 1 picks a random 0 ≤ r < M and sends m_1 = r + x_1 to party 2; each subsequent party i sends m_i = m_{i−1} + x_i to the next party around the ring; party 1 receives m_6 and announces the sum m_6 − r.
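A minimal Python sketch of this chain protocol (the function and variable names are mine; the protocol is the one on the slide):

```python
import random

M = 10**10  # public bound; we assume sum(x_i) < M, all arithmetic mod M

def ring_sum(inputs):
    # Party 1 masks its input with a random r; each party adds its own
    # input to the running message; party 1 removes the mask at the end.
    r = random.randrange(M)             # party 1's secret mask, 0 <= r < M
    m = (r + inputs[0]) % M             # m_1 = r + x_1
    for x in inputs[1:]:
        m = (m + x) % M                 # m_i = m_{i-1} + x_i, passed along the ring
    return (m - r) % M                  # party 1 outputs m_k - r = sum(x_i)

print(ring_sum([300, 120, 500]))        # 920
```

Since r is uniform and secret, each message m_i is uniformly distributed from its recipient's point of view, so no single party learns anything beyond the sum.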

6 A security concern
Parties 1 and 3 together see both m_1 and m_2 = m_1 + x_2, so by colluding they learn x_2 = m_2 − m_1.

7 Resisting collusions
Each party i sends fresh random masks r_ij to several other parties, then publishes x_i + Σ inbox_i − Σ outbox_i. Every mask appears once with each sign, so the published values still sum to Σ x_i, while each individual published value looks random. See the sketch below.
[Figure: masks r_12, r_16, r_25, r_32, r_43, r_51, r_65 exchanged between the six parties]
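The following sketch implements one natural instantiation of this idea, with a mask on every ordered pair of parties (the slide's figure masks only some pairs; which pairs are masked determines which collusions are resisted — with all pairs, any collusion learns only the sum of the honest parties' inputs):

```python
import random

M = 10**10  # public modulus, as before

def masked_sum(inputs):
    k = len(inputs)
    # r[i][j]: random mask that party i sends privately to party j.
    r = [[random.randrange(M) for _ in range(k)] for _ in range(k)]
    published = []
    for i in range(k):
        outbox = sum(r[i][j] for j in range(k) if j != i)
        inbox  = sum(r[j][i] for j in range(k) if j != i)
        # Each party publishes x_i + inbox_i - outbox_i; on its own this is random.
        published.append((inputs[i] + inbox - outbox) % M)
    return sum(published) % M           # every mask cancels once (+) and once (-)

print(masked_sum([5, 7, 11, 2]))        # 25
```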

8 More generally
P_1,…,P_k want to securely compute f(x_1,…,x_k)
– Up to t parties can collude
– Colluders should learn (essentially) nothing but the output
Questions
– When is this at all possible?
– How efficiently?
Feasibility of secure MPC protocols for f:
– Information-theoretic (unconditional) security is possible when t < k/2 [BGW88,CCD88,RB89]
– Computational security is possible for any t (under standard cryptographic assumptions) [Yao86,GMW87,CLOS02]
– Or: information-theoretic security using correlated randomness [Kil88,BG89]
Similar feasibility results hold for security against malicious parties.

9 More generally (continued)
Several efficiency measures: communication, randomness, rounds, computation.
Typical assumptions for the rest of the talk:
– t = 1, k a small constant
– information-theoretic security
– “semi-honest” parties, secure channels

10 Communication Complexity

11 Fully Homomorphic Encryption [Gentry ‘09]
Settles the main communication complexity questions in complexity-based cryptography
– Even under “nice” assumptions! [BV11]
Main open questions
– Further improve assumptions
– Improve practical computational overhead
FHE >> PKE >> SKE >> one-time pad

12 One-Time Pads for MPC [IKMOP13]
Offline, a trusted dealer prepares:
– G[u,v] = f[u−dx, v−dy] for random shifts dx, dy
– random G_A, G_B such that G = G_A + G_B
– Alice gets (G_A, dx), Bob gets (G_B, dy)
Protocol on inputs (x,y):
– Alice sends u = x+dx, Bob sends v = y+dy
– Alice sends z_A = G_A[u,v], Bob sends z_B = G_B[u,v]
– Both output z = z_A + z_B = G[u,v] = f(x,y)
[Figure: trusted dealer gives G_A, dx to Alice (input x) and G_B, dy to Bob (input y); a small truth table illustrates G]
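A runnable sketch of this one-time truth-table protocol (function and variable names are mine; arithmetic is over Z_m):

```python
import random

def dealer(f_table, n, m):
    # Offline: f_table is the n x n truth table of f with entries in Z_m.
    dx, dy = random.randrange(n), random.randrange(n)      # random shifts
    G = [[f_table[(u - dx) % n][(v - dy) % n] for v in range(n)]
         for u in range(n)]                                # G[u,v] = f[u-dx, v-dy]
    GA = [[random.randrange(m) for _ in range(n)] for _ in range(n)]
    GB = [[(G[u][v] - GA[u][v]) % m for v in range(n)] for u in range(n)]
    return (GA, dx), (GB, dy)          # Alice's and Bob's correlated randomness

def online(x, y, alice, bob, n, m):
    (GA, dx), (GB, dy) = alice, bob
    u = (x + dx) % n                   # Alice announces her masked input
    v = (y + dy) % n                   # Bob announces his masked input
    zA, zB = GA[u][v], GB[u][v]        # each opens its share of the table entry
    return (zA + zB) % m               # = G[u,v] = f(x, y)

AND = [[0, 0], [0, 1]]                 # f(x, y) = x AND y over Z_2
alice, bob = dealer(AND, n=2, m=2)
print(online(1, 1, alice, bob, n=2, m=2))   # 1
```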

13 3-Party MPC for g(x,y,z)
Carol additively shares her input z as z = z_A + z_B, sends z_A to Alice and z_B to Bob, and goes offline.
Define f((x,z_A),(y,z_B)) = g(x,y,z_A+z_B); Alice and Bob then run the two-party protocol for f.
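As a sketch, the reduction is just resharing; here the secure 2-party protocol for f is left abstract and f is evaluated in the clear, only to check that the reduction computes the right value:

```python
import random

def three_party_g(x, y, z, g, m):
    zA = random.randrange(m)           # Carol's share for Alice
    zB = (z - zA) % m                  # Carol's share for Bob; z = zA + zB mod m
    # Alice and Bob now run any secure 2-party protocol for
    # f((x, zA), (y, zB)) = g(x, y, zA + zB).
    return g(x, y, (zA + zB) % m)

print(three_party_g(2, 3, 4, lambda x, y, z: (x * y + z) % 5, m=5))  # (2*3+4)%5 = 0
```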

14 One-Time Pads for MPC
The good:
– Perfect security
– Great online communication
The bad:
– Exponential offline communication
Can we do better?
– Yes, if f has small circuit complexity
– Idea: process the circuit gate by gate
– k=3, t=1: can use the one-time pad approach
– k>2t: use “multiplicative” (aka MPC-friendly) codes
– Communication ∝ circuit size, rounds ∝ circuit depth

15–17 MPC vs. Communication Complexity
Three parties holding inputs a, b, c.

              Communication complexity       MPC
Goal          each party learns f(a,b,c)     each party learns only f(a,b,c)
Upper bound   O(n) (n = input length)        O(size(f)) [BGW88,CCD88]
Lower bound   Ω(n) (for most f)              Ω(n) (for most f)

Big open question: poly(n) communication for all f?
– Would be a “fully homomorphic encryption of information-theoretic cryptography”

18 Question Reformulated
Is the communication complexity of MPC strongly correlated with the computational complexity of the function being computed?
[Figure: efficiently computable functions (= communication-efficient MPC) inside the set of all functions (= no communication-efficient MPC)]

19 [Figure: timeline 1990–2000 relating the problems, including [KT00]]
The three problems are closely related [IK04].

20 Private Information Retrieval (PIR) [Chor-Goldreich-Kushilevitz-Sudan95]
A client retrieves x_i from a database x ∈ {0,1}^n held by one or more servers, without revealing i.
“Information-theoretic” vs. computational privacy.
Main question: minimize communication (log n vs. n).

21 A Simple I.T. PIR Protocol
View x as an n^{1/2} × n^{1/2} matrix X. The client picks random queries q_1, q_2 ∈ {0,1}^{n^{1/2}} with q_1 + q_2 = e_i and sends q_j to server S_j. Each server replies a_j = X·q_j, and a_1 + a_2 = X·e_i is the column containing x_i.
⇒ 2-server PIR with O(n^{1/2}) communication.
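A runnable sketch over GF(2) (the database length is assumed to be a perfect square; names are mine):

```python
import random

def pir_2server(x_bits, i):
    # Database viewed as an s x s bit matrix X; queries satisfy q1 XOR q2 = e_col,
    # so each query on its own is uniformly random and reveals nothing about i.
    s = int(len(x_bits) ** 0.5)
    X = [x_bits[r*s:(r+1)*s] for r in range(s)]
    row, col = divmod(i, s)
    q1 = [random.randrange(2) for _ in range(s)]
    q2 = [q1[j] ^ (j == col) for j in range(s)]            # q1 XOR e_col
    # Each server j returns a_j = X * q_j over GF(2) (one bit per row).
    a1 = [sum(X[r][j] & q1[j] for j in range(s)) % 2 for r in range(s)]
    a2 = [sum(X[r][j] & q2[j] for j in range(s)) % 2 for r in range(s)]
    column = [a1[r] ^ a2[r] for r in range(s)]             # = X * e_col
    return column[row]

db = [random.randrange(2) for _ in range(16)]
assert all(pir_2server(db, i) == db[i] for i in range(16))
```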

22 A Simple Computational PIR Protocol [Kushilevitz-Ostrovsky97]
Tool: (linear) homomorphic encryption — from E(a) and E(b) one can compute E(a+b).
Protocol (x viewed as an n^{1/2} × n^{1/2} matrix X):
– Client sends E(e_i), e.g. E(0) E(0) E(1) E(0) (= c_1 c_2 c_3 c_4)
– Server replies with E(X·e_i): for each row it homomorphically combines the c_j with X[row,j] = 1
– Client recovers the i-th column of X
⇒ 1-server CPIR with ~O(n^{1/2}) communication.
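A toy end-to-end sketch. The slide leaves the homomorphic scheme abstract; here Paillier is used as one convenient additively homomorphic choice (Enc(a)·Enc(b) mod n² = Enc(a+b)), with tiny, completely insecure parameters for illustration only:

```python
import random
from math import gcd, lcm

p_, q_ = 2027, 2029                        # toy primes; real keys are ~2048-bit
n, n2 = p_ * q_, (p_ * q_) ** 2
lam = lcm(p_ - 1, q_ - 1)
mu = pow(lam, -1, n)                       # valid since we use g = n + 1

def enc(m):
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:                 # r must be invertible mod n
            return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def cpir_1server(x_bits, i):
    s = int(len(x_bits) ** 0.5)
    X = [x_bits[r*s:(r+1)*s] for r in range(s)]
    row, col = divmod(i, s)
    query = [enc(1 if j == col else 0) for j in range(s)]   # Enc(e_col)
    answer = []
    for r in range(s):
        acc = enc(0)                       # server: Enc(sum_j X[r][j]*e_col[j])
        for j in range(s):
            if X[r][j]:
                acc = acc * query[j] % n2  # homomorphic addition
        answer.append(acc)                 # = Enc(X[r][col])
    return dec(answer[row])               # client decrypts its column

db = [random.randrange(2) for _ in range(16)]
assert all(cpir_1server(db, i) == db[i] for i in range(16))
```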

23 Why Information-Theoretic PIR?
Cons:
– Requires multiple servers
– Privacy against limited collusions only
– Worse asymptotic complexity (with constant k): 2^{(log n)^ε} [Yekhanin07, Efremenko09] vs. polylog(n) [Cachin-Micali-Stadler99, Lipmaa05, Gilboa-I14]
Pros:
– Interesting theoretical question
– Unconditional privacy
– Better “real-life” efficiency
– Allows for very short (logarithmic) queries or very short (constant-size) answers
– Closely related to locally decodable codes & friends

24 Locally Decodable Codes
A code x ↦ y with:
– High robustness: y may be corrupted in a constant fraction of positions
– Local decoding: any bit x_i can be recovered (with probability ≥ 0.51) by reading only k symbols of y
Question: how large should the code length m(n) be in a k-query LDC?
– k=2: 2^{Θ(n)}
– k=3: between Ω(n²) and 2^{2^{Õ(√log n)}}
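The classical k=2 example, matching the 2^{Θ(n)} bound above, is the Hadamard code: y[a] = ⟨x,a⟩ mod 2 for every a ∈ {0,1}^n, decoded via x_i = y[a] ⊕ y[a ⊕ e_i] for a random a. Each of the two queries is individually uniform, so corrupting a δ fraction of y fools the decoder with probability at most 2δ. A small sketch:

```python
import random
from itertools import product

def hadamard_encode(x):
    # Length-2^n code: one bit <x, a> mod 2 per vector a.
    return {a: sum(xi & ai for xi, ai in zip(x, a)) % 2
            for a in product((0, 1), repeat=len(x))}

def local_decode(y, i, n):
    a = tuple(random.randrange(2) for _ in range(n))
    b = tuple(aj ^ (j == i) for j, aj in enumerate(a))   # a XOR e_i
    return y[a] ^ y[b]                                   # = <x, e_i> = x_i

x = [1, 0, 1, 1]
y = hadamard_encode(x)
for a in list(y)[:2]:          # corrupt a few positions
    y[a] ^= 1
print([local_decode(y, i, 4) for i in range(4)])         # usually [1, 0, 1, 1]
```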

25 From I.T. PIR to LDC [Katz-Trevisan00]
k-server PIR with α-bit queries and β-bit answers ⇒ k-query LDC of length 2^α over Σ = {0,1}^β, via y[q] = Answer(x,q).
– Uniform PIR queries ⇒ “smooth” LDC decoder ⇒ robustness
– Arrows can be reversed
Simplifying assumptions:
– Servers compute the same function of (x,q)
– Each query is uniform over its support set
Binary LDC ⇔ PIR with one answer bit per server.

26 Applications of Local Decoding
Coding:
– LDC, locally recoverable codes (robustness)
– Batch codes (load balancing)
Cryptography:
– Instance hiding, PIR (secrecy)
– Efficient MPC for “worst” functions
Complexity theory:
– Locally random reductions, PCPs
– Worst-case to average-case reductions, hardness amplification

27 Complexity of PIR: Total Communication
Mainly interesting for k=2.
– Upper bound (k=2): O(n^{1/3}) [CGKS95]; tight in a restricted model [RY07]
– Lower bound (k=2): 5 log n [Man98,…,WW05]
No natural coding analogue.

28 Complexity of PIR: Short Answers
Short answers = O(1) bits from each server
– Closely related to k-query binary LDCs
k=2:
– Simple O(n) upper bound [CGKS95]: PIR analogue of the Hadamard code
– Ω(n) lower bound [GKST02, KdW04]
k > log n / log log n:
– Simple polylog(n) upper bound [BF90, CGKS95]: PIR analogue of the RM code
– Binary LDCs of length poly(n) with k = polylog(n) queries

29 Complexity of PIR: Short Answers
k=3:
– Lower bound: 2 log n [KdW04,…,Woo07]
– Upper bounds: O(n^{1/2}) [CGKS95]; n^{O(1/log log n)} [Yekhanin07], assuming infinitely many Mersenne primes; n^{Õ(1/√log n)} [Efremenko09]
– More practical variant [BIKO12]

30 Complexity of PIR: Short Answers
k=4,5,6,…:
– Lower bound: c(k)·log n [KdW04,…,Woo07]
– Upper bounds: O(n^{1/(k−1)}) [CGKS95]; n^{O(1/log log n)} [Yekhanin07], assuming infinitely many Mersenne primes; n^{Õ(1/(log n)^{c'(k)})} [Efremenko09]

31 Complexity of PIR: Short Queries
Short queries = O(log n) bits to each server
– Closely related to poly(n)-length LDCs over large Σ
– Application: PIR with preprocessing [BIM00]
k=2,3,4,…:
– Answer length O(n^{1/k+ε}) [BI01]
– Lower bounds: ???

32 Complexity of PIR: Low Storage
Different servers may store different functions of x
– Goal: minimize communication subject to storage rate 1−ε
– Corresponds to binary LDCs with rate 1−ε
Rate 1−ε, k = O(n^ε), 1-bit answers:
– Multiplicity codes [DGY11]
– Lifting of affine-invariant codes [GKS13]
– Expander codes [HOW13]

33 Best 2-Server PIR [CGKS95,BI01]
Reduce to private polynomial evaluation over F_2:
– Servers: x ↦ p = degree-3 polynomial in m ≈ n^{1/3} variables
– Client: i ↦ z ∈ F_2^m
– The local mappings must satisfy p_x(z(i)) = x_i for all x, i
– Simple implementation: z(i) = the i-th weight-3 binary vector
Privately evaluate p(z):
– Client: splits z into z = a+b, where a, b are random; sends a to S1 and b to S2
– Servers: write p(z) = p(a+b) as p_a(b) + p_b(a), where deg(p_a), deg(p_b) ≤ 1, p_a is known to S1, and p_b is known to S2; send descriptions of p_a, p_b to the client, who outputs p_a(b) + p_b(a)
Degree d = O(log n) ⇒ O(log n)-bit queries, O(n^{1/2+ε})-bit answers.
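A runnable sketch of the degree-3 version (the monomial encoding and names are mine). Expanding each cube monomial of p(a+b), the terms with at most one b-factor form S1's affine function of b, and the terms with at most one a-factor form S2's affine function of a:

```python
import random
from itertools import combinations

def pir_cgks(x_bits, i):
    n = len(x_bits)
    m = 3
    while len(list(combinations(range(m), 3))) < n:    # need C(m,3) >= n
        m += 1
    triples = list(combinations(range(m), 3))          # z(i) = indicator of triples[i]
    monomials = [triples[j] for j in range(n) if x_bits[j]]   # p(Z): sum of cubes

    z = [0] * m
    for t in triples[i]:
        z[t] = 1
    a = [random.randrange(2) for _ in range(m)]        # query to S1: uniform
    b = [zj ^ aj for zj, aj in zip(z, a)]              # query to S2: z XOR a

    def affine_part(v):
        # The server's part of p(a+b), written as c + <w, u> where u is the
        # OTHER server's query: all-v terms -> constant, one-u terms -> linear.
        c, w = 0, [0] * m
        for (t0, t1, t2) in monomials:
            c ^= v[t0] & v[t1] & v[t2]
            w[t0] ^= v[t1] & v[t2]
            w[t1] ^= v[t0] & v[t2]
            w[t2] ^= v[t0] & v[t1]
        return c, w

    c1, w1 = affine_part(a)                            # S1's answer (m+1 bits)
    c2, w2 = affine_part(b)                            # S2's answer (m+1 bits)
    dot = lambda w, u: sum(wj & uj for wj, uj in zip(w, u)) % 2
    return c1 ^ dot(w1, b) ^ c2 ^ dot(w2, a)           # = p(z) = x_i

db = [random.randrange(2) for _ in range(20)]
assert all(pir_cgks(db, i) == db[i] for i in range(20))
```

Queries and answers are each O(m) = O(n^{1/3}) bits, matching the total-communication bound on slide 27.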

34 Tool: Secret Sharing
Randomized mapping of a secret s to shares (s_1,s_2,…,s_k)
– Linear secret sharing: shares = L(s, r_1,…,r_m) for a linear map L
Access structure: a subset A of 2^[k] specifying the authorized sets
– Sets of shares not in A should reveal nothing about s
– The optimal share complexity for a given A is wide open
– Here: k=3, each share hides s, all shares together determine s
Useful examples of linear schemes:
– Additive sharing: s = s_1+s_2+s_3
– Shamir’s secret sharing: s_i = p(i) where p(x) = s + rx
– CNF secret sharing: s = r_1+r_2+r_3, s_1 = (r_2,r_3), s_2 = (r_1,r_3), s_3 = (r_1,r_2)
– CNF is “maximal”, additive is “minimal”
For any linear scheme: [v], x ↦ [⟨v,x⟩] (without interaction)
– PIR with short answers reduces to the client sharing [e_i] while hiding i
– Enough to share a multiple of [e_i]
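Sketches of the three example schemes over a small prime field, plus the noninteractive linear-evaluation trick used for PIR (field size and names are mine):

```python
import random

P = 97  # small public prime field; 3 parties, 1-privacy throughout

def additive_share(s):
    s1, s2 = random.randrange(P), random.randrange(P)
    return [s1, s2, (s - s1 - s2) % P]               # s = s1 + s2 + s3

def shamir_share(s):
    r = random.randrange(P)                          # p(X) = s + r*X
    return [(s + r * i) % P for i in (1, 2, 3)]      # party i holds p(i)

def cnf_share(s):
    r1, r2 = random.randrange(P), random.randrange(P)
    r3 = (s - r1 - r2) % P                           # s = r1 + r2 + r3
    return [(r2, r3), (r1, r3), (r1, r2)]            # party i misses only r_i

p1, p2, p3 = shamir_share(42)
print((2 * p1 - p2) % P)                             # 42: p(0) = 2p(1) - p(2)

# Linearity: from shares of a vector v and a public x, each party locally
# takes the inner product of its share vector with x; the results are
# shares of <v, x>.  (The PIR use: v = e_i, x = the database.)
v, x = [1, 0, 1], [5, 7, 9]
per_party = list(zip(*[additive_share(vj) for vj in v]))
print(sum(sum(sj * xj for sj, xj in zip(sh, x)) for sh in per_party) % P)  # 14
```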

35 Tool: Matching Vectors [Yek07, Efr09, DGY10]
Vectors u_1,…,u_n in Z_m^h are S-matching if:
– ⟨u_i,u_i⟩ = 0
– ⟨u_i,u_j⟩ ∈ S for i ≠ j (where 0 ∉ S)
Surprising fact: super-polynomial n(h) when m is composite
– For instance, n = h^{O(log h)} for m=6, S={1,3,4}
– Based on large set systems with restricted intersections modulo m [BF80, Gro00]
Matching vectors can be used to compress a “negated” shared unit vector:
– [v] = [(⟨u_i,u_1⟩, ⟨u_i,u_2⟩, …, ⟨u_i,u_n⟩)]
– v is 0 only in its i-th entry
Apply local share conversion to obtain shares of [v'], where v' is nonzero only in the i-th entry
– Efremenko09: share conversion from Shamir to additive, requires large m
– Beimel-I-Kushilevitz-Orlov12: share conversions from CNF to additive, m = 6, 15, …
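Real constructions come from the set systems of [BF80, Gro00]. As a toy illustration of the definition only (not the super-polynomial construction), a brute-force greedy search for an S-matching family with m=6, S={1,3,4} in Z_6^3:

```python
from itertools import product

m, h, S = 6, 3, {1, 3, 4}

def ip(u, v):
    return sum(a * b for a, b in zip(u, v)) % m      # inner product mod m

family = []
for u in product(range(m), repeat=h):
    if not any(u) or ip(u, u) != 0:
        continue                                     # need u != 0 and <u,u> = 0
    if all(ip(u, v) in S for v in family):
        family.append(u)                             # need <u_i,u_j> in S, i != j
print(len(family), family)                           # e.g. (0,3,3), (1,1,2), ...
```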

36 Matching Vectors & Circuits
[Figure: inputs x_1, x_2, x_3, …, x_h; “VC-dim mod 6”: 2^{h^{log h}} <<< 2^{2^h}]
The actual dimension is wide open; related to the size of:
– Set systems with restricted intersections [BF80, Gro00]
– Matching vector sets [Yek07, Efr09, DGY10]
– Degree of representing “OR” modulo m [BBR92]

37 Share Conversion
Given: CNF shares of s mod 6, locally convert them into shares of some s' such that:
– s = 0 ⇒ s' ≠ 0
– s ∈ {1,3,4} ⇒ s' = 0

38 Big Set System with Limited mod-6 Intersections

39 Big Set System with Limited mod-6 Intersections
[Figure: construction based on r-cliques]

40 PIR  MPC Arbitrary polylogarithmic 3-server PIR  MPC with poly(|input|) communication [IK04] Applications of computationally efficient PIR [BIKK14] –2-server PIR  OT-complexity of secure 2-party computation –3-server PIR  Correlated randomness complexity Applications of “decomposable” PIR [BIKK14] –Private simultaneous messages protocols –Secret-sharing for graph access structures

41 Open Problems: PIR and LDC
Understand limitations of current techniques:
– Better bounds on matching vectors?
– More powerful share conversions?
t-private PIR with n^{o(1)} communication:
– Known with 3t servers [Barkol-I-Weinreb08]
– Related to locally correctable codes
Any savings for (classes of) polynomial-time f: {0,1}^n → {0,1}?
Barriers for strong lower bounds?
– [Dvir10]: strong lower bounds for locally correctable codes imply explicit rigid matrices and size-depth lower bounds

42 Open Problems: MPC
High end: understand the complexity of the “worst” f
– O(2^{n^ε}) vs. Ω(n)
– Closely related to PIR and LDC
Mid range: nontrivial savings for “moderately hard” f?
Low end: bounds on the amortized rate of finite f
– In the honest-majority setting
– Given noisy channels

