Secure Computation of Linear Algebraic Functions


1 Secure Computation of Linear Algebraic Functions
Enav Weinreb – CWI, Amsterdam
Joint work with: Matt Franklin, Eike Kiltz, Payman Mohassel and Kobbi Nissim
I will present results from several papers, written together with...

2 Talk Overview
Secure Computation in General
Secure Linear Algebra Based on “Oblivious Gaussian Elimination”
Secure Linear Algebra Based on Linearly Recurrent Sequences
Recent Developments and Open Problems
We will start with a general overview of secure computation. We will then present two protocols for secure computation of linear algebraic functions, the first based on the well-known Gaussian elimination procedure and the second based on linearly recurrent sequences. We will end by reporting on a new, improved protocol and presenting a few open problems.

3 Secure Computation
Alice has an input x. Bob has an input y.
Let f:{0,1}^2n → {0,1} be a Boolean function. Alice and Bob wish to compute f(x,y) without leaking any further information on their private inputs. The players cooperate but do not trust each other.
The task of secure computation involves two players, Alice and Bob, holding inputs x and y respectively. They wish to compute a function f(x,y). However, the players do not trust each other and want to leak no information on their private inputs apart from what can be deduced from the output of f. As often happens in the business world, the players wish to cooperate on the one hand, but to disclose as little information as possible on the other.

4 Secure Computation - Example
The Millionaires’ Problem: Alice holds x, Bob holds y, and they wish to compute x > y ?
A well-known example of secure computation is the following problem, known as the millionaires’ problem. Each player has a certain amount of money, and the players wish to compute which one of them has more money.

5 Secure Computation - Example
The Millionaires’ Problem: x = 1,000,000,000$. Compute x > y ? Answer: x < y.
For example, if a player has a billion dollars and the output of the function is that he is the wealthier one, then he learns no information on the other player's input, apart from the fact that it is smaller than his own amount...

6 “Leak no further information”
How do we formulate the security requirement?
Ideal world - a third trusted party. Alice and Bob send their inputs to the trusted party. The trusted party computes f(x,y) and sends the answer to the players.
Prove a claim of the form: “whatever a ‘bad’ Alice can do while interacting with Bob, she could also do while interacting with the trusted party”.
Computational security versus information theoretic security.

7 “Leak no further information”
Levels of security:
Computational - the adversary is computationally limited.
Information theoretic - the adversary is computationally unbounded.
[Diagram: ideal world with a trusted party vs. the real world; inputs x and y, output f(x,y); a cheating player tries to learn some h(x).]
How shall we formulate the requirement that no information beyond f(x,y) is leaked? Suppose we have designed a protocol and we want to prove that it is secure. In an ideal world, Alice and Bob could use a third trusted party: the players would send their inputs to the trusted party, who would then compute f(x,y) and send the output to both players. Clearly, no information beyond f(x,y) is leaked in this process. To prove the security of a protocol in the real world, we show that if a player behaves in an adversarial way and tries to learn some function h of the other party's input, then there exists an adversary that could learn the same information while communicating only with the trusted party. This essentially means that nothing beyond f(x,y) can be deduced from the protocol. We consider two levels of security: computational security, where the proof uses the fact that the adversary is computationally bounded and cannot perform certain computations, such as factoring large numbers; and the information theoretic setting, where we must defend against an adversary with unlimited power.

8 Complexity Measures and Adversary Model
Important complexity measures: communication complexity, round complexity, computational complexity.
Adversary models:
Honest but curious - the adversary follows the protocol but tries to learn more information.
Malicious - the adversary arbitrarily deviates from the protocol.
Having defined secure computation, the next thing we want to discuss is whether it can be done efficiently. Efficiency is measured in terms of communication complexity, that is, the number of bits transmitted during the protocol execution, and round complexity, that is, the level of interactivity of the protocol, where low interactivity leads to much more efficient protocols in practice. A third complexity measure is the computational complexity of the players, but this measure will be ignored in this talk. Next, we discuss two types of adversaries. An honest-but-curious adversary behaves according to the protocol specification, but at the end of the day reads the log of the communication, trying to deduce information about the other party. A malicious party can deviate arbitrarily from the protocol. We focus on honest-but-curious adversaries throughout this talk.

9 Boolean Circuit Complexity
Let f:{0,1}^2n → {0,1}. We consider circuits with the gates {AND, OR, NOT} that compute f in the natural way.
circuit size - number of gates
circuit depth - maximum distance from an input wire to the output
[Diagram: a circuit over inputs x1, ..., x8 evaluating to 1.]
Interestingly, most known protocols relate to the complexity of the function Alice and Bob are trying to compute. To describe the complexity of a function, we use Boolean circuits: for a function f, we go to the nearest electronics store and buy Boolean gates (AND, OR and NOT gates), and the computation is done in the natural way. The complexity measures are the size of the circuit, that is, its number of gates, and the depth of the circuit, which is the maximum distance from an input wire to the output.
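As a toy illustration (not part of the talk): the sketch below evaluates a small circuit given as a topologically ordered list of gates and reports its size and depth. The circuit, gate names and input values are made up for the example.

```python
# Toy illustration: a Boolean circuit as a topologically ordered list of gates.
# Size is the number of gates; depth is the longest input-to-output path.

def evaluate(circuit, inputs):
    """circuit: list of (name, op, operand_names); inputs: dict name -> bit."""
    values = dict(inputs)
    depths = {name: 0 for name in inputs}
    for name, op, args in circuit:
        bits = [values[a] for a in args]
        if op == "AND":
            values[name] = int(all(bits))
        elif op == "OR":
            values[name] = int(any(bits))
        elif op == "NOT":
            values[name] = 1 - bits[0]
        depths[name] = 1 + max(depths[a] for a in args)
    return values, depths

# Example: f(x1,x2,x3,x4) = (x1 AND x2) OR ((NOT x3) AND x4)
circuit = [
    ("g1", "AND", ["x1", "x2"]),
    ("g2", "NOT", ["x3"]),
    ("g3", "AND", ["g2", "x4"]),
    ("out", "OR", ["g1", "g3"]),
]
values, depths = evaluate(circuit, {"x1": 1, "x2": 0, "x3": 0, "x4": 1})
print(values["out"], len(circuit), depths["out"])   # 1 (output), 4 (size), 3 (depth)
```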

10 General Result – two-party [Yao]
A Boolean circuit that computes f(x,y) with size s(n) implies a secure two-party protocol for computing f(x,y) with:
communication complexity linear in s(n)
2 rounds
computational security
The following is a very famous result by Yao that relates the Boolean circuit complexity of a function to the efficiency of secure protocols for it. If f has a circuit with s(n) gates, then it has a protocol with communication complexity linear in s(n), with one message from Alice to Bob and one from Bob to Alice. The protocol is secure only against computationally bounded adversaries. However, it can be shown that in the two-party case, most functions do not have any protocol secure against computationally unbounded adversaries.

11 General Result – Multi-Party [BGW, CCD]
A Boolean circuit that computes f(x1,...,xk) with size s(n) and depth d(n) implies a secure k-party protocol for computing f(x1,...,xk) with:
communication complexity linear in s(n)
round complexity d(n)
information theoretic security against:
fewer than k/2 adversarial players – honest but curious
fewer than k/3 adversarial players – malicious
Secure computation generalizes naturally to protocols with more than two players. Here, if we have a majority of honest players, we can get security in the presence of unbounded adversaries as well. Specifically, a circuit with s(n) gates and depth d(n) implies a protocol with communication linear in s(n), round complexity d(n), and information theoretic security. The protocols are secure as long as fewer than k/2 players misbehave in an honest-but-curious way, or as long as fewer than k/3 players behave maliciously.

12 Talk Overview
Secure Computation in General
Secure Linear Algebra Based on “Oblivious Gaussian Elimination”
Secure Linear Algebra Based on Linearly Recurrent Sequences
Recent Developments and Open Problems
We proceed to discuss secure computation of linear algebraic functions.

13 Linear Algebraic Functions
Matrix singularity: Alice and Bob hold A ∊ F^{n×n} and B ∊ F^{n×n} respectively, where F is a finite field. They wish to (securely) compute whether M = A+B is singular.
An efficient secure protocol for singularity leads to efficient protocols for:
solving a joint system of equations (linear constraints may contain private information!)
computing det(M), char.poly(M), min.poly(M)
computing subspace intersections
more...
Our main example will be matrix singularity. Alice and Bob hold square matrices over some finite field, and wish to compute whether the sum of their matrices is a singular matrix. A solution to this problem leads to solutions to many other problems, such as solving joint linear systems, computing the determinant and the characteristic and minimal polynomials of the matrix, and more...

14 Applying General Results
The circuit complexity of matrix singularity is similar to the number of multiplications in matrix product; the best known bound is O(n^2.38) [Coppersmith Winograd].
The input size is only n^2 - a trivial non-cryptographic protocol has complexity n^2.
Can we achieve this in a secure protocol? Can we achieve it while keeping the round complexity low?
Before designing a protocol ourselves, let us check what protocols we can derive from the general results we saw before. The circuit complexity of matrix singularity is strongly related to the number of multiplications required for matrix product. This problem has been intensively investigated, and the best bound known for it is n to the 2.38. This gives us secure protocols with communication complexity n^2.38. However, the input size is only n^2, hence without the privacy requirement the problem has a protocol of complexity n^2, in which Alice simply sends her input to Bob, who computes the answer and sends it to Alice. The questions we address in this talk are whether we can achieve n^2 communication in a secure protocol, and whether we can do so while keeping the round complexity low.

15 A Previous Result
“Secure linear algebra in a constant number of rounds.” [Cramer Damgård]
information theoretic security
constant round complexity
communication complexity O(n^3)
In previous work, Cramer and Damgård discussed the problem of secure linear algebra. Their protocols have information theoretic security in the multiparty setting and enjoy constant round complexity, which is better than the round complexity of the general results, bounded by the depth of the circuit computing the function. However, their communication complexity was O(n^3), which is not as efficient as we desire.

16 Our Results
A secure protocol for singularity(A+B) in the computational two-party setting with:
communication complexity O(n^2 log n)
round complexity O(log n)
Recent improvements [Mohassel W]: constant rounds, information theoretic security.
We turn now to describe our results. The results presented in this talk assume the adversary is computationally bounded. We show a protocol for computing the singularity of A+B with communication complexity O(n^2 log n), which is very close to n^2, and with round complexity O(log n). In a recent paper with Payman Mohassel we have been able to improve these results to hold in the presence of unbounded adversaries and to enjoy constant round complexity.

17 Oblivious Gaussian Elimination
Protocol from [Nissim W]. Achieves:
communication complexity O(n^2 log n)
round complexity O(n^0.275)
Cryptographic assumption: public-key homomorphic encryption.
The first protocol I will present was introduced in a joint paper with Kobbi Nissim. It achieves communication complexity O(n^2 log n) and round complexity O(n^0.275). We will later see a different protocol with improved round complexity. The cryptographic assumption needed for this protocol is the existence of homomorphic encryption schemes, which we describe on the next slide.

18 Tool: Homomorphic Encryption
Public key encryption scheme:
Public key PK is published – everybody can encrypt.
Secret key SK is private – only one party can decrypt.
Homomorphic properties (with PK only):
given E(a) and E(b), compute E(a+b)
given E(a) and a constant c in the clear, compute E(c·a)
Corollary: an entry-wise encrypted vector or matrix can be multiplied by a scalar or matrix held in the clear.
Example: [Goldwasser Micali] (QR) for F = GF(2).
We discuss public key encryption schemes, that is, a public encryption key is published, and thus everybody can encrypt, while only the player who knows the secret decryption key can decrypt. Homomorphic encryption schemes have the following further properties. Given encryptions of two field elements, one can compute an encryption of their sum without knowledge of the secret decryption key. Given an encryption of a and a value c in the clear, one can compute an encryption of a times c. As a corollary, we can multiply a coordinate-wise encrypted vector by a scalar we have in the clear, and multiply an encrypted matrix by a matrix we have in the clear. Homomorphic encryption schemes exist under certain cryptographic assumptions. For the presentation of this protocol, we assume we are working over F_2, the field with two elements.
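To make the interface concrete, here is a minimal sketch of Goldwasser–Micali over GF(2) with deliberately tiny, insecure toy parameters (P = 7, Q = 11 and the helper names are illustrative choices, not from the talk); it shows only the homomorphic operations the protocol relies on.

```python
# Toy Goldwasser-Micali instance over GF(2) (insecure parameters, illustration only).
import random

P, Q = 7, 11          # toy primes; a real instantiation uses large primes
N = P * Q
Y = 6                 # quadratic non-residue mod both P and Q, Jacobi symbol +1 mod N

def encrypt(bit):
    """E(bit) = Y^bit * r^2 mod N for a random r coprime to N (public key only)."""
    while True:
        r = random.randrange(2, N)
        if r % P and r % Q:
            break
    return (pow(Y, bit, N) * pow(r, 2, N)) % N

def decrypt(c):
    """bit = 0 iff c is a quadratic residue mod P (needs the secret factorization)."""
    return 0 if pow(c, (P - 1) // 2, P) == 1 else 1

def add(c1, c2):
    """Homomorphic addition over GF(2): E(a) * E(b) = E(a XOR b)."""
    return (c1 * c2) % N

def scalar_mul(c, const_bit):
    """Multiply an encrypted bit by a bit known in the clear."""
    return c if const_bit == 1 else encrypt(0)

# Bob, holding E(a) and his own bit b, computes E(a + b) without the secret key.
a, b = 1, 1
c_sum = add(encrypt(a), encrypt(b))
assert decrypt(c_sum) == (a ^ b)
```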

19 Initial Step
Alice holds A ∊ F^{n×n}, Bob holds B ∊ F^{n×n}. Alice generates (PK, SK) and sends Bob PK and E(A). Bob computes E(M) = E(A+B). Is M singular?
We now start with the description of our protocol. As a first step we break the symmetry between Alice and Bob. Alice generates keys for a public-key homomorphic encryption scheme and sends Bob the public key and an encryption of her matrix. Bob can then encrypt his own input and use the homomorphic properties to compute an encryption of the sum. We are now faced with the following problem: Bob holds a matrix encrypted under Alice's key, and they wish to cooperate in order to decide whether the encrypted matrix is singular, without revealing any other information about the matrix.

20 Algorithms on Encrypted Data
Bob can locally compute E(a+b) from E(a) and E(b), and E(c·a) from E(a) and a constant c. What about multiplying two encrypted values? Use Alice!
What computations can Bob perform locally? All computations allowed by the homomorphic encryption scheme, that is, computations that do not require the secret key. What about multiplying two encrypted inputs? Is there a homomorphic encryption scheme that also supplies the ability to multiply encrypted values? This is a great open problem in cryptography. In our case, we can do it using communication with Alice.

21 Multiplication
Bob holds E(a) and E(b). He chooses random r_a, r_b, computes E(a+r_a) and E(b+r_b), and sends them to Alice. Alice decrypts, multiplies, and returns E((a+r_a)(b+r_b)). Bob then removes the terms other than ab to obtain E(ab).
Let us describe a very simple way for Bob to multiply two encrypted elements without revealing any information about them to Alice. Bob chooses two random values r_a and r_b, locally computes E(a+r_a) and E(b+r_b), and sends the ciphertexts to Alice. Alice then decrypts and sees two random values. She multiplies the values, encrypts, and sends the output to Bob. Bob can then remove all the terms other than the encryption of ab, to get an encryption of the product.
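A minimal simulation of this blinding step, building directly on the toy scheme sketched above (same hypothetical parameters). Over GF(2), removing a term means adding its encryption back in, and the three terms Bob removes are a·r_b, b·r_a and r_a·r_b.

```python
# Sketch of the multiplication sub-protocol on top of the toy GM scheme above.

def bob_blind(ct_a, ct_b):
    """Bob masks E(a), E(b) with random bits r_a, r_b before sending to Alice."""
    r_a, r_b = random.randrange(2), random.randrange(2)
    return (add(ct_a, encrypt(r_a)), add(ct_b, encrypt(r_b))), (r_a, r_b)

def alice_multiply(masked_a, masked_b):
    """Alice decrypts the two masked (uniformly random) bits and returns E(product)."""
    return encrypt(decrypt(masked_a) & decrypt(masked_b))

def bob_unblind(ct_prod, ct_a, ct_b, r_a, r_b):
    """(a+r_a)(b+r_b) = ab + a*r_b + b*r_a + r_a*r_b; Bob removes the last three terms."""
    ct = add(ct_prod, scalar_mul(ct_a, r_b))
    ct = add(ct, scalar_mul(ct_b, r_a))
    return add(ct, encrypt(r_a & r_b))

# End-to-end check: Bob obtains E(a*b) while Alice only ever saw masked values.
a, b = 1, 1
ct_a, ct_b = encrypt(a), encrypt(b)
masked, (r_a, r_b) = bob_blind(ct_a, ct_b)
ct_ab = bob_unblind(alice_multiply(*masked), ct_a, ct_b, r_a, r_b)
assert decrypt(ct_ab) == a * b
```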

22 Multiplying a Vector by a Scalar
Communication complexity is O(n). Similarly, Bob can use the help of Alice to multiply an encrypted vector by an encrypted scalar. The communication complexity in both directions would be O(n).

23 Encrypted Matrix Singularity (reminder)
Is E(M) singular?
Let us go back to our problem: Bob holds a matrix encrypted under Alice's public key, and the players wish to decide whether the matrix is singular or not.

24 Gaussian Elimination
Find a row that “starts” with a 1.
Swap this row and the top row.
“Eliminate” the leftmost column.
Continue recursively.
If Bob held his input matrix in the clear, then deciding the singularity of the matrix could easily be done using the well-known Gaussian elimination procedure. Since we are about to execute this algorithm in a non-standard environment, let us recall the details. Recall that we work over GF(2). The first step is to find a row that starts with a 1. Then, we swap it with the first row. We then subtract the first row from every other row that starts with a 1. Then, we continue recursively on the lower-right submatrix.
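For reference, a minimal plaintext sketch of this procedure over GF(2); the oblivious protocol below emulates exactly these steps on an encrypted matrix.

```python
# Plaintext reference: Gaussian elimination over GF(2) deciding singularity
# of an n x n 0/1 matrix.

def is_singular_gf2(M):
    M = [row[:] for row in M]          # work on a copy
    n = len(M)
    for col in range(n):
        # find a row (at or below `col`) with a 1 in this column
        pivot = next((r for r in range(col, n) if M[r][col] == 1), None)
        if pivot is None:
            return True                # no pivot: the matrix is singular
        M[col], M[pivot] = M[pivot], M[col]   # swap it to the top of the submatrix
        for r in range(col + 1, n):    # eliminate the column below the pivot
            if M[r][col] == 1:
                M[r] = [(a ^ b) for a, b in zip(M[r], M[col])]
    return False                       # all n pivots found: full rank

print(is_singular_gf2([[1, 0], [1, 0]]))   # True
print(is_singular_gf2([[1, 1], [0, 1]]))   # False
```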

25 Oblivious Gaussian Elimination
“Find a row that starts with a 1.” “Swap this row and the top row.” Use Alice!
Bob is expected to execute this algorithm on his encrypted input. How can Bob find a row that starts with a 1? Asking Alice to point out such a row would reveal valuable information about the matrix... Moreover, any use of Alice's help costs communication, and we want to design a communication-efficient protocol.

26 Finding a Row Starting with a 1
STEP 1: Randomization. Bob multiplies E(M) by a random full-rank matrix R: E(M) → R·E(M). Set m = log^2 n.
The first step of our protocol introduces some randomness. Bob chooses a full-rank matrix R and multiplies it by E(M) (recall that Bob can multiply an encrypted matrix by a matrix he holds in the clear). We set m to be log^2 n. With very high probability, if there was a 1 somewhere in the leftmost column of M, then there is a 1 somewhere in the top m entries of the leftmost column of RM. On the other hand, if there is no 1 in the leftmost column, we know that the original matrix was singular.
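A small plaintext sketch of this randomization step (the parameters n = 32 and the rejection-sampling of R are illustrative choices, not from the talk): draw R until it is invertible over GF(2) and observe that the top m = log^2 n entries of the first column of RM contain a 1 whenever the first column of M is nonzero.

```python
# Hedged sketch: Bob's randomization step in the clear. A constant fraction of
# random GF(2) matrices are invertible, so rejection sampling terminates quickly.
import math, random

def rank_gf2(M):
    M, n, r = [row[:] for row in M], len(M), 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, n) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][col]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def random_full_rank_gf2(n):
    while True:
        R = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
        if rank_gf2(R) == n:
            return R

def matmul_gf2(A, B):
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

n = 32
m = int(math.log2(n)) ** 2                 # m = (log2 n)^2 = 25 for n = 32
M = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
RM = matmul_gf2(random_full_rank_gf2(n), M)
print(any(row[0] for row in M), any(RM[i][0] for i in range(m)))   # usually both True
```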

27 Finding a Row That Starts with a 1
STEP 2: Moving the 1 to the top row.
We now have a row starting with a 1 within the top m rows of the matrix. We next show how to move this row to the top.

28 Moving the 1 to the Top Row
Bob computes E(M[1,1]·M_1): if M[1,1]=0 Bob gets E(0); if M[1,1]=1 Bob gets E(M_1).
For every 2 ≤ j ≤ m, Bob computes E(M_j) ← E(M_j – M[j,1]·M[1,1]·M_1).
Repeat the same with E(M_2), E(M_3), ..., E(M_m).
Update E(M_1) ← E(M_1 + M_2 + ... + M_m).
Eliminate the leftmost column.
Bob uses the help of Alice to multiply the first row by the top-left entry of the matrix (recall that this can be done with O(n) bits of communication). If the top-left entry is 0, Bob gets an encryption of the zero vector, and the operations he is about to perform will not affect the encrypted matrix. Otherwise, Bob gets an encryption of the first row of the matrix. For every one of the top m rows only, Bob updates the j-th row: he multiplies the leftmost element of that row by the vector he computed before, and subtracts the result from the j-th row. Note that if both the first and the j-th rows start with 1, then after this operation the j-th row starts with 0; otherwise, no change is introduced to the matrix. We then continue to do the same with the second row, then with the third row, and so on. At some point we reach a row that starts with a 1, and then all the other entries in the leftmost column among the top m rows get the value 0. To move this 1 to the top-left position, we simply assign to the first row the sum of all top m rows.
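A plaintext emulation of this pass (a sketch only: every product below stands in for one Alice-assisted multiplication on ciphertexts; the small 3×3 matrix and m = 3 are made-up example values).

```python
# After the elimination passes, exactly one of the top m rows keeps a 1 in the
# leftmost column; summing the top m rows then puts a leading 1 in the first row.

def oblivious_pivot_pass(M, m):
    n = len(M[0])
    for i in range(m):
        pivot_row = [M[i][0] & x for x in M[i]]              # E(M[i,1] * M_i)
        for j in range(m):
            if j == i:
                continue
            coeff = M[j][0]                                  # leftmost entry of row j
            # E(M_j) <- E(M_j - M[j,1] * M[i,1] * M_i), all done on ciphertexts
            M[j] = [(M[j][k] ^ (coeff & pivot_row[k])) for k in range(n)]
    M[0] = [sum(M[i][k] for i in range(m)) % 2 for k in range(n)]   # sum of top m rows
    return M

M = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
print(oblivious_pivot_pass([row[:] for row in M], m=3))
```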

29 Moving the 1 to the Top Row
Continue recursively on the lower-right submatrix. Finally, multiply all the diagonal elements: M is non-singular if and only if the product of the diagonal entries is 1.
We continue this process recursively on the lower-right submatrix. At the end of the process, all the diagonal entries are 1 if and only if M is non-singular. Bob then uses Alice's help to compute the product of all the diagonal entries.

30 Communication Complexity
Single row: O(n) in each direction. One column: n such multiplications, hence O(n^2). Overall, naively: O(n^3). Sending the masked first row only once reduces the Bob → Alice communication to O(n^2).
We proceed to analyze the communication complexity of the protocol. Let us look at the communication from Bob to Alice first. Every time Bob multiplies the first row of the matrix by an entry, he sends O(n) encrypted field elements. In return, Alice sends a vector of encryptions, and thus the communication complexity in the other direction is O(n) as well. To eliminate one column, Alice and Bob perform n such multiplications, and thus the communication complexity of eliminating one column is O(n^2) in both directions. Therefore, eliminating all n columns gives communication complexity O(n^3), which is not good enough to beat the protocols we get from the general results. Fixing the communication complexity from Bob to Alice is easy: when eliminating the first column, he sends the same encryption over and over again, so he can simply send the first row, masked with a random vector, once. Hence, the communication from Bob to Alice is O(n^2), as desired.

31 Lazy Evaluation
Send data “on demand”: Alice keeps intermediate results in her memory and sends Bob only what he needs for the next step, reducing the Alice → Bob communication to O(n) per column, i.e. O(n^2) overall.
Fixing the communication in the other direction requires a little more care. When eliminating one column, Bob sends Alice his top row and leftmost column, masked with random values. Alice then performs computations in the clear and sends Bob information to update his matrix. We note that Alice can store most of the information in her memory and send Bob only the information required for the next round of computation. This method is called “lazy evaluation” and it is often used to reduce the computational complexity of algorithms. This reduces the amount of information sent from Alice to Bob to O(n) per column, and thus to O(n^2) altogether, as required.

32 Talk Overview
Secure Computation in General
Secure Linear Algebra Based on “Oblivious Gaussian Elimination”
Secure Linear Algebra Based on Linearly Recurrent Sequences
Recent Developments and Open Problems
We proceed now to describe a protocol with a similar communication complexity and much better round complexity.

33 Improved Round Complexity
Protocol from [Kiltz Mohassel W Franklin]. Achieves:
communication complexity O(n^2 log n)
round complexity O(log n)
Setting: two-party with computational security. Computational assumption: homomorphic encryption.
The protocol is based on a paper with Eike Kiltz, Payman Mohassel and Matt Franklin. It is in the same setting as the previous protocol, only that it achieves round complexity that is logarithmic in the input size.

34 Linearly Recurrent Sequences
General idea: apply algorithms designed for sparse matrices to secure computation on general matrices.
Assumption: the underlying field is large, |F| > n log n (otherwise, use a field extension).
Interestingly, we apply ideas from algorithms designed for fast linear algebra on sparse matrices to get secure protocols for general matrices. This protocol assumes that the field size is large. If we start from a small field, we can use a field extension.

35 A Simple Reduction
Randomized approach: to check whether M is singular:
Pick a random vector v.
Check whether the system Mx = v is solvable.
Not solvable – M is singular.
Solvable – with high probability (1 – 1/|F|), M is non-singular.
The algorithm starts with a simple reduction. To check whether M is singular, simply pick a random vector v and check whether the system Mx = v is solvable. Clearly, if the system is not solvable we can infer that M is singular. With high probability, if the system is solvable, then the original matrix is non-singular.

36 Deciding if Mx = v is Solvable [Wiedemann]
Consider the n+1 vectors: v, Mv, M^2 v, ..., M^n v.
They are linearly dependent, so there exist a = (a_0, ..., a_n) such that ∑ a_i M^i v = 0.
Linearly recurrent sequences: if ∑ a_i M^i v = 0 then for all j: ∑ a_i M^{i+j} v = M^j (∑ a_i M^i v) = M^j · 0 = 0.
To check the solvability of the system we use the following ideas from an algorithm by Wiedemann, designed for efficient linear algebra on sparse matrices. Look at the vectors v, Mv, M^2 v, up to M^n v. Clearly, these n+1 vectors are linearly dependent. This means that there exist scalars a_0 to a_n such that the linear combination ∑ a_i M^i v gives the zero vector. If instead of starting from M^0 v we start from M^j v and look at the combination ∑ a_i M^{i+j} v, it is easy to see that this combination evaluates to zero as well.

37 Deciding if Mx = v is Solvable [Wiedemann86]
For every b = (b_0, ..., b_n) such that ∑ b_i M^i v = 0, consider the polynomial p_b(x) = ∑ b_i x^i.
The set of such polynomials forms an ideal in F[x] – the annihilator ideal.
Minimal polynomial m(x) – the generator of the ideal.
We now look at all the combinations b_0, ..., b_n such that the corresponding linear combination evaluates to 0. We define a polynomial p_b whose coefficient of x^i is b_i. It is not hard to verify that these polynomials form an ideal. We denote by m(x) the polynomial that generates this ideal. This polynomial is called the minimal polynomial of the sequence.

38 The Annihilator Ideal
Let f_M(x) be the characteristic polynomial of M.
[Cayley Hamilton]: f_M(M) = 0 → f_M(M)v = 0 → f_M(x) is in the annihilator ideal → m(x) | f_M(x).
We will be interested in the constant coefficient of m(x).
Consider f_M(x), the characteristic polynomial of M. By the Cayley–Hamilton theorem, f_M(M) evaluates to 0. Hence f_M(x) is clearly in the annihilator ideal, and therefore the minimal polynomial divides the characteristic polynomial. To check whether the system Mx = v is solvable, we will be interested in the constant coefficient of m(x).

39 The Constant Coefficient of m(x)
Claim: If m(0) ≠ 0 then Mx = v is solvable. If m(0) = 0 then Mx = v is not solvable.
The following simple claim shows the connection between the constant coefficient of m and the solvability of the system. If m(0) is different from 0, then the system is solvable. We would like to claim that otherwise the system is not solvable, but this is not the case. However, if the constant coefficient is 0, we can prove that det(M) = 0 and thus M is clearly singular. We proceed to a simple proof of the claim.

40 The Constant Coefficient of m(x)
Claim: If m(0) ≠ 0 then Mx = v is solvable. If m(0) = 0 then det(M) = 0.
Conclusion: with probability 1 – 1/|F|, m(0) = 0 if and only if det(M) = 0.
That is, we conclude that with high probability, deciding the singularity of M reduces to checking whether the constant coefficient of m(x) is 0.

41 Proof of the Claim
(i) If m(0) ≠ 0 then Mx = v is solvable.
Write m(x) = c_n x^n + ... + c_1 x + c_0, where c_0 = m(0) ≠ 0.
Since m(x) is in the ideal: m(M)v = c_n M^n v + ... + c_1 Mv + c_0 v = 0.
Hence M(c_n M^{n-1} v + ... + c_1 v) = –c_0 v.
Set x = –c_0^{-1} (c_n M^{n-1} v + ... + c_1 v). Then Mx = v, so the system is solvable.
We proceed to prove the claim. Write m(x) as the sum of the terms c_i x^i, and assume c_0 is different from 0. We know that m(M)v evaluates to zero since m(x) is in the ideal. Expanding this expression and doing a straightforward computation, we get that the vector x above is a solution to the system.

42 Proof of the Claim
(ii) If m(0) = 0 then det(M) = 0.
f_M(0) = ±det(M). We saw before that m(x) | f_M(x). Hence f_M(0) = 0, and thus det(M) = 0. □
The second part is very easy as well. det(M) is, up to sign, the constant coefficient of the characteristic polynomial of M. We saw before that m(x) divides f_M(x). Hence, if the constant coefficient of m(x) is 0 then the constant coefficient of f_M(x) is 0, which means that det(M) = 0.

43 Berlekamp/Massey Algorithm
We are interested in computing m(0).
The Berlekamp/Massey algorithm computes m(x) in O(n log n) operations, given v, Mv, ..., M^{2n-1} v.
General idea: the algorithm uses an intermediate result of the extended Euclidean algorithm executed on x^{2n} and on a polynomial whose coefficients are the elements u^T M^0 v, u^T M^1 v, ..., u^T M^{2n-1} v, for some random vector u.
Note that we have reduced the problem of solving linear systems to the problem of computing the constant coefficient of the minimal polynomial of a linearly recurrent sequence. Fortunately, the circuit complexity of this problem is a lot lower than the circuit complexity of the linear algebraic functions above. In particular, the well-known algorithm of Berlekamp and Massey solves this problem with only O(n log n) operations, which translates to a circuit of size roughly n log n. The idea of their algorithm is to use intermediate results of the extended Euclidean algorithm, executed on x^{2n} and on a polynomial that is based on the elements of the linearly recurrent sequence.
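For intuition, here is a plaintext sketch of the whole Wiedemann-style test over a small prime field (the field GF(101), the random projections u and v, and the classical LFSR-synthesis formulation of Berlekamp/Massey are illustrative choices; the slide mentions the equivalent extended-Euclid formulation). It computes the minimal polynomial of the scalar sequence s_i = u^T M^i v and reads off its constant coefficient.

```python
# Hedged plaintext sketch: Berlekamp-Massey over GF(p) plus the m(0) singularity test.
import random

P = 101

def berlekamp_massey(s, p):
    """Shortest LFSR (connection polynomial C, length L) generating s over GF(p)."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(s)):
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        if len(C) < m + len(B):
            C = C + [0] * (m + len(B) - len(C))
        T = C[:]
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

def minimal_poly_constant_term(M, p):
    """m(0) of the sequence s_i = u^T M^i v for random u, v (correct w.h.p.)."""
    n = len(M)
    u = [random.randrange(p) for _ in range(n)]
    v = [random.randrange(p) for _ in range(n)]
    s, w = [], v[:]
    for _ in range(2 * n):                       # s_i for i = 0, ..., 2n-1
        s.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = [sum(M[r][c] * w[c] for c in range(n)) % p for r in range(n)]
    C, L = berlekamp_massey(s, p)
    C = C + [0] * (L + 1 - len(C))
    return C[L]                                  # m(x) = x^L * C(1/x), so m(0) = C[L]

print(minimal_poly_constant_term([[1, 2], [2, 4]], P))   # 0 w.h.p. (singular matrix)
print(minimal_poly_constant_term([[1, 2], [3, 4]], P))   # nonzero (non-singular matrix)
```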

44 And now: the protocol

45 Multiplying Two Matrices
Communication complexity is O(n^2).
Before we summarize our protocol, recall that multiplying two encrypted matrices can be done with communication complexity of only O(n^2). Note that this is a basic operation for which the best communication complexity is better than the one offered by the general protocols based on circuit complexity.
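The slide does not spell the protocol out; one natural way to realize the O(n^2)-communication product is to generalize the scalar blinding step, so treat the following plaintext simulation over GF(2) as an illustrative assumption. Bob masks both encrypted matrices with random matrices, Alice multiplies the masked (uniformly random) matrices in the clear and re-encrypts the result, and Bob removes the three cross terms homomorphically; only O(n^2) ciphertexts cross the wire in each direction.

```python
# Hedged sketch (plaintext simulation over GF(2)) of a blinded matrix product.
import random

def add(A, B):            # entry-wise addition over GF(2)
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mul(A, B):            # matrix product over GF(2)
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def rand_matrix(n):
    return [[random.randrange(2) for _ in range(n)] for _ in range(n)]

def blinded_product(X, Y):
    n = len(X)
    RX, RY = rand_matrix(n), rand_matrix(n)       # Bob's masks
    masked = mul(add(X, RX), add(Y, RY))          # computed by Alice on masked data
    # Bob removes X*RY, RX*Y and RX*RY, each computable with his local information
    return add(add(add(masked, mul(X, RY)), mul(RX, Y)), mul(RX, RY))

n = 4
X, Y = rand_matrix(n), rand_matrix(n)
assert blinded_product(X, Y) == mul(X, Y)
```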

46 Secure Two-Party Algorithm (sketch)
Alice generates (PK, SK); Bob holds E(M) and picks a random vector v.
Bob computes E(M^i v), i = 0, 1, ..., 2n-1 (next slide: O(log n) rounds, O(n^2 log n) communication).
Yao's general method applied to the Berlekamp/Massey algorithm yields E(m(x)): O(1) rounds, O(n log n) communication.
Bob sends E(m(0)·r) for a random r; Alice decrypts and returns the answer: m(0) =? 0.
Bob, starting with an encryption of M, picks a random vector v and computes encryptions of the elements M^i v, i = 0, ..., 2n-1, of the linearly recurrent sequence. This is done using the help of Alice, as we show on the next slide. We then apply the general result of Yao to the Berlekamp/Massey algorithm to compute an encryption of the coefficients of the minimal polynomial of the sequence. Finally, Bob multiplies the constant coefficient by a random field element and sends it to Alice; Alice decrypts and sends Bob the answer. If the constant coefficient is 0, the answer will be zero; otherwise, it will be a random field element.

47 Computing the Sequence E_PK(M^i v)
Bob is given E(M) and computes E(v).
Bob computes E(M^{2^i}) for i = 1, ..., log n: log n rounds, O(n^2 log n) communication.
Bob then computes:
E(Mv)
E(M^3 v | M^2 v) = E(M^2) · E(Mv | v)
E(M^7 v | M^6 v | M^5 v | M^4 v) = E(M^4) · E(M^3 v | M^2 v | Mv | v)
Finally: E(v), E(Mv), ..., E(M^{2n-1} v).
O(log n) rounds, O(n^2 log n) communication.
We are only left with showing how Bob, using the help of Alice, can efficiently compute encryptions of the elements M^i v, i = 0, ..., 2n-1, of the linearly recurrent sequence. He starts by computing M raised to the powers of two, using log n repeated applications of the simple multiplication protocol. He then computes E(Mv), and in each subsequent step Bob multiplies two matrices to double the number of sequence elements computed so far: he multiplies M raised to the next power of two by a matrix that contains a concatenation of all the vectors computed in the preceding rounds. Therefore, the computation of all the sequence elements uses O(log n) matrix multiplications, and is done in O(log n) rounds with O(n^2 log n) communication, as required.
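A plaintext sketch of this doubling trick (the field GF(101), the 2×2 example matrix and the helper names are illustrative choices): each iteration performs one block product and one squaring, so the number of matrix products is logarithmic in the sequence length.

```python
# Hedged sketch: compute [v, Mv, M^2 v, ...] with O(log length) matrix products,
# doubling the number of known sequence elements in every round.
P = 101

def mat_mul(A, B, p=P):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) % p for j in range(cols)]
            for i in range(rows)]

def krylov_sequence(M, v, length, p=P):
    vecs = [v]                                    # known so far: M^0 v
    power = M                                     # current power M^(2^i)
    while len(vecs) < length:
        block = [[vec[r] for vec in vecs] for r in range(len(v))]   # vectors as columns
        new_block = mat_mul(power, block, p)      # M^(2^i) * [v | Mv | ...] in one product
        vecs += [[new_block[r][c] for r in range(len(v))] for c in range(len(vecs))]
        power = mat_mul(power, power, p)          # next power of two (squaring step)
    return vecs[:length]

M = [[1, 2], [3, 4]]
v = [1, 0]
print(krylov_sequence(M, v, 4))   # [v, Mv, M^2 v, M^3 v]
```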

48 Talk Overview
Secure Computation in General
Secure Linear Algebra Based on “Oblivious Gaussian Elimination”
Secure Linear Algebra Based on Linearly Recurrent Sequences
Recent Developments and Open Problems
This concludes the presentation of our second secure protocol.

49 Recent Developments
Protocol from [Mohassel W]. For every constant t:
communication complexity O(n^{2+1/t})
round complexity t
Gives information theoretic security. Based on a reduction to deciding the singularity of Toeplitz matrices.
In a recent joint paper with Payman Mohassel, we have been able to reduce the round complexity to constant and to defend against unlimited adversaries as well. More specifically, for every constant t we get a protocol with t rounds and communication complexity very close to n^2. The protocol is based on a reduction to deciding the singularity of Toeplitz matrices.

50 Open Problems
Secure Linear Algebra: the malicious case for two-party computation.
General Secure Computation: understand the relation between the circuit complexity and the secure-protocol complexity of a problem. Is linear communication complexity always possible?
We end with some open problems. The first is to design an efficient protocol in the malicious two-party setting. The second is to understand the relation between secure computation and circuit complexity; in particular, does every problem have a solution with communication complexity linear in the input size? This concludes my talk.



