KNOWLEDGE-ORIENTED MULTIPARTY COMPUTATION Piotr (Peter) Mardziel, Michael Hicks, Jonathan Katz, Mudhakar Srivatsa (IBM TJ Watson)



Secure multi-party computation. Multiple parties have secrets to protect; they want to compute some function over their secrets without revealing them. One party holds x1, the other holds x2; together they compute Q1(x1, x2), which outputs True or False. Q1 = if x1 ≥ x2 then out := True else out := False

Secure multi-party computation. One approach: use a trusted third party T. Each party sends its secret to T, which computes Q1(x1, x2) and announces the result (here, True). Q1 = if x1 ≥ x2 then out := True else out := False

Secure multi-party computation. SMC lets the participants compute the same result without a trusted third party. Q1 = if x1 ≥ x2 then out := True else out := False

Secure multi-party computation. Nothing is learned beyond what is implied by the query output, but what is implied can still be a lot. Assume it is publicly known that 10 ≤ x1, x2 ≤ 100. Then Q1(10, x2) = True implies x2 = 10. Q1 = if x1 ≥ x2 then out := True else out := False
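The inference on this slide can be checked mechanically: given the public range, the set of x2 values consistent with the observed output collapses to a single value. A minimal sketch (the query and ranges come from the slides; the variable names are illustrative):

```python
def q1(x1, x2):
    # Q1 = if x1 >= x2 then out := True else out := False
    return x1 >= x2

# Publicly known: 10 <= x1, x2 <= 100. Suppose A1's input is x1 = 10.
consistent = [x2 for x2 in range(10, 101) if q1(10, x2) is True]
# Only x2 = 10 is consistent with output True, so the output reveals x2 exactly.
```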

Our goal. Make sure what is implied is not too much: model knowledge, and model inference. Q1 = if x1 ≥ x2 then out := True else out := False

This talk. Secure multiparty computation; knowledge-based security, first in a simpler setting, then for SMC; evaluation.

Knowledge in a simpler setting

Knowledge in a simpler setting. Only one party, A2, has a secret to protect, so there is no need for SMC. A1 holds x1 = 80; A2 holds x2 = 60; A1 learns Q1(x1, x2) = True. Q1 = if x1 ≥ x2 then out := True else out := False

Knowledge in a simpler setting. A2 imposes a limit on knowledge about x2. A1 holds x1 = 80; A2 holds x2 = 60. A1's "(prior) belief" is δ1: 10 ≤ x2 ≤ 100. Observing out = True triggers the "revision" δ1 | (out = True), yielding the "revised belief" δ'1: 10 ≤ x2 ≤ 80. Q1 = if x1 ≥ x2 then out := True else out := False

Knowledge in a simpler setting. A2 imposes a limit on knowledge about x2. After the query, A1's revised belief is δ'1: 10 ≤ x2 ≤ 80. "Knowledge-based" policy: |δ'1| = 71 ≥ t, i.e., at least t possible values of x2 must remain. (A1 holds x1 = 80; A2 holds x2 = 60.) Q1 = if x1 ≥ x2 then out := True else out := False
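The revision and the size-based policy above can be sketched concretely: belief is a set of possible x2 values, revision keeps only values consistent with the observed output, and the policy requires that enough values remain. A minimal sketch (the query and ranges are from the slides; the threshold t and the names are illustrative):

```python
def q1(x1, x2):
    # Q1 from the slides: out = (x1 >= x2)
    return x1 >= x2

prior = set(range(10, 101))                           # delta_1: 10 <= x2 <= 100
# A1 observes Q1(80, x2) = True; revise by conditioning on the output.
revised = {x2 for x2 in prior if q1(80, x2) is True}  # delta'_1: 10 <= x2 <= 80

t = 50  # illustrative threshold
policy_ok = len(revised) >= t                         # |delta'_1| = 71 >= t
```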

Knowledge in a simpler setting. Non-deterministic queries. Q'1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := True. (A1 holds x1 = 80; A2 holds x2 = 60.)

Knowledge in a simpler setting. Non-deterministic queries. Prior: δ1(x2) = 1/91 for 10 ≤ x2 ≤ 100. After observing out = True: δ'1(x2) = 2/162 for 10 ≤ x2 ≤ 80, and δ'1(x2) = 1/162 for 81 ≤ x2 ≤ 100. (A1 holds x1 = 80; A2 holds x2 = 60.) Q'1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := True
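The posterior probabilities on this slide follow from a Bayesian update over the randomized query; a sketch using exact fractions (the 1/2 coin, ranges, and x1 = 80 are from the slides; the names are illustrative):

```python
from fractions import Fraction

def p_true(x1, x2):
    # P(out = True | x1, x2) under Q'1: deterministic comparison,
    # then a coin that forces out := True with probability 1/2.
    return Fraction(1) if x1 >= x2 else Fraction(1, 2)

prior = {x2: Fraction(1, 91) for x2 in range(10, 101)}   # delta_1
evidence = sum(prior[x2] * p_true(80, x2) for x2 in prior)
posterior = {x2: prior[x2] * p_true(80, x2) / evidence for x2 in prior}
# posterior[x2] == 2/162 for x2 <= 80 and 1/162 for x2 > 80, as on the slide.
```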

Knowledge in a simpler setting. Policy. Q'1(80, 60) = True; δ1 | (out = True) = δ'1, with δ'1(x2) = 2/162 for 10 ≤ x2 ≤ 80 and δ'1(x2) = 1/162 for 81 ≤ x2 ≤ 100. Candidate policy ("belief threshold"): δ'1(x2) ≤ t2 for every x2. Q'1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := True

Knowledge in a simpler setting. Policy. Q'1(80, 60) = True; δ1 | (out = True) = δ'1, with δ'1(x2) = 2/162 for 10 ≤ x2 ≤ 80 and δ'1(x2) = 1/162 for 81 ≤ x2 ≤ 100. Candidate policy ("belief threshold"): δ'1(x2) ≤ t2 for every x2 and for every output o in the range of Q'1(80, ·), where δ'1 = δ1 | (out = o). Q'1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := True

Knowledge in a simpler setting. Policy. Define "max belief" = max over δ', x of δ'(x), where δ' = δ1 | (out = o) for some output o; this considers both δ1 | (out = True) and δ1 | (out = False). Policy ("(max) belief threshold"): P(Q'1, x1 = 80, δ1, t) = max belief ≤ t. If the check succeeds, run Q'1(80, 60) = True and track δ1 | (out = True). Q'1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := True
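The max-belief policy can be evaluated by conditioning the prior on each possible output and taking the largest resulting probability; a self-contained sketch for Q'1 with x1 = 80 (the query, ranges, and prior are from the slides; the threshold t is illustrative):

```python
from fractions import Fraction

def p_out(x1, x2, o):
    # P(out = o | x1, x2) under Q'1 (comparison, then a coin that
    # forces out := True with probability 1/2).
    pt = Fraction(1) if x1 >= x2 else Fraction(1, 2)
    return pt if o else 1 - pt

prior = {x2: Fraction(1, 91) for x2 in range(10, 101)}   # delta_1

def max_belief(x1):
    worst = Fraction(0)
    for o in (True, False):
        ev = sum(prior[x2] * p_out(x1, x2, o) for x2 in prior)
        if ev == 0:
            continue  # output o cannot occur; nothing to condition on
        worst = max(worst,
                    max(prior[x2] * p_out(x1, x2, o) for x2 in prior) / ev)
    return worst

t = Fraction(1, 10)             # illustrative threshold
accept = max_belief(80) <= t    # the check P(Q'1, x1=80, delta_1, t)
```

Here out = False is the riskier output: it concentrates belief on the 20 values 81..100, giving max belief 1/20, which still passes the illustrative threshold.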

Knowledge in a simpler setting. Over time, A2 maintains a representation of A1's belief. Assumption: the initial belief is accurate. Query Q1 revises δ to δ' (after out = True); a later query Q2 would revise δ' to δ'', and is rejected if it would violate the policy.

PL? Theory of Clarkson et al. Model knowledge as a probability distribution δ. Assumption: δ is the agent's actual knowledge. Model a rational agent learning from query outputs, using probabilistic program semantics and revision: δ' = ([[S]] δ) | (out = True). Implementation: work with an abstraction P with δ ∊ γ(P), computing P' = ([[S]] P) | (out = True). Sound: δ ∊ γ(P) implies δ' ∊ γ(P'). The abstraction is resistant to state-space size, even when |support(δ)| is very large. Abstract policy to limit knowledge: max-belief ≤ t. Sound: max-belief(P) ≤ t implies max-belief(δ) ≤ t.

Knowledge in the SMC setting

Knowledge in the SMC setting. All parties want to protect their secret. A1 holds x1 = 80; A2 holds x2 = 60; they compute Q1(x1, x2).

Knowledge in the SMC setting. All parties want to protect their secret. A1 holds x1 = 80; A2 holds x2 = 60; the result Q1(x1, x2) = True is revealed to both.

Knowledge in the SMC setting. Assumption: common knowledge/belief. δ(x1, x2) = 1/8281 for 10 ≤ x1, x2 ≤ 100 (uniform over the 91 × 91 grid).

Knowledge in the SMC setting. Assumption: each initial belief is derived from the common knowledge, revised by the party's own secret value. A1's belief: δ | (x1 = 80) = δ1^80, with δ1^80(x2) = 1/91 for 10 ≤ x2 ≤ 100. A2's belief: δ | (x2 = 60) = δ2^60, with δ2^60(x1) = 1/91 for 10 ≤ x1 ≤ 100.

Belief sets. A2 considers all possible values of x1 (10 ≤ x1 ≤ 100): δ1^10 = δ | (x1 = 10), δ1^11 = δ | (x1 = 11), ..., δ1^100 = δ | (x1 = 100). (A2 holds x2 = 60.)

Belief sets. A2 considers all possible values of x1, collected into the belief set Δ1 = { δ1^x }.

Belief sets. A2 conservatively enforces the max-belief threshold: for every hypothetical value x1 = 10, 11, ..., 80, ..., running Q revises δ1^x1 to δ'1^x1, and every revised belief must satisfy max belief ≤ t.

Belief sets. A2 maintains the belief set Δ1 = { δ1^x } for 10 ≤ x1 ≤ 100, checked against its policy P2; A1 does the same with Δ2 and policy P1. The trusted party T computes Q1(x1, x2) = True, after which the sets are revised over time: Δ'1 = { δ1^x | (out = True) }, and similarly Δ'2.

Belief sets. Very conservative. With x1 = 80: δ1^80(x2) = 1/91 for 10 ≤ x2 ≤ 100; after out = True, δ'1^80(x2) = 1/71 for 10 ≤ x2 ≤ 80. With x1 = 10: δ1^10(x2) = 1/91 for 10 ≤ x2 ≤ 100; after out = True, δ'1^10(x2) = 1 for x2 = 10. Q1 = if x1 ≥ x2 then out := True else out := False
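The conservatism is visible if A2 runs the max-belief check across every member of the belief set: a single hypothetical value of x1 (here x1 = 10) already drives max belief to 1, so the query is rejected for everyone. A sketch for the deterministic Q1 (ranges from the slides; names illustrative):

```python
def q1(x1, x2):
    return x1 >= x2  # Q1 from the slides

prior = {x2: 1.0 / 91 for x2 in range(10, 101)}  # each delta_1^x1, over x2

def max_belief_for(x1):
    # Largest posterior probability of any x2, over both possible outputs.
    worst = 0.0
    for o in (True, False):
        ev = sum(p for x2, p in prior.items() if q1(x1, x2) is o)
        if ev > 0:
            worst = max(worst, max(p / ev for x2, p in prior.items()
                                   if q1(x1, x2) is o))
    return worst

# Belief set: A2 considers every plausible x1 in 10..100 and takes the worst case.
set_max_belief = max(max_belief_for(v) for v in range(10, 101))
# For x1 = 10, out = True pins x2 = 10 exactly, so set_max_belief reaches 1.
```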

Belief sets. Expensive in computation and representation; abstraction might help. Have: γ(P) = { δ }. Can do: γ(P) ⊇ { δ | (x1 = v) : 10 ≤ v ≤ 100 }. Would also like: γ(P) ≈ { δ | (x1 = v) : 10 ≤ v ≤ 100 }.

Different approach: knowledge tracking via SMC

Knowledge tracking via SMC. SMC acts as the "trusted third party": A1 holds x1, A2 holds x2, and Q1(x1, x2) = True is computed. Q1 = if x1 ≥ x2 then out := True else out := False

Knowledge tracking via SMC. Use the trusted third party for knowledge tracking and policy checking as well; the policy check runs on the actual belief, instead of conservatively over all plausible beliefs. Starting from the common belief δ, A1's belief is δ1 = δ | (x1 = 80) and A2's is δ2 = δ | (x2 = 60). T checks P1(δ2, ...) ∧ P2(δ1, ...); if the check passes, it evaluates Q1(x1, x2) = True, releases the result, and revises the tracked beliefs to δ'1 and δ'2 over time.

Knowledge tracking via SMC. Problem 2: the policy decision itself leaks information. T checks P1(δ2, ...) ∧ P2(δ1, ...) and answers Reject; the beliefs δ1 and δ2 are left unrevised, but the rejection is itself an observable outcome.

Knowledge tracking via SMC. Agents trust the "trusted third party" to enforce their policies: T evaluates P1(δ2, ...) and P2(δ1, ...) internally, rejecting or accepting on each agent's behalf, and revises the tracked beliefs accordingly (here, Accept returns True and revises δ'2).

Knowledge tracking via SMC. Knowledge tracking within SMC is more permissive than belief sets, but leaves unsatisfying uncertainty about one's own policy decisions. And "SMC is 1000 times slower than normal computation"; this is an active research area (getting better).

Comparison and Examples

Millionaires. Three parties A1, A2, A3, each with a secret input. Q1 = if x1 ≥ x2 && x1 ≥ x3 then out := True else out := False. (Plot: max belief.)

Reduce precision. similar_w = avg := (x1 + x2 + x3)/3; if |x1 - avg| ≤ w && |x2 - avg| ≤ w && |x3 - avg| ≤ w then out := True else out := False. (Plot: max belief.)

Introduce noise. richest_p = out := 0; if x1 > x2 && x1 > x3 then out := 1; if x2 > x1 && x2 > x3 then out := 2; if x3 > x1 && x3 > x2 then out := 3; if rand() < p then out := uniform(0,1,2,3). (Plot: max belief.)
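The effect of the noise parameter p can be quantified with the same max-belief computation: from A1's perspective, enumerate the unknown pair (x2, x3), condition on each possible output, and take the worst posterior probability. A minimal sketch (the query and ranges are from the slides; the choice x1 = 80 and the values of p are illustrative):

```python
from fractions import Fraction

def p_out(x1, x2, x3, o, p):
    # P(out = o | inputs) for richest_p: deterministic winner (0 means no
    # strict winner), then with probability p the output is replaced by a
    # uniform draw from {0, 1, 2, 3}.
    if x1 > x2 and x1 > x3:
        det = 1
    elif x2 > x1 and x2 > x3:
        det = 2
    elif x3 > x1 and x3 > x2:
        det = 3
    else:
        det = 0
    base = Fraction(1) - p if o == det else Fraction(0)
    return base + p * Fraction(1, 4)

def max_belief(x1, p):
    vals = range(10, 101)
    prior = Fraction(1, 91 * 91)          # uniform over (x2, x3)
    worst = Fraction(0)
    for o in (0, 1, 2, 3):
        ev = sum(prior * p_out(x1, x2, x3, o, p) for x2 in vals for x3 in vals)
        if ev == 0:
            continue
        worst = max(worst, max(prior * p_out(x1, x2, x3, o, p)
                               for x2 in vals for x3 in vals) / ev)
    return worst

# More noise lowers the max belief about (x2, x3).
noisy = max_belief(80, Fraction(1, 2))
exact = max_belief(80, Fraction(0))
```

With no noise the most revealing output for x1 = 80 is the "no strict winner" case, which narrows (x2, x3) to 161 pairs; adding the coin spreads probability over all outputs and pushes the worst-case posterior down.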

Summary + conclusions

Knowledge-Oriented Multiparty Computation. SMC: agents do not learn beyond what is implied by the query. Our work: agents limit what can be inferred. Two approaches with differing (dis)advantages. Ongoing work in PL and crypto for tractability.

Knowledge in the SMC setting. Each other's secret is unknown, but lies in some initially known set: A1 holds x1 = 80 and knows 10 ≤ x2 ≤ 100; A2 holds x2 = 60 and knows 10 ≤ x1 ≤ 100.

Knowledge tracking via SMC. Use the trusted third party for knowledge tracking and policy checking; the policy check runs on the actual belief, instead of conservatively over all plausible beliefs. Common prior: δ(x1, x2) = 1/8281 for 10 ≤ x1, x2 ≤ 100. T holds δ1 = δ | (x1 = 80) and δ2 = δ | (x2 = 60), checks P1(δ2, ...) ∧ P2(δ1, ...), computes Q1(80, 60) = True, and revises, e.g., δ2 | (out = True).

Knowledge tracking via SMC. Problem 1: agents cannot be trusted to provide their true beliefs, and they cannot be trusted to look at each other's beliefs either. T holds δ1 = δ | (x1 = 80) and δ2 = δ | (x2 = 60), checks P1(δ2, ...) ∧ P2(δ1, ...), computes Q1(80, 60) = True, and revises δ2 | (out = True).

Knowledge in a simpler setting. Simulatable policy. Q''1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := x2. (A1 holds x1 = 80; A2 holds x2 = 60.)

Knowledge in a simpler setting. Simulatable policy. Prior: δ1(x2) = 1/91 for 10 ≤ x2 ≤ 100. Observing out = True gives δ'1(x2) = 1/71 for 10 ≤ x2 ≤ 80. Q''1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := x2

Knowledge in a simpler setting. Simulatable policy. Prior: δ1(x2) = 1/91 for 10 ≤ x2 ≤ 100. Observing out = 60 gives δ'1(x2) = 1 for x2 = 60. Q''1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := x2

Knowledge in a simpler setting. Simulatable policy. "Max belief" = max over δ', x of δ'(x), taken over all revisions δ1 | (out = True), δ1 | (out = False), δ1 | (out = 60), and so on. Since the output out = 60 reveals x2 exactly, max belief = 1, so the policy max belief ≤ t fails for any t < 1. Q''1 = if x1 ≥ x2 then out := True else out := False; if rand() < 0.5 then out := x2
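The failure of the max-belief check on Q''1 can be reproduced directly: the out := x2 branch has outputs that pin x2 exactly, so max belief reaches 1 regardless of the prior. A sketch (the query and ranges are from the slides; the names are illustrative):

```python
from fractions import Fraction

def p_out(x1, x2, o):
    # P(out = o | x1, x2) under Q''1: comparison result, then a coin that
    # with probability 1/2 replaces the output with x2 itself.
    det = x1 >= x2
    p = Fraction(1, 2) if o == det else Fraction(0)
    if o == x2:
        p += Fraction(1, 2)
    return p

prior = {x2: Fraction(1, 91) for x2 in range(10, 101)}
outputs = [True, False] + list(range(10, 101))  # range of Q''1

def max_belief(x1):
    worst = Fraction(0)
    for o in outputs:
        ev = sum(prior[x2] * p_out(x1, x2, o) for x2 in prior)
        if ev > 0:
            worst = max(worst,
                        max(prior[x2] * p_out(x1, x2, o) for x2 in prior) / ev)
    return worst
# Any numeric output o = v pins x2 = v, so max_belief(80) is 1.
```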

Belief sets. What A1 learns depends on x1. Q1 = if x1 ≥ x2 && x1 ≥ x3 then out := True else out := False. (Plot: max belief threshold as a function of x1.)

Belief sets. The conservative policy-check approach can still allow non-trivial thresholds for some queries. (Plot: max belief threshold as a function of x1.)

Knowledge tracking via SMC. Agents cannot be trusted to provide their true beliefs, and A1's belief cannot be tracked or known by A2, or vice versa. Example (x1 = 80): δ1(x2) = 1/91 for 10 ≤ x2 ≤ 100; after out = True, δ'1(x2) = 1/71 for 10 ≤ x2 ≤ 80. Q1 = if x1 ≥ x2 then out := True else out := False

Knowledge tracking via SMC. The policy decision leaks information. A2 holds x2 = 60 with threshold t2 = 0.5; A1's prior is δ1(x2) = 1/91 for 10 ≤ x2 ≤ 100. The query is rejected because δ'1(x2) > 0.5 for some x2, and observing the rejection itself lets A1 refine its belief. Q1 = if x1 ≥ x2 then out := True else out := False