1 Reasoning about Information Leakage and Adversarial Inference. Matt Fredrikson.

2 Records driving using accelerometer and GPS. Rates safety based on acceleration, braking, and cornering. Overlays traffic alerts on a map.

4 43.0731° N, 89.4011° W

5 42.0517° N, 89.2399° W; 43.1021° N, 89.6187° W. Goal: don't make any user's location more guessable.

7 43.0731° N, 89.4011° W (Area Code: 608)

8 Two adversaries: the back-end server and the end-user. Both see continuous streams of sensor-based data. The back-end sees raw data; its goal is to infer driving behavior. The end-user sees aggregate data; its goal is to infer others' locations. Assume all algorithms are public.

9 Solution #1: release coarse data. Solution #2: do more computations locally (may come with a performance tradeoff). Both are difficult to verify; we need a way of reasoning about partial leakage.

10 (Diagram: a monolithic application vs. a privilege-separated application built from httpd, a worker, a declassifier, and a back-end database.)

11 (Diagram: httpd, worker, declassifier, and database running as separated components.)

12 Writing differentially-private declassifiers is challenging: real systems have many side channels (even ignoring timing), and sampling continuous distributions with finite precision is not easy. Using differentially-private sanitizers is also hard: the user must work out the privacy budget, selecting the correct privacy parameter is a subtle matter, and the core sanitizer algorithm must be verified. Goal: provide automated developer support.
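
To make the finite-precision point concrete, here is a minimal Python sketch of a Laplace-mechanism sanitizer (illustrative names and parameters, not the deck's code). The float-based sampler below is exactly the kind of implementation where finite precision bites: the set of reachable outputs depends on the true answer, so low-order bits can leak.

    import math
    import random

    def laplace_sample(scale):
        # Inverse-CDF Laplace sampler. Correct over the reals, but with
        # IEEE-754 floats the reachable outputs form a value-dependent
        # set, which is the finite-precision hazard the slide mentions.
        u = random.random() - 0.5
        if u == -0.5:            # avoid log(0) at the measure-zero edge
            u = 0.0
        return -math.copysign(scale, u) * math.log(1.0 - 2.0 * abs(u))

    def sanitize(true_answer, sensitivity=1.0, epsilon=0.1):
        # epsilon-DP on paper; a hardened version would also snap or
        # round the released output to a fixed grid.
        return true_answer + laplace_sample(sensitivity / epsilon)

    print(sanitize(42.0))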

13 (Diagram: insecure program + policy → policy weaver → secure program; example program: httpd, example policy: differential privacy.) The weaver helps allocate the privacy budget and ensures that sanitizers are placed correctly.

14 Assume the adversary can control the worker process: it can send arbitrary queries to the back-end database and observes the output of the declassifier process. Its goal: violate differential privacy on the back-end database.

15 We need to reason about finite-precision implementations of sanitizers, and to ensure sanitizers are correctly used for the higher-level policy. These goals are difficult to accomplish under traditional models of information flow. Idea: help users write code that uses the primitives correctly.

16 (Diagram: the semantics Sem relates a program P, an input distribution D[x], and the adversary's observation functions µ.)
L1 := 0; L2 := 1;
while ¬(L1 = L2) do
    L3 := L1 + L2;
    L1 := L2;
    L2 := L3;
H := L3

17 µ : Observations → Feasible States. (Example program fragment: x := y; p := q+1; l := h.) Different choices of µ can represent different observational models and different computational abilities; see the sketch below.
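
As an illustration of what a µ computes (my sketch, reusing the deck's later example program w := h; z := w+1; x := z and an assumed finite secret domain):

    # mu maps what the adversary observes (the final value of x) to the
    # set of feasible secret H-states that could have produced it.
    H_STATES = range(8)          # assumed finite domain for the secret h

    def run(h):
        w = h                    # w := h
        z = w + 1                # z := w + 1
        x = z                    # x := z   (low-observable)
        return x

    def mu(observation):
        return {h for h in H_STATES if run(h) == observation}

    print(mu(5))                 # {4}: observing x = 5 pins h down exactly

A weaker adversary, say one that only sees whether x is even, would induce a coarser µ; that is how different observational models are represented.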

18 Privacy policies bound the adversary's knowledge about the initial state: a policy Φ is a set of initial state distributions. (Example program: w := h; z := w+1; x := z.) Dynamic compliance (private for a single starting state): P ⊨σ,D,ρ Φ. Static compliance (private for all starting states): P ⊨ Φ.

19 A randomized function K gives ε-differential privacy if for all databases D1, D2 satisfying a neighbor relation N(D1, D2), and all S ⊆ Range(K): Pr[K(D1) ∈ S] ≤ exp(ε) × Pr[K(D2) ∈ S]. Contrast non-interference, which requires D1|L = D2|L ⇒ Pr[K(D1) ∈ S] = Pr[K(D2) ∈ S]. Restated over inputs: Φ(D) ≡ ∀ σ, σ' ∈_R D. N(σ, σ') ⇒ Pr[σ] ≤ exp(ε) × Pr[σ']. (Draw σ, σ' using the coin flips of D; impose the neighbor relation; bound the occurrence probabilities.)
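
As a concrete instance of the inequality (my example, not the deck's): one-bit randomized response with a 3/4 truth probability satisfies ε-DP for ε = ln 3, and the bound can be checked directly.

    import math

    # Randomized response on one secret bit: report the truth with
    # probability 3/4, lie with probability 1/4.
    def pr_output(secret_bit, reported_bit):
        return 0.75 if reported_bit == secret_bit else 0.25

    # Neighboring "databases" are the two values of the bit; check
    # Pr[K(D1) = s] <= exp(eps) * Pr[K(D2) = s] for eps = ln 3.
    eps = math.log(3)
    for s in (0, 1):
        assert pr_output(0, s) <= math.exp(eps) * pr_output(1, s)
        assert pr_output(1, s) <= math.exp(eps) * pr_output(0, s)
    print("randomized response satisfies eps =", eps)

Note the bound is tight: Pr = 0.75 against exp(ln 3) × 0.25 = 0.75.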

20 (Table: Path ⊆ Valid, compared across dynamic analysis, static analysis, and policy weaving.) Proof-of-concept implementation: based on CEGAR model checking; reduces probability formulas to model counting, as sketched below.
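
A toy illustration of the reduction (a sketch assuming uniformly distributed finite inputs; real implementations count integer points symbolically rather than enumerating):

    from itertools import product

    def probability(phi, bits=8):
        # Over a uniform distribution on bit-vectors, Pr[phi] reduces to
        # model counting: |{s : phi(s)}| / 2^bits.
        states = list(product((0, 1), repeat=bits))
        return sum(1 for s in states if phi(s)) / len(states)

    # Probability that the low nibble of an 8-bit secret is all zeros.
    print(probability(lambda s: all(b == 0 for b in s[4:])))  # 0.0625

Per slide 21, the real stack replaces this brute-force enumeration with libbarvinok's polyhedral counting inside Z3.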

21 Tool stack: Z3 extended with a theory of counting (via libbarvinok) and a theory of discrete probabilities (via Mathematica, for nonlinear counting). Excellent performance on realistic benchmarks: (give numbers). [LICS 2013]

22 (Diagram: source code → abstract model → model checker → result, with predicate refinement feeding back into the abstraction and a runtime policy check on the verified program; sketched below.) Must address the problem of learning single-vocabulary predicates. Predicates might make probability statements. Direction: always learn predicates that explain in terms of the adversary's observations.
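
A skeleton of the loop in the diagram (all helper functions here are hypothetical placeholders, not the system's API):

    def policy_weave(source, policy):
        predicates = set()
        while True:
            model = build_abstract_model(source, predicates)  # abstract model
            result = model_check(model, policy)               # model checker
            if result.safe:
                # Verified: insert any remaining runtime policy checks.
                return add_runtime_checks(source, policy)
            if is_real_counterexample(result.trace, source):
                return result.trace                           # genuine leak
            # Spurious trace: refine with new predicates, preferably ones
            # phrased in terms of the adversary's observations.
            predicates |= learn_predicates(result.trace)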

23 (Diagram: the semantics of language L and an enforcement semantics E feed a policy-weaver generator, which emits an L-E policy weaver; the weaver transforms an insecure program plus a policy into a secure program.) The policy-language semantics is extended with very general information-flow constructs. Exploring integration with privilege-aware operating systems. Developed a new type of policy based on adversarial knowledge that can enforce classic information flow as well as new partial notions. Weaving is supported by extending the JAM framework.

24 Backup Slides

25 The ideal, most precise path oracle sees all writes to L-variables and infers all feasible H-states:
µ_precise(ρ) = { σ_H : ∃ σ', P'. ⟨σ, ∅, P⟩ ⇒* ⟨σ', ρ, P'⟩ ∧ Pr_D_init[σ] > 0 }
Read: return the H-variable portion of every σ for which some state σ' and program counter P' exist such that executing P from σ under the standard semantics yields ⟨σ', ρ, P'⟩, and σ is assigned non-zero probability in the initial distribution.
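
A brute-force rendering of µ_precise for a finite toy program (my sketch; a real implementation would evaluate the definition symbolically):

    # Toy program: w := h; z := w + 1; x := z. The trace rho records all
    # writes to L-variables (here, the single write to x).
    def trace(h):
        w = h
        z = w + 1
        x = z
        return (x,)

    def mu_precise(rho, init_prob):
        # { sigma_H : run from sigma produces rho, Pr_init[sigma] > 0 }
        return {h for h, p in init_prob.items()
                if p > 0 and trace(h) == rho}

    uniform = {h: 1.0 / 8 for h in range(8)}
    print(mu_precise((3,), uniform))   # {2}: only h = 2 yields L-trace (3,)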

26 Both executions must exhibit the same L-state behavior:
Φ(D) ≡ ∀ σ, σ' ∈_R D. σ_L = σ'_L ⇒ Pr[σ_H] = Pr[σ'_H]
The adversary must not see anything that makes the current H-state seem more probable. (Example program: w := h; z := w+1; x := z.)
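
A small check of this policy (illustrative, assuming a uniform prior): compute the adversary's posterior over H after an observation and compare it with the prior.

    def run(h):
        return h + 1             # w := h; z := w + 1; x := z (observe x)

    def posterior(h_states, obs):
        # Bayesian update of a uniform prior given the observed x.
        feasible = [h for h in h_states if run(h) == obs]
        return {h: (1.0 / len(feasible) if h in feasible else 0.0)
                for h in h_states}

    prior = 1.0 / 8
    post = posterior(range(8), run(3))
    # The policy demands the posterior equal the prior; here it does
    # not, so the program leaks h (x = h + 1 reveals it completely).
    print(all(abs(p - prior) < 1e-9 for p in post.values()))   # False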

27 while request != null do
    x := AggregateDatabase(h, request);
    l := Sanitize(x);
Here h is the high-security input and l the low-security output. The privacy budget is good for only one query, yet the loop can answer arbitrarily many.

28 transcript_length := 0;
while request != null do
    x := AggregateDatabase(h, request);
    if transcript_length < 1 then
        l := Sanitize(x);
        transcript_length++;
Runtime checks prevent privacy-budget overflow. Privacy condition: length(ρ) = 1.
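
The same guard in runnable form (an illustrative sketch of a privacy-budget accountant; class and method names are mine, not the system's):

    class BudgetAccountant:
        def __init__(self, total_epsilon):
            # Total epsilon available across the whole transcript.
            self.remaining = total_epsilon

        def spend(self, epsilon):
            # Refuse any query that would overflow the budget.
            if epsilon > self.remaining:
                raise RuntimeError("privacy budget exhausted")
            self.remaining -= epsilon

    budget = BudgetAccountant(total_epsilon=0.1)
    budget.spend(0.1)      # first sanitized release: allowed
    # budget.spend(0.1)    # a second release would raise: budget overflow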

29 Proof sketch. Assume the knowledge bound Pr[σ | Output = S] ≤ exp(ε) × Pr[σ' | Output = S], and suppose for contradiction that Pr[P(σ) = S] > exp(ε) × Pr[P(σ') = S]. P is deterministic, so Pr[P(σ) = S] ∈ {0, 1} (likewise for P(σ')); the strict inequality then forces Pr[P(σ') = S] = 0 and Pr[P(σ) = S] = 1. But Pr[P(σ') = S] = 0 gives Pr[σ' | Output = S] = 0, hence Pr[σ | Output = S] = 0 by the assumed bound: a contradiction, because Pr[P(σ) = S] = 1 means σ is feasible on observing Output = S.

