
Presentation on theme: "Information Security – Theory vs. Reality, 0368-4474-01, Winter 2011. Lecture 7: Information flow control. Eran Tromer. Slides credit: Max Krohn (MIT), Ian Goldberg and Urs Hengartner (University of Waterloo)." — Presentation transcript:

1 Information Security – Theory vs. Reality, 0368-4474-01, Winter 2011. Lecture 7: Information flow control. Eran Tromer. Slides credit: Max Krohn (MIT), Ian Goldberg and Urs Hengartner (University of Waterloo)

2 Information Flow Control
An information flow policy describes authorized paths along which information can flow.
For example, Bell-La Padula describes a lattice-based information flow policy.

3 Example Lattice
(“Top Secret”, {“Soviet Union”, “East Germany”})
(“Top Secret”, {“Soviet Union”})
(“Secret”, {“Soviet Union”, “East Germany”})
(“Secret”, {“Soviet Union”})
(“Secret”, {“East Germany”})
(“Unclassified”, ∅)

4 Lattices
– The dominance relationship ≥ defined in the military security model is transitive and antisymmetric; therefore, it defines a lattice.
– For two levels a and b, neither a ≥ b nor b ≥ a might hold.
– However, for every a and b there is a least upper bound u for which u ≥ a and u ≥ b, and a greatest lower bound l for which a ≥ l and b ≥ l.
– There are also two elements U and L that dominate/are dominated by all levels.
– In the example: U = (“Top Secret”, {“Soviet Union”, “East Germany”}) and L = (“Unclassified”, ∅).
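The lub and glb of two (classification, compartment-set) labels from the example lattice can be sketched as follows; the Python encoding and the name CLASS_ORDER are illustrative, not part of any real system:

```python
# Labels are (classification, set-of-compartments) pairs, as in the example
# lattice. Dominance: higher classification AND superset of compartments.
CLASS_ORDER = ["Unclassified", "Secret", "Top Secret"]

def lub(a, b):
    """Least upper bound: higher classification, union of compartments."""
    cls = max(a[0], b[0], key=CLASS_ORDER.index)
    return (cls, a[1] | b[1])

def glb(a, b):
    """Greatest lower bound: lower classification, intersection of compartments."""
    cls = min(a[0], b[0], key=CLASS_ORDER.index)
    return (cls, a[1] & b[1])

a = ("Secret", {"Soviet Union"})
b = ("Secret", {"East Germany"})
# Neither a >= b nor b >= a, yet both bounds exist:
assert lub(a, b) == ("Secret", {"Soviet Union", "East Germany"})
assert glb(a, b) == ("Secret", set())
```

Note how incomparable labels still have both bounds, which is exactly what makes this a lattice rather than just a partial order.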

5 Information Flow Control
– The input and output variables of a program each have a (lattice-based) security classification S() associated with them.
– For each program statement, the compiler verifies whether the information flow is secure.
– For example, x = y + z is secure only if S(x) ≥ lub(S(y), S(z)), where lub() is the least upper bound.
– A program is secure if each of its statements is secure.
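The per-statement check can be sketched for a simple totally ordered lattice (integer levels are an assumption made for brevity; on a total order, max() is the lub):

```python
# Sketch of the compile-time check for "x = y + z": the assignment is
# secure only if S(x) dominates lub(S(y), S(z)).
# Levels here are integers: 0 = public ... 2 = top secret.

def dominates(a, b):
    return a >= b

def assignment_secure(s_x, s_y, s_z):
    # max() computes the lub on a totally ordered lattice
    return dominates(s_x, max(s_y, s_z))

print(assignment_secure(2, 1, 0))  # True: a secret variable may receive less-secret data
print(assignment_secure(0, 2, 0))  # False: a public variable may not receive secret data
```

A whole program is then accepted only if this predicate holds for every statement.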

6 Implicit information flow
Conditional branches carry information about the condition:
secret bool a;
public bool b;
…
if (a) { b = 0; } else { b = 1; }   // implicitly, b = !a
Possible solution: assign a label to the program counter and include it in every operation. (Too conservative!)
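The pc-label solution can be sketched as follows; the label-propagation code is an illustrative model, not any real tracker's API:

```python
# Sketch of pc-label tracking for implicit flows. Branching on secret data
# raises the label of the program counter, so every assignment inside the
# branch taints its target -- even though no value of 'a' is copied directly.

def run(a_secret):
    labels = {"a": {"secret"}, "b": set()}
    pc = set()
    pc = pc | labels["a"]                 # the branch condition taints the pc
    if a_secret:
        labels["b"] = labels["b"] | pc    # b = 0, executed under tainted pc
        b = 0
    else:
        labels["b"] = labels["b"] | pc    # b = 1, executed under tainted pc
        b = 1
    return b, labels["b"]

print(run(True))   # (0, {'secret'}): b's value reveals a, and its label records that
```

The conservatism the slide mentions shows up here too: even an assignment like b = 5 in both arms would be tainted, although it leaks nothing.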

7 Issues
– Label creep
– Example: false flows
– Example: branches
– Declassification example: encryption

8 Implementation approaches
Language-based IFC (JIF and others)
– Compile-time tracking for Java, Haskell
– Static analysis
– Provable security
– Label creep (false positives)
Dynamic analysis
– Language / bytecode / machine code
– Tainting
– False positives, false negatives
– Hybrid solutions
OS-based DIFC (Asbestos, HiStar, Flume)
– Run-time tracking enforced by a trusted kernel
– Works with any language, compiled or scripting, closed or open

9 Distributed Information Flow Control
[Diagram: a gateway mediates between a DB holding Bob’s data and Alice’s data and the outside world.]
1. Data tracking
2. Isolated declassification

10 Information Flow Control (IFC)
Goal: track which secrets a process has seen.
Mechanism: each process gets a secrecy label.
– A label is a set of tags; it summarizes which categories of data a process is assumed to have seen.
– Examples: { “Financial Reports” }, { “HR Documents” }, { “Financial Reports”, “HR Documents” }

11 Tags + Labels
Universe of tags: Finance, Legal, SecretProjects, HR
Example: process p starts with S_p = {} and D_p = {}.
– change_label({Finance}); → S_p = { Finance }. Any process can add any tag to its label.
– tag_t HR = create_tag(); → D_p = { HR }.
– change_label({Finance, HR}); → S_p = { Finance, HR }. Same as step 1.
– change_label({Finance}); → declassification in action: p may drop HR because HR ∈ D_p.
– change_label({});
DIFC rule: a process can create a new tag, and gets the ability to declassify it.
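The tag rules above can be sketched like this; the Process class and its methods are an illustrative model of the slide's create_tag/change_label calls, not Flume's real API:

```python
# Sketch of the DIFC tag rules: any process may ADD any tag to its secrecy
# label, but REMOVING a tag (declassification) requires the capability that
# only the tag's creator receives.
import itertools

_tags = itertools.count()

class Process:
    def __init__(self):
        self.S = set()   # secrecy label
        self.D = set()   # declassification capabilities

    def create_tag(self):
        t = next(_tags)
        self.D.add(t)    # the creator may declassify t
        return t

    def change_label(self, new_S):
        removed = self.S - new_S
        # additions are always allowed; removals need the capability
        if removed <= self.D:
            self.S = set(new_S)
            return True
        return False

p = Process()
hr = p.create_tag()
p.change_label({hr})              # raise the label: always allowed
ok = p.change_label(set())        # declassify: allowed, p created hr
q = Process()
q.change_label({hr})              # q may add hr to its own label...
print(ok, q.change_label(set()))  # True False  ...but may not remove it
```

This asymmetry is the whole point: getting tainted is cheap, but shedding taint is a privilege.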

12 DIFC OS
[Diagram: processes p (S_p = {a}) and q (S_q = {} → {a}) run above a trusted kernel; a declassifier process with S = {a} and D = {a} releases Alice’s data outside the system.]

13 Communication Rule
Process p: S_p = { HR }. Process q: S_q = { HR, Finance }.
Rule: p can send to q iff S_p ⊆ S_q.
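The rule is literally a subset test; a minimal sketch using Python sets:

```python
# The communication rule: p may send to q only if p's secrecy label is a
# subset of q's, i.e. q is already tainted with everything p has seen.
def can_send(S_p, S_q):
    return S_p <= S_q   # <= is the subset test on Python sets

print(can_send({"HR"}, {"HR", "Finance"}))   # True: q dominates p
print(can_send({"HR", "Finance"}, {"HR"}))   # False: data would flow downward
```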

14 Question: what interface should the kernel expose to applications?
– An easier question than “is my OS implementation secure?”
– Easy to get wrong!

15 Approach 1: “Floating Labels” [IX, Asbestos]
[Diagram: p (S_p = {a}) sends to q; the kernel silently raises S_q from {} to {a}.]

16 Floaters Leak Data
[Diagram: an attacker with S = {a} leaks Alice’s data one bit at a time through helper processes b_0 … b_3, all starting at S = {}. The attacker sends only to the helpers corresponding to 1-bits, floating their labels to {a}; the helpers left at S = {} can still write to a public leak file, so the taint pattern spells out the secret (e.g., 1001).]
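The attack sketched in the diagram can be modeled in a few lines; the float_receive function is an illustrative model of the kernel's silent label raise, not a real API:

```python
# Sketch of the bit-by-bit leak under floating labels: receiving from a
# tainted sender silently raises the receiver's label, so the PATTERN of
# which helpers got tainted encodes the secret.

def float_receive(receiver_S, sender_S):
    return receiver_S | sender_S         # the kernel floats the label up

secret = [1, 0, 0, 1]                    # Alice's data, known to the attacker (S = {'a'})
helper_labels = [set() for _ in secret]  # helpers b_0..b_3 start untainted

for i, bit in enumerate(secret):
    if bit:                              # attacker sends only for 1-bits
        helper_labels[i] = float_receive(helper_labels[i], {"a"})

# Helpers still at S = {} may write to the public leak file; tainted ones
# may not. Observing who wrote reconstructs the secret exactly:
leaked = [0 if S == set() else 1 for S in helper_labels]
print(leaked)   # [1, 0, 0, 1]
```

No single step violates the communication rule; the leak rides on the side effect of labels changing implicitly.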

17 Approach 2: “Set Your Own” [HiStar/Flume]
[Diagram: p (S_p = {a}, O_p = {a−, a+}) sends to q (O_q = {a+}); q must first raise its own label to S_q = {a}.]
Rule: S_p ⊆ S_q is a necessary precondition for send.
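The "set your own" discipline can be sketched as an explicit label-change check; encoding capabilities as (tag, "+"/"−") pairs is an assumption made for the example:

```python
# Sketch of the HiStar/Flume rule: labels never change implicitly on
# receive. A process may change its OWN label only if it holds the right
# capabilities: t+ to add tag t, t- to remove it.

def change_label_allowed(S_old, S_new, O):
    added = {(t, "+") for t in S_new - S_old}
    removed = {(t, "-") for t in S_old - S_new}
    return (added | removed) <= O        # every change must be authorized

# q holds only a+: it may taint itself with 'a' but never shed it again.
O_q = {("a", "+")}
print(change_label_allowed(set(), {"a"}, O_q))   # True
print(change_label_allowed({"a"}, set(), O_q))   # False

# p holds a+ and a-: it can act as a declassifier for 'a'.
O_p = {("a", "+"), ("a", "-")}
print(change_label_allowed({"a"}, set(), O_p))   # True
```

Because every label change is an explicit, checked system call, the floating-label leak on the previous slide has no silent side effect to exploit.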

18 Case study: Flume
Goal: user-level implementation
– apt-get install flume
Approach:
– System call delegation [Ostia, Garfinkel et al., 2003]
– Uses Linux 2.6 (or OpenBSD 3.9)

19 System Call Delegation
[Diagram: the web app calls open(“/hr/LayoffPlans”, O_RDONLY); glibc passes the call straight to the Linux kernel, which returns the Layoff Plans file.]

20 System Call Delegation
[Diagram: the same call, but Flume’s libc diverts open(“/hr/LayoffPlans”, O_RDONLY) to the Flume reference monitor, which performs the access against the Linux kernel on the web app’s behalf.]
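A toy model of the delegation step: the app's open() reaches a monitor that applies the communication rule before touching the real file. The file-label table and monitor function are illustrative assumptions, not Flume's actual interface:

```python
# Toy reference monitor: reading a file is treated as the file "sending"
# to the process, so the file's label must be a subset of the process's.
FILE_LABELS = {"/hr/LayoffPlans": {"HR"}}   # hypothetical label store

def monitored_open(path, proc_S):
    file_S = FILE_LABELS.get(path, set())
    if file_S <= proc_S:
        return "fd"          # in reality: open the file, pass the fd back over IPC
    raise PermissionError(path)

print(monitored_open("/hr/LayoffPlans", {"HR"}))   # fd
try:
    monitored_open("/hr/LayoffPlans", set())       # untainted app: denied
except PermissionError:
    print("denied")
```

The point of delegation is that this check lives in an unprivileged user-level process, not in a modified kernel.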

21 System Calls
– IPC: read, write, select
– Process management: fork, exit, getpid
– DIFC: create tag, change label, fetch label, send capabilities, receive capabilities

22 How To Prove Security
Property: noninterference

23 Noninterference [Goguen & Meseguer ’82]
[Diagram: Experiment #1 runs p (S_p = {a}, “HIGH”) alongside q (S_q = {}, “LOW”); Experiment #2 replaces p with p′ (S_p′ = {a}). Noninterference holds if q’s observations are the same (=) in both experiments.]
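The two-experiment setup can be sketched directly; the systems below are illustrative stand-ins for "a run with HIGH process p vs. p′":

```python
# Sketch of the noninterference experiment: run the same system twice,
# varying ONLY the HIGH input; a LOW observer must see identical outputs.

def secure_system(high, low):
    return low * 2            # LOW output ignores the HIGH input

def leaky_system(high, low):
    return low + high         # LOW output depends on the HIGH input

def noninterfering(run, high1, high2, low):
    # Experiment #1 vs. Experiment #2: swap p for p', compare what q sees.
    return run(high1, low) == run(high2, low)

print(noninterfering(secure_system, 0, 1, low=21))  # True
print(noninterfering(leaky_system, 0, 1, low=21))   # False
```

A real proof must quantify over all HIGH behaviors and all traces, which is what the CSP model on the next slides is for.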

24 How To Prove Security (cont.)
– Property: noninterference
– A process-algebra model of a DIFC OS: Communicating Sequential Processes (CSP)
– A proof that the model fits the definition

25 Proving noninterference
– Formalism: Communicating Sequential Processes (CSP)
– Consider any possible system (with arbitrary user applications).
– Induction over all possible sequences of moves a system can make (i.e., traces).
– At each step, a case-by-case analysis of all system calls in the interface.
– Prove no “interference”.

26 Results on Flume (example app: MoinMoin Wiki)
Allows adoption of Unix software?
– 1,000 LOC launcher/declassifier
– 1,000 out of 100,000 LOC in MoinMoin changed
– Python interpreter and Apache unchanged
Solves security vulnerabilities?
– Without our knowing, we inherited two ACL bypass bugs from MoinMoin.
– Neither is exploitable in Flume’s MoinMoin.
Performance?
– Within a factor of 2 of the original on read and write benchmarks

27 Flume Limitations
– Bigger TCB than HiStar / Asbestos: the Linux stack (kernel + glibc + linker) plus the reference monitor (~22 kLOC)
– Covert channels via disk quotas
– Confined processes like MoinMoin don’t get the full POSIX API:
  – spawn() instead of fork() & exec()
  – flume_pipe() instead of pipe()

28 DIFC vs. Traditional MAC

                   Traditional MAC                DIFC
Target systems     Military systems               Web-based systems, desktops
Who sets policy    Superiors constraining users   Developers protecting themselves from bugs
Declassification   Centralized policy             Application-specific policies

29 What are we trusting?
– Proof of noninterference
– Platform: hardware, virtual machine manager, kernel
– Reference monitor
– Declassifier, including the human decisions behind it
– Covert channels
– Physical access
– User authentication

30 Practical barriers to deployment of Information Flow Control systems
– The programming model confuses programmers.
– Lack of support: applications, libraries, operating systems
– People don’t care about security until it’s too late.

31 Quantitative information flow control
– Quantify how much information (number of bits) leaks from (secret) inputs to outputs.
– Approach: dynamic analysis (extended tainting) followed by flow-graph analysis.
– Example: the Battleship online game
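A common way to bound such a leak, sketched here with an illustrative battleship-style example (the 1×8 board and single-shot setup are assumptions for brevity, not the analysis from the lecture):

```python
# Sketch of quantitative IFC: if an attacker-observable output can
# distinguish at most N values of the secret, that output leaks at most
# log2(N) bits about the secret.
import math

def bits_leaked(secrets, observe):
    """Upper bound on leakage: log2 of the number of distinct outputs."""
    return math.log2(len({observe(s) for s in secrets}))

# Battleship-style example: the secret is the ship's square on a 1x8 row
# (3 bits of secret); the public output is the hit/miss reply to one shot.
secrets = list(range(8))
observe = lambda ship: (ship == 3)     # reply to a single shot at square 3

print(bits_leaked(secrets, observe))   # 1.0: one probe leaks at most 1 bit
```

This is why such games are a natural testbed: each reply is supposed to leak a little, and the analysis checks that nothing leaks more than the game rules allow.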
