Information Security – Theory vs. Reality (0368-4474-01), Winter 2011. Lecture 7: Information flow control. Eran Tromer. Slides credit: Max Krohn (MIT); Ian Goldberg and Urs Hengartner (University of Waterloo).

Information Flow Control. An information flow policy describes the authorized paths along which information can flow. For example, Bell-LaPadula defines a lattice-based information flow policy.

Example Lattice. Levels are pairs of (classification, set of compartments), e.g.:
(“Top Secret”, {“Soviet Union”, “East Germany”})
(“Top Secret”, {“Soviet Union”})
(“Secret”, {“Soviet Union”, “East Germany”})
(“Secret”, {“Soviet Union”})
(“Secret”, {“East Germany”})
(“Unclassified”, ∅)

Lattices. The dominance relation ≥ defined in the military security model is reflexive, transitive, and antisymmetric, i.e., a partial order. For two levels a and b, neither a ≥ b nor b ≥ a may hold. However, for every a and b there is a least upper bound u with u ≥ a and u ≥ b, and a greatest lower bound l with a ≥ l and b ≥ l, so the levels form a lattice. There are also two elements U and L that dominate / are dominated by all levels. In the example, U = (“Top Secret”, {“Soviet Union”, “East Germany”}) and L = (“Unclassified”, ∅).

Information Flow Control. The input and output variables of a program each have a (lattice-based) security classification S() associated with them. For each program statement, the compiler verifies whether the information flow is secure. For example, x = y + z is secure only if S(x) ≥ lub(S(y), S(z)), where lub() is the least upper bound. A program is secure if each of its statements is secure.
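To make the per-statement check concrete, here is a minimal Python sketch (not from the original slides; the two-point lattice and the function names are illustrative) of the compiler-style check for an assignment such as x = y + z:

```python
# Hypothetical illustration of the per-statement check, using a simple
# two-level lattice LOW <= HIGH. Labels could equally be the
# (classification, compartments) pairs from the earlier slides.
LOW, HIGH = 0, 1

def lub(*labels):
    """Least upper bound of the given labels (here simply the maximum)."""
    return max(labels)

def assignment_is_secure(target_label, source_labels):
    """x = f(y, z, ...) is secure only if S(x) >= lub(S(y), S(z), ...)."""
    return target_label >= lub(*source_labels)

# public x = secret y + public z  -> rejected
print(assignment_is_secure(LOW, [HIGH, LOW]))   # False
# secret x = secret y + public z  -> accepted
print(assignment_is_secure(HIGH, [HIGH, LOW]))  # True
```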

Implicit information flow. Conditional branches carry information about the condition; the branch below leaks a into b just as an explicit assignment b = a would:

secret bool a;
public bool b;
…
if (a) { b = 0; } else { b = 1; }

Possible solution: assign a label to the program counter and include it in every operation. (Too conservative!)
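One way to see why the program-counter label is needed: a minimal sketch of a hypothetical dynamic monitor (my own illustration, not a real tool) that joins the pc label into every assignment made under a branch on secret data.

```python
# Hypothetical dynamic monitor: every value carries a label, and the label of
# the branch condition is joined into any assignment made under that branch.
LOW, HIGH = 0, 1

class Monitor:
    def __init__(self):
        self.pc_stack = [LOW]          # label of the current program counter

    def enter_branch(self, cond_label):
        self.pc_stack.append(max(self.pc_stack[-1], cond_label))

    def exit_branch(self):
        self.pc_stack.pop()

    def assign(self, value, value_label):
        # The stored label is the join of the value's label and the pc label.
        return value, max(value_label, self.pc_stack[-1])

m = Monitor()
a, a_label = True, HIGH                # secret bool a
m.enter_branch(a_label)                # if (a) ...
b, b_label = m.assign(0, LOW)          # b = 0 inside the branch
m.exit_branch()
print(b_label)                         # HIGH: b is tainted by the secret condition
```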

Issues:
– Label creep — examples: false flows, branches
– Declassification — example: encryption

Implementation approaches
Language-based IFC (Jif & others):
– Compile-time tracking for Java, Haskell
– Static analysis
– Provable security
– Label creep (false positives)
Dynamic analysis:
– Language / bytecode / machine code level
– Tainting
– False positives, false negatives
– Hybrid solutions
OS-based DIFC (Asbestos, HiStar, Flume):
– Run-time tracking enforced by a trusted kernel
– Works with any language, compiled or scripted, closed or open source

Distributed Information Flow Control. (Diagram: a DB holding Alice’s and Bob’s data, a gateway, and Alice’s data flowing out to Alice.) Two ingredients: 1. Data tracking. 2. Isolated declassification.

Information Flow Control (IFC). Goal: track which secrets a process has seen. Mechanism: each process gets a secrecy label.
– A label summarizes which categories of data a process is assumed to have seen.
– Examples: { “Financial Reports” }, { “HR Documents” }, { “Financial Reports”, “HR Documents” }.
(In the diagram, each category name is a “tag”; a set of tags is a “label”.)

Tags + Labels. Universe of tags: Finance, Legal, SecretProjects, HR, …
Each process p has a secrecy label S_p (initially {}) and a declassification set D_p (initially {}).
tag_t HR = create_tag(); — p creates a new tag and thereby gets the ability to declassify it: D_p = { HR }.
change_label({Finance}); change_label({Finance, HR}); … change_label({}); — p raises and lowers its own label (S_p = { Finance }, then { Finance, HR }, … — declassification in action).
DIFC rules: any process can add any tag to its label; a process that creates a tag gets the ability to declassify it.
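A minimal Python sketch of this label model (illustrative only; the names create_tag and change_label mirror the slide, but the rules shown are a simplification of what a real DIFC kernel such as Flume enforces):

```python
import itertools

_tag_counter = itertools.count()

class Process:
    def __init__(self):
        self.label = set()        # S_p: secrecy label (set of tags)
        self.declassify = set()   # D_p: tags this process may remove from its label

    def create_tag(self):
        """Create a fresh tag; the creator gains the ability to declassify it."""
        tag = next(_tag_counter)
        self.declassify.add(tag)
        return tag

    def change_label(self, new_label):
        """Any process may add any tag; removing a tag requires declassification rights."""
        removed = self.label - set(new_label)
        if not removed <= self.declassify:
            raise PermissionError("cannot drop tags: %s" % (removed - self.declassify))
        self.label = set(new_label)

p = Process()
hr = p.create_tag()
p.change_label({hr})     # raise label: S_p = {hr}
p.change_label(set())    # allowed: p created hr, so it may declassify it
```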

DIFC OS. (Diagram: processes p and q run on top of the kernel; p holds Alice’s data with S_p = {a}; q starts with S_q = {} and ends up with S_q = {a} after receiving the data; a dedicated declassifier process with S_d = {a} and D_d = {a} can release Alice’s data.)

Communication Rule. Process p with S_p = { HR } can send to process q with S_q = { HR, Finance }. In general, p can send to q iff S_p ⊆ S_q.
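As a one-line check (continuing the illustrative Python model above; subset inclusion is exactly Python's <= on sets):

```python
def can_send(sender_label, receiver_label):
    """p can send to q iff S_p is a subset of S_q."""
    return sender_label <= receiver_label

print(can_send({"HR"}, {"HR", "Finance"}))   # True
print(can_send({"HR", "Finance"}, {"HR"}))   # False
```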

Question: What interface should the kernel expose to applications?
– Easier question than “is my OS implementation secure?”
– Easy to get it wrong!

Approach 1: “Floating Labels” [IX, Asbestos]. (Diagram: p with S_p = {a} sends to q; the kernel automatically raises q’s label from S_q = {} to S_q = {a} upon receipt.)

Floaters leak data. (Diagram: an attacker spawns helper processes b0–b3 alongside Alice’s data (S = {a}) and a public “leak” file (S = {}); each helper ends up with S = {} or S = {a} depending on one bit of the secret, and the pattern of which helpers can still write to the leak file spells out the secret, e.g. 1001.)

Approach 2: “Set Your Own” [HiStar/Flume]. (Diagram: p has S_p = {a}, O_p = {a−, a+}; q starts with S_q = {}, O_q = {a+} and must itself change its label to S_q = {a} before it can receive.) Rule: S_p ⊆ S_q is a necessary precondition for send; the kernel does not raise labels automatically.

Case study: Flume. Goal: a user-level implementation (“apt-get install flume”). Approach: system call delegation [Ostia, Garfinkel et al., 2003], on Linux 2.6 (or OpenBSD 3.9).

System Call Delegation. (Diagram, without Flume: the web app calls open(“/hr/LayoffPlans”, O_RDONLY); glibc passes the call to the Linux kernel, which opens the Layoff Plans file directly.)

System Call Delegation. (Diagram, with Flume: the web app is linked against Flume’s libc, so open(“/hr/LayoffPlans”, O_RDONLY) is delegated to the Flume reference monitor, which checks labels before asking the Linux kernel to open the Layoff Plans file on the app’s behalf.)
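A toy sketch of the monitor-side check (illustrative Python, not Flume's actual code; the per-file labels, the process label argument, and the helper name monitored_open are assumptions made for the example):

```python
import os

# Hypothetical per-file secrecy labels; in a real system these live in
# persistent storage managed by the reference monitor.
FILE_LABELS = {"/hr/LayoffPlans": {"hr"}}

def monitored_open(path, proc_label, flags=os.O_RDONLY):
    """Reference monitor: allow a read only if the file's label is within the
    process's secrecy label, then perform the real open() on its behalf."""
    file_label = FILE_LABELS.get(path, set())
    if not file_label <= proc_label:
        raise PermissionError("label check failed for %s" % path)
    return os.open(path, flags)

# A process that has raised its label to include "hr" may read the file:
# fd = monitored_open("/hr/LayoffPlans", {"hr"})
```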

System Calls.
– IPC: read, write, select
– Process management: fork, exit, getpid
– DIFC: create_tag, change_label, fetch_label, send capabilities, receive capabilities

How To Prove Security. Property: Noninterference.

Noninterference [Goguen & Meseguer ’82]. (Diagram: two experiments. Experiment #1: a “HIGH” process p with S_p = {a} runs alongside a “LOW” process q with S_q = {}. Experiment #2: p is replaced by a different HIGH process p’ with S_p’ = {a}, while q is unchanged. Noninterference: q’s view is identical (=) in both experiments.)
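The definition suggests a simple illustrative test: run the system twice with different high inputs and check that the low-observable output is unchanged. A Python sketch (my own, assuming a hypothetical system modeled as a function from (high_input, low_input) to low-observable output):

```python
def noninterference_holds(system, high_inputs, low_input):
    """Check (by brute force over the given high inputs) that the low output
    does not depend on the high input."""
    outputs = {system(h, low_input) for h in high_inputs}
    return len(outputs) <= 1

# A leaky system: the low output depends on the secret.
leaky = lambda h, l: l + (1 if h else 0)
# A clean system: the low output ignores the secret.
clean = lambda h, l: l * 2

print(noninterference_holds(leaky, [True, False], 5))  # False
print(noninterference_holds(clean, [True, False], 5))  # True
```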

How To Prove Security (cont.). Property: Noninterference. A process-algebra model of a DIFC OS: Communicating Sequential Processes (CSP). Proof that the model fits the definition.

Proving noninterference.
– Formalism: Communicating Sequential Processes (CSP)
– Consider any possible system (with arbitrary user applications)
– Induction over all possible sequences of moves the system can make (i.e., traces)
– At each step, a case-by-case analysis of all system calls in the interface, proving no “interference”

Example App: MoinMoin Wiki. Results on Flume:
Allows adoption of Unix software?
– 1,000 LOC launcher/declassifier
– 1,000 out of 100,000 LOC in MoinMoin changed
– Python interpreter and Apache unchanged
Solves security vulnerabilities?
– Without our knowing, we inherited two ACL bypass bugs from MoinMoin
– Neither is exploitable in Flume’s MoinMoin
Performance?
– Within a factor of 2 of the original on read and write benchmarks

Flume Limitations.
– Bigger TCB than HiStar / Asbestos: the Linux stack (kernel + glibc + linker) plus the reference monitor (~22 kLOC)
– Covert channels via disk quotas
– Confined processes like MoinMoin don’t get the full POSIX API: spawn() instead of fork() & exec(); flume_pipe() instead of pipe()

DIFC vs. Traditional MAC:
– Traditional MAC: military systems; superiors constraining users; centralized declassification policy.
– DIFC: web-based systems and desktops; developers protecting themselves from bugs; application-specific declassification policies.

What are we trusting?
– Proof of noninterference
– Platform: hardware, virtual machine manager, kernel
– Reference monitor
– Declassifier (human decisions)
– Covert channels
– Physical access
– User authentication

Practical barriers to deployment of Information Flow Control systems:
– The programming model confuses programmers
– Lack of support: applications, libraries, operating systems
– People don’t care about security until it’s too late

Quantitative information flow control. Quantify how much information (in bits) leaks from (secret) inputs to outputs. Approach: dynamic analysis (extended tainting) followed by flow-graph analysis. Example: an online Battleship game.
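A very rough illustration of the idea (my own sketch, not the analysis used in the system from the lecture): treat the program as a channel from secret input to public output and bound the leakage by the log of the number of distinguishable outputs.

```python
import math

def leaked_bits_upper_bound(program, secret_inputs, public_input):
    """Channel-capacity style bound: if n distinct public outputs are reachable
    by varying only the secret, at most log2(n) bits of the secret can leak."""
    outputs = {program(s, public_input) for s in secret_inputs}
    return math.log2(len(outputs)) if outputs else 0.0

# Example: a program that reveals only whether the secret exceeds a threshold
# leaks at most 1 bit, regardless of how large the secret is.
prog = lambda secret, pub: secret > pub
print(leaked_bits_upper_bound(prog, range(256), 100))  # 1.0
```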