Jay Ligatti University of South Florida

 Interpose on the actions of some untrusted software
 Have authority to decide whether and how to allow those actions to be executed
 Are called runtime/security/program monitors

[Figure: the untrusted software sends a possibly unsafe action a to the monitor, which forwards a safe action a to the action-executing system (OS/hardware). Example actions: a = open(file,“r”) | shutdown() | login(sn,pw) | connect(addr,port) | …]

 Monitoring code can be inserted into the untrusted software or the executing system

[Figure: two placements. Either the monitor is inlined into the untrusted software, which then emits safe actions to the executing system, or the monitor is embedded in the executing system and receives the possibly unsafe actions directly.]

 In all cases, the monitor inputs possibly unsafe actions from the untrusted software and outputs safe actions to be executed

[Figure: the untrusted software sends possibly unsafe action a to the monitor; the monitor forwards safe action a to the action-executing system (OS/hardware).]

 Ubiquitous:
  - Operating systems (e.g., file access control)
  - Virtual machines (e.g., stack inspection)
  - Web browsers (e.g., JavaScript sandboxing)
  - Intrusion-detection systems
  - Firewalls
  - Auditing tools
  - Spam filters
  - Etc.
 Most of what are usually considered “computer security” mechanisms can be thought of as runtime monitors

 How do monitors operate to enforce policies?
 Which policies can runtime mechanisms enforce?
  - Which policies should we never even try to enforce at runtime?

[Figure: Venn diagram of the runtime-enforceable policies as a subset of all policies.]

 How do monitors operate to enforce policies?
 Which policies get enforced when we combine runtime mechanisms?
  - If mechanism M enforces policy P and mechanism M’ enforces policy P’, does M ∧ M’ enforce P ∧ P’?
  - What if P requires the first action executed to be fopen(f), but P’ requires the first action executed to be fopen(f’)?

 How do monitors operate to enforce policies?
 How efficiently does a mechanism enforce a policy?
  - What are the lower bounds on resources required to enforce policies of interest?
  - What does it mean for a mechanism to be efficient?
    Low space usage (SHA of Fong; BHA of Talhi, Tawbi, and Debbabi)
    Low time usage?

 How do monitors operate to enforce policies?
 Which policies can runtime mechanisms enforce?
 Which policies get enforced when we combine runtime mechanisms?
 How efficiently does a mechanism enforce a policy?
 What are the lower bounds on resources required to enforce policies of interest?

 Research questions
  - How do monitors operate to enforce policies?
  - Which policies can runtime mechanisms enforce?
 Related work vs. this work
 The model: systems, executions, monitors, policies, and enforcement
 Analysis of enforceable properties
 Summary and future work

 Most analyses of monitors are based on truncation automata (Schneider, 2000)
 Operation: halt the software being monitored (the “target”) immediately before any policy violation
 Limitation: real monitors normally respond to violations with remedial actions

[Figure: the target sends possibly unsafe action a to the monitor, whose only recourse is to halt the target before a reaches the executing system (OS/VM/CPU).]

 Edit automata: a powerful model of runtime enforcement
 Operation: actively transform target actions to ensure they satisfy the desired policy

[Figure: the target sends possibly unsafe action a to the monitor, which may pass a through, replace it with some a’, quietly suppress a, etc., before anything reaches the executing system (OS/VM/CPU).]

 Limitation: all actions are assumed totally asynchronous
  - The monitor can always get the next action after suppressing previous actions
  - The target can’t care about the results of executed actions; there are no results in the model
    E.g., the echo program “x=input(); output(x);” is outside the edit-automata model

 Mandatory results automata (MRAs) conservatively assume all actions are synchronous, and monitor those actions and their results
 Operation: actively transform actions and results to ensure they satisfy the desired policy

[Figure: the security monitor sits between the untrusted application and the trusted executing system, turning the application’s actions into safe actions and the system’s results into safe results.]
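The synchronous discipline above can be sketched as a simple interposition loop. This is an illustrative sketch only; the helper names (get_action, execute, sanitize_action, sanitize_result) are hypothetical placeholders, not part of the formal model.

```python
# Minimal sketch of an MRA-style monitor loop. Hypothetical helpers:
#   get_action      - yields the target's actions
#   execute         - runs a safe action on the system, returning its result
#   sanitize_action - maps a possibly unsafe action to a safe one
#   sanitize_result - maps a raw result to a safe one
def run_monitor(get_action, execute, sanitize_action, sanitize_result):
    """Interpose synchronously on every action and every result."""
    trace = []
    for action in get_action():                 # (1) input action from target
        trace.append((action, "in"))
        safe_action = sanitize_action(action)
        trace.append((safe_action, "out"))      # (2) output safe action
        result = execute(safe_action)           # (3) input result from system
        trace.append((result, "in"))
        safe_result = sanitize_result(safe_action, result)
        trace.append((safe_result, "out"))      # (4) output safe result
    return trace
```

With a sanitize_result that filters hidden files out of directory listings, this loop reproduces the ls example discussed later.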

 MRAs are stronger than truncation automata
  - Can accept actions and halt targets, but can also transform actions and results
 MRAs are weaker than edit automata
  - Asynchronicity lets edit automata “see” arbitrarily far into the future
  - Can postpone deciding how to edit an action until later
  - Arbitrary postponement is normally unrealistic

1. MRAs can enforce result-sanitization policies
  - The (trusted) mechanism sanitizes results before they are input to the (untrusted) target application
  - Many privacy, information-flow, and access-control policies are result-sanitization policies

[Figure: (1) the application invokes ls; (2) the monitor forwards ls to the system; (3) the system returns {foo.txt, .hidden}; (4) the monitor returns only {foo.txt} to the application.]

2. The model provides simpler and more expressive definitions of policies and enforcement than previous work (more on this later)

 Research questions
  - How do monitors operate to enforce policies?
  - Which policies can runtime mechanisms enforce?
 Related work vs. this work
 The model: systems, executions, monitors, policies, and enforcement
 Analysis of enforceable properties
 Summary and future work

 Systems are specified as sets of events
  - Let A be a finite or countably infinite set of actions
  - Let R (disjoint from A) be a finite or countably infinite set of action results
  - Then a system is specified as E = A ∪ R
 Example:
  - A = {popupWindow(“Confirm Shutdown”), shutdown()}
  - R = {OK, cancel, null}

 Execution: finite/infinite sequence of events
 Adopting a monitor-centric view, there are 4 event possibilities:
  (1) MRA inputs action a from the target => add a_i to the current trace

[Figure: a flows from the untrusted application to the security monitor.]

 Execution: finite/infinite sequence of events
 Adopting a monitor-centric view, there are 4 event possibilities:
  (2) MRA outputs action a to be executed => add a_o to the current trace

[Figure: a flows from the security monitor to the trusted executing system.]

 Execution: finite/infinite sequence of events
 Adopting a monitor-centric view, there are 4 event possibilities:
  (3) MRA inputs result r from the system => add r_i to the current trace

[Figure: r flows from the executing system to the security monitor.]

 Execution: finite/infinite sequence of events
 Adopting a monitor-centric view, there are 4 event possibilities:
  (4) MRA outputs result r to the target => add r_o to the current trace

[Figure: r flows from the security monitor to the untrusted application.]

 Example execution: ls_i; ls_o; {foo.txt,.hidden}_i; {foo.txt}_o

[Figure: four animation steps. The application sends ls to the monitor, the monitor forwards ls to the executing system, the system returns {foo.txt, .hidden}, and the monitor returns {foo.txt} to the application.]

 Example execution: shutdown_i; popupConfirm_o; OK_i; shutdown_o

[Figure: four animation steps. The application requests shutdown, the monitor pops up a confirmation dialog, the system returns OK, and the monitor then executes the shutdown.]

 Example execution: getMail(server)_i; null_o; getMail(server)_i; null_o; …

This is an infinite-length execution, so it represents a nonterminating run of the monitor (and target application).

[Figure: the application repeatedly calls getMail(server); the monitor returns null to the application each time.]

 E* = the set of all well-formed finite-length executions on a system with event set E
 E^ω = the set of all well-formed infinite-length executions on a system with event set E
 E^∞ = E* ∪ E^ω
 · = the empty execution (no events occur)
 x;x’ = the well-formed concatenation of executions x and x’
 x ≤ x’ = execution x is a prefix of x’
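Over finite executions, this notation can be mirrored directly in code. A sketch, representing finite executions as Python lists of events (the infinite executions in E^ω of course have no direct finite representation here):

```python
# Finite executions as lists of events (illustrative sketch only).
empty = []  # the empty execution: no events occur

def concat(x, xp):
    """x;x' : well-formed concatenation of finite executions x and x'."""
    return x + xp

def is_prefix(x, xp):
    """x <= x' : execution x is a prefix of execution x'."""
    return xp[:len(x)] == x
```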

 Metavariables:
  - e ranges over events
  - a ranges over actions
  - r ranges over results
  - x ranges over executions
  - α ranges over A ∪ {·} (potential actions)
  - ρ ranges over R ∪ {·} (potential results)

 An MRA M is a tuple (E, Q, q_0, δ)
  - E = the event set over which M operates
  - Q = M’s finite or countably infinite state set
  - q_0 = M’s initial state
  - δ = M’s (partially recursive) transition function
    δ : Q × E → Q × E
    Given the current MRA state and an event just input, δ returns the next MRA state and an event to output
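As a sketch, the tuple (E, Q, q_0, δ) translates into a small Python class; this encoding is illustrative, not an implementation from the paper.

```python
class MRA:
    """Sketch of an MRA (E, Q, q0, delta), where
    delta : Q x E -> Q x E maps (current state, input event)
    to (next state, output event)."""

    def __init__(self, events, states, q0, delta):
        self.events = events  # E
        self.states = states  # Q
        self.q = q0           # current state, initially q0
        self.delta = delta    # the transition function
  
    def step(self, event):
        """Input one event; update the state and return the output event."""
        self.q, out = self.delta(self.q, event)
        return out
```

For example, an MRA that passes every event through unchanged is MRA(E, Q, q0, lambda q, e: (q, e)).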

 In a configuration:
  - q is the MRA’s current state
  - α_i is empty or the action being input to the MRA
  - α_o is empty or the action being output from the MRA
  - ρ_i is empty or the result being input to the MRA
  - ρ_o is empty or the result being output from the MRA

[Figure: the components α_i, α_o, ρ_i, and ρ_o label the arrows between the untrusted application, the security monitor (in state q), and the trusted executing system.]

 Starting configuration: the MRA in its initial state q_0, with the target’s events pending as input
 A single-step judgment specifies how MRAs take small steps (to input/output a single event)
  - Single-step judgment form: C → C’
 Then the multi-step judgment is the reflexive, transitive closure of the single-step relation
  - Multi-step judgment form: C →* C’

 Rules for inputting and reacting to actions:
  - (Input-Action): when the target’s next event is action a, the MRA in state q inputs a, adding a_i to the trace
  - (Output-Action-for-Action): when δ(q,a) = (q’,a’), the MRA outputs action a’, adding a’_o to the trace and moving to state q’
  - (Output-Result-for-Action): when δ(q,a) = (q’,r), the MRA outputs result r to the target, adding r_o to the trace and moving to state q’

 Rules for inputting and reacting to results:
  - (Input-Result): when the system’s next event is result r, the MRA in state q inputs r, adding r_i to the trace
  - (Output-Action-for-Result): when δ(q,r) = (q’,a), the MRA outputs action a, adding a_o to the trace and moving to state q’
  - (Output-Result-for-Result): when δ(q,r) = (q’,r’), the MRA outputs result r’, adding r’_o to the trace and moving to state q’

 M  x means MRA M, when its input events match the (possibly infinite) sequence of input events in x, produces the execution x M  x iff:  if x ∈ E ω then ∀ x’≤x : ∃ C : C 0 → * C  if x ∈ E * then ∃ C :  C 0 → * C  if x ends with an input event then M never transitions from C 43 x’ x

 The semantics matches the possible behaviors we’ve observed in many implemented monitoring systems:
  - Polymer (with Bauer and Walker)
  - PSLang (Erlingsson and Schneider)
  - AspectJ (Kiczales et al.)
  - Etc.

 Hidden-file-filtering MRA M = (E, Q, q_0, δ)
  - E = {ls, …}
  - Q = {T, F} (are we executing an ls?)
  - q_0 = F
  - δ(q,e) = (F, e) if q=F and e≠ls
             (T, e) if q=F and e=ls
             (F, filter(e)) if q=T
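A sketch of this δ in Python, with events encoded as strings and listing results as lists; filter_listing stands in for the slide's filter(e), assumed here to drop dot-files:

```python
def filter_listing(listing):
    """Hypothetical filter(e): drop hidden files from a directory listing."""
    return [name for name in listing if not name.startswith(".")]

def delta(q, e):
    """Transition function of the hidden-file-filtering MRA.
    States: "F" (not executing an ls), "T" (awaiting an ls result)."""
    if q == "F" and e != "ls":
        return ("F", e)              # unrelated events pass through
    if q == "F" and e == "ls":
        return ("T", e)              # forward ls; await its result
    return ("F", filter_listing(e))  # q == "T": sanitize the listing
```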

 Shutdown-confirming MRA M = (E, Q, q_0, δ)
  - E = {shutdown, popupConfirm, OK, cancel, null, …}
  - Q = {T, F} (are we confirming a shutdown?)
  - q_0 = F
  - δ(q,e) = (F, e) if q=F and e≠shutdown
             (T, popupConfirm) if q=F and e=shutdown
             (F, null) if q=T and e=cancel
             (F, shutdown) if q=T and e=OK
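The same string encoding gives a sketch of the shutdown-confirming δ:

```python
def delta(q, e):
    """Transition function of the shutdown-confirming MRA.
    States: "F" (idle), "T" (awaiting the user's confirmation)."""
    if q == "F" and e != "shutdown":
        return ("F", e)               # unrelated events pass through
    if q == "F" and e == "shutdown":
        return ("T", "popupConfirm")  # ask the user first
    if q == "T" and e == "cancel":
        return ("F", "null")          # suppress the shutdown
    if q == "T" and e == "OK":
        return ("F", "shutdown")      # confirmed: let it proceed
```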

 (Technical note: here we’re really only considering special kinds of policies, called properties)
 Policies are predicates on (or sets of) executions
 P(x) iff execution x satisfies policy P

The hidden-file-filtering policy:
 P(·) [it’s OK for the target to do nothing]
 ¬P(ls_i) [the monitor may not just stop upon inputting ls; it must then output ls]
 P(ls_i; e_o) iff e=ls [the monitor must output only ls after inputting ls; it’s then OK for the system to never return a listing]
 ∀ results L: ¬P(ls_i; ls_o; L_i) [the monitor may not stop upon inputting L; it must return the filtered list to the target]
 P(ls_i; ls_o; L_i; e_o) iff e=filter(L) [the monitor must filter listings]

 Policies here can reason about results
  - Enables result-sanitization policies
  - E.g., the hidden-file-filtering policy
 Policies here can reason about input events
  - Enables policies to dictate exactly how mechanisms can/must transform events
  - E.g., the confirm-shutdown policy
=> Powerful, but practical, expressiveness

 Sound enforcement (no false negatives)
  - MRA M soundly enforces policy P iff ∀x ∈ E^∞ : (M ⇓ x ⇒ P(x))
 Complete enforcement (no false positives)
  - MRA M completely enforces policy P iff ∀x ∈ E^∞ : (P(x) ⇒ M ⇓ x)
 Precise enforcement (no false positives or negatives)
  - MRA M precisely enforces policy P iff ∀x ∈ E^∞ : (M ⇓ x ⇔ P(x))
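Restricted to a finite universe of candidate executions, the three definitions can be checked directly. This finite sketch only illustrates the quantifier structure (the real definitions range over all of E^∞); here emits(x) stands for M ⇓ x and P is the policy predicate.

```python
def soundly_enforces(emits, P, executions):
    """M produces x => P(x): everything the monitor produces satisfies P."""
    return all(P(x) for x in executions if emits(x))

def completely_enforces(emits, P, executions):
    """P(x) => M produces x: every P-satisfying execution can be produced."""
    return all(emits(x) for x in executions if P(x))

def precisely_enforces(emits, P, executions):
    """M produces x <=> P(x): sound and complete."""
    return (soundly_enforces(emits, P, executions)
            and completely_enforces(emits, P, executions))
```

For instance, an overly permissive monitor that emits every execution is complete but not sound for any policy that rules some execution out.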

 Simpler: no need for extra “transparency” constraints, which can be rolled into policy definitions (now that policies can reason about input events)
 More expressive: can reason about complete and precise enforcement too

 Research questions
  - How do monitors operate to enforce policies?
  - Which policies can runtime mechanisms enforce?
 Related work vs. this work
 The model: systems, executions, monitors, policies, and enforcement
 Analysis of enforceable properties
  - What are the limits of MRA enforcement?
 Summary and future work

 Policy P on a system with event set E can be soundly enforced by some MRA M iff there exists an (R.E.) predicate R over E* such that all of the following hold:
  - R(·)
  - ∀(x;e_i) ∈ E*:
     ¬R(x), or
     P(x;e_i), or
     ∃e’ ∈ E : (R(x;e_i;e’_o) ∧ P(x;e_i;e’_o))
  - ∀x ∈ E^ω: if ¬P(x) then ∃(x’;e_i) ≤ x : ¬R(x’)

 Policy P on a system with event set E can be completely enforced by some MRA M iff:
  - ∀(x;e_i) ∈ E*:
     ∀e’ ∈ E : dead_P(x;e_i;e’_o), or
     ¬P(x;e_i) ∧ ∃!e’ ∈ E : alive_P(x;e_i;e’_o)
  (where alive_P(x) iff ∃x’ ∈ E^∞ : P(x;x’), and dead_P(x) iff ¬alive_P(x))

 Policy P on a system with event set E can be precisely enforced by some MRA M iff all of the following hold:
  - P(·)
  - ∀(x;e_i) ∈ E*:
     ¬P(x), or
     P(x;e_i) ∧ ∀e’ ∈ E : dead_P(x;e_i;e’_o), or
     ¬P(x;e_i) ∧ ∃!e’ ∈ E : P(x;e_i;e’_o) ∧ ∃!e’ ∈ E : alive_P(x;e_i;e’_o)
  - ∀x ∈ E^ω: if ¬P(x) then ∃(x’;e_i) ≤ x : ¬P(x’)

 Research questions
  - How do monitors operate to enforce policies?
  - Which policies can runtime mechanisms enforce?
 Related work vs. this work
 The model: systems, executions, monitors, policies, and enforcement
 Analysis of enforceable properties
 Summary and future work

 Started building a theory of runtime enforcement based on MRAs, which:
  - model the realistic ability of runtime mechanisms to transform synchronous actions and their results
  - can enforce result-sanitization policies and policies based on input events
  - provide simpler and more expressive definitions of policies and enforcement than previous models

 Something between edit automata (which assume asynchronous actions) and MRAs (which assume synchronous actions)?
  - How would the monitor know when the target is waiting for a result, and for which action?
  - Static analysis of the target application?
  - Could get complicated

 Which policies get enforced when we combine runtime mechanisms?
 How efficiently does a mechanism enforce a policy?
 What are the lower bounds on resources required to enforce policies of interest?
 Having a realistic operational model of runtime enforcement seems like a good first step toward addressing these research questions
