
Slide 1: Reference Monitors, Part 1: OS & SFI. Greg Morrisett, Cornell University. Edited by Bill Mitchell, CSUS, with permission, Spring 2002.

Slide 2: A Reference Monitor (RM)
Observes the execution of a program and halts the program if it is about to violate the security policy.
Common examples that apply the RM architecture:
– operating systems (hardware-based)
– interpreters (software-based)
– firewalls
Claim: the majority of today’s enforcement mechanisms are instances of reference monitors.
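As a minimal sketch of the idea (not from the slides): a reference monitor is a check interposed before every security-relevant operation. The policy predicate, the string-valued event description, and the halting behavior below are illustrative assumptions, not a real system's API.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative policy: forbid writes to anything under "/etc". */
    static bool policy_allows(const char *op, const char *target) {
        return !(strcmp(op, "write") == 0 && strncmp(target, "/etc", 4) == 0);
    }

    /* The monitor: observe what the program is about to do and
       halt it before the bad step happens. */
    static void monitor_step(const char *op, const char *target) {
        if (!policy_allows(op, target)) {
            fprintf(stderr, "reference monitor: halting before %s %s\n", op, target);
            exit(1);
        }
    }

An OS kernel plays exactly this role in hardware-based form: the "events" are system calls, and the check runs before each call is serviced.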

Slide 3: Reference Monitors Outline
Analysis of RM power and limitations:
– What is a security policy?
– What policies can reference monitors enforce?
Traditional operating systems:
– Policies and practical issues
– Hardware enforcement of OS policies
Software enforcement of OS policies:
– Why?
– Software-Based Fault Isolation
– Java and CLR stack inspection, taken to a more sophisticated level in the new Microsoft .NET CLR-compliant languages
– Inlined Reference Monitors

Slide 4: Requirements for a Monitor
Must have (reliable) access to information about what the program is about to do.
– e.g., what instruction is it about to execute?
Must have the ability to “stop” the program.
– You can’t stop a program running on another machine that you don’t own.
– Really, stopping isn’t necessary; a transition to a “good” state (suspend, debug, …) is enough.
Must protect the monitor’s state and code from tampering.
– This is a key reason why a kernel’s data structures and code aren’t accessible by user code.
In practice, must have low overhead.

Slide 5: What Policies Are Realistic?
We’ll see that, under quite liberal assumptions:
– there’s a nice class of policies that reference monitors can enforce (safety properties);
– there are desirable policies that no reference monitor can enforce precisely.
The ideal: reject a program if and only if it violates the policy.
Assumptions:
– the monitor can have access to the entire state of the computation;
– the monitor can have infinite state (by runtime memory management);
– but the monitor can’t guess the future: the predicate it uses to determine whether to halt a program must be computable.

Slide 6: Schneider's Formalism
A reference monitor only sees one execution sequence of a program, so we can only enforce policies P such that:
(1) P(S) = ∀σ ∈ S. P̂(σ)
where P̂ is a predicate on individual sequences.
A set of execution sequences S is a property if membership is determined solely by each sequence itself and not by the other members of the set.

Slide 7: More Constraints on Monitors
The monitor shouldn’t be able to “see” the future.
– Assumption: it must make decisions in finite time.
– Suppose P̂(σ) is true but P̂(σ[..i]) is false for some prefix σ[..i] of σ. When the monitor sees σ[..i] it can’t tell whether or not the execution will yield σ or some other sequence, so the best it can do is rule out all sequences involving σ[..i], including σ.
So in some sense, P̂ must be continuous:
(2) ∀σ. P̂(σ) ⇒ (∀i. P̂(σ[..i]))

Slide 8: Safety Properties
A predicate P on sets of sequences such that
(1) P(S) = ∀σ ∈ S. P̂(σ)
(2) ∀σ. P̂(σ) ⇒ (∀i. P̂(σ[..i]))
is a safety property: “no bad thing will happen.”
Conclusion: a reference monitor can’t enforce a policy P unless it’s a safety property. In fact, Schneider shows that reference monitors can (in theory) implement any safety property.

Slide 9: Safety vs. Security
Safety is what we can implement, but is it what we want?
– “Lack of information flow” isn’t a property.
Safety ensures something bad won’t happen, but it doesn’t ensure something good will eventually happen:
– the program will terminate
– the program will eventually release the lock
– the user will eventually make payment
These are examples of liveness properties.
– Policies involving availability aren’t safety properties.
– So a reference monitor can’t handle denial-of-service?

Slide 10: Safety Is Nice
Safety does have its benefits:
– Safety properties compose: if P and Q are safety properties, then P & Q is a safety property (just the intersection of the allowed traces), as sketched below.
– Safety properties can approximate liveness by setting limits; e.g., we can require that a program terminates within k steps.
– We can also approximate many other security policies (e.g., information flow) by simply choosing a stronger safety property.
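As a quick check (added here, not on the slide) that conjunction preserves the two conditions from slides 6-8, writing A_P for the set of traces that P allows:

    \[
      \widehat{P \wedge Q}(\sigma) \;=\; \hat{P}(\sigma) \wedge \hat{Q}(\sigma),
      \qquad
      (P \wedge Q)(S) \;=\; \forall \sigma \in S.\ \hat{P}(\sigma) \wedge \hat{Q}(\sigma)
      \quad\text{(condition 1)}
    \]
    \[
      \hat{P}(\sigma) \wedge \hat{Q}(\sigma) \;\Rightarrow\;
      \forall i.\ \hat{P}(\sigma[..i]) \wedge \hat{Q}(\sigma[..i])
      \quad\text{(condition 2)},
      \qquad
      A_{P \wedge Q} \;=\; A_P \cap A_Q .
    \]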

Slide 11: Practical Issues
In theory, a monitor could:
– examine the entire history and the entire machine state to decide whether or not to allow a transition;
– perform an arbitrary computation to decide whether or not to allow a transition.
In practice, most systems:
– keep a small piece of state to track history,
– only look at labels on the transitions,
– have small labels, and
– perform simple tests.
Otherwise, the overheads would be overwhelming.
– So policies are practically limited by the vocabulary of labels, the complexity of the tests, and the state maintained by the monitor. (A small example follows.)
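For concreteness, a minimal sketch (not from the slides) of such a monitor: a two-state security automaton for the illustrative policy "no network send after a file read". It keeps one bit of state, inspects only a small label on each transition, and runs a constant-time test; the event labels are assumptions made for the example.

    #include <stdio.h>
    #include <stdlib.h>

    /* Labels on transitions: the only thing the monitor inspects. */
    typedef enum { EV_FILE_READ, EV_NET_SEND, EV_OTHER } event_t;

    /* The monitor's entire state: has a file read happened yet? */
    static int read_happened = 0;

    /* Called before every security-relevant transition. */
    void monitor(event_t ev) {
        switch (ev) {
        case EV_FILE_READ:
            read_happened = 1;            /* remember that a read occurred */
            break;
        case EV_NET_SEND:
            if (read_happened) {          /* simple test on small state */
                fprintf(stderr, "policy violation: send after read\n");
                exit(1);                  /* halt before the bad transition */
            }
            break;
        default:
            break;                        /* other labels are irrelevant to this policy */
        }
    }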

Slide 12: Software Fault Isolation (SFI), Wahbe et al. (SOSP ’93)
Keep software components in the same hardware-based address space, and use a software-based reference monitor to isolate the components into logical address spaces.
– Conceptually: check each read, write, and jump to make sure it’s within the component’s logical address space (see the sketch below).
– Hope: communication becomes as cheap as a procedure call.
– Worry: the overheads of checking will swamp the benefits of cheap communication.
Note: this doesn’t deal with other policy issues, e.g., availability of the CPU.
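A minimal sketch of the conceptual check; the segment bounds and helper names are assumptions for illustration, and the paper's actual technique (slide 17) uses masking rather than comparisons:

    #include <stdint.h>
    #include <stdlib.h>

    /* Assumed bounds of one component's logical address space. */
    static uint32_t data_base, data_limit;   /* data segment: [data_base, data_limit) */
    static uint32_t code_base, code_limit;   /* code segment: [code_base, code_limit) */

    /* Inserted before each load or store: fail-stop if out of range. */
    static inline uint32_t check_data(uint32_t addr) {
        if (addr < data_base || addr >= data_limit) exit(1);
        return addr;
    }

    /* Inserted before each computed jump: the target must lie in the code segment. */
    static inline uint32_t check_jump(uint32_t target) {
        if (target < code_base || target >= code_limit) exit(1);
        return target;
    }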

Slide 13: One Way to SFI
Interpret the untrusted binary, checking each step (example binary shown below the interpreter):

void interp(int pc, int reg[], int mem[], int code[], int memsz, int codesz) {
  while (true) {
    if (pc >= codesz) exit(1);
    int inst = code[pc], rd = RD(inst), rs1 = RS1(inst),
        rs2 = RS2(inst), immed = IMMED(inst);
    switch (opcode(inst)) {
    case ADD:
      reg[rd] = reg[rs1] + reg[rs2];
      break;
    case LD: {
      int addr = reg[rs1] + immed;
      if (addr >= memsz) exit(1);
      reg[rd] = mem[addr];
      break;
    }
    case JMP:
      pc = reg[rd];
      continue;
    ...
    }
    pc++;
  }
}

Example binary:
0: add r1,r2,r3
1: ld r4,r3(12)
2: jmp r4

Slide 14: Pros & Cons of the Interpreter
Pros:
– easy to implement (small TCB)
– works with binaries (independent of the high-level language)
– easy to enforce other aspects of the OS policy
Cons:
– terrible execution overhead (25x? 70x?), but it’s a start.

Slide 15: Partial Evaluation (PE)
A technique for speeding up interpreters:
– we know what the code is, so
– specialize the interpreter to the code:
  unroll the loop (one copy for each instruction),
  specialize the switch to each instruction, and
  compile the resulting code.
For a cool example of this, see Fred Smith's thesis (hanging off my web page).

Slide 16: Example PE

Original binary:
  0: add r1,r2,r3
  1: ld r4,r3(12)
  2: jmp r4
  ...

Interpreter:
  while (true) {
    if (pc >= codesz) exit(1);
    int inst = code[pc];
    ...
  }

Specialized interpreter:
  reg[1] = reg[2] + reg[3];
  addr = reg[3] + 12;
  if (addr >= memsz) exit(1);
  reg[4] = mem[addr];
  pc = reg[4];

Resulting compiled code:
  0: add r1,r2,r3
  1: addi r5,r3,12
  2: subi r6,r5,memsz
  3: jab _exit
  4: ld r4,r5(0)
  ...

Slide 17: SFI in Practice
Used a hand-written specializer or rewriter.
– Code and data for a domain live in one contiguous segment:
  the upper bits of every address are the same and form a segment id;
  a separate code space ensures code is not modified.
– Inserts code to ensure stores (and optionally loads) stay in the logical address space (a sketch follows):
  force the upper bits of the address to be the segment id;
  no branch penalty: just mask the address;
  may have to re-allocate registers and adjust PC-relative offsets in the code;
  a simple analysis is used to eliminate unnecessary masks.
– Inserts code to ensure each jump goes to a valid target:
  it must be in the code segment for the domain;
  it must be the beginning of the translation of a source instruction;
  in practice, limited to instructions with labels.
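A minimal sketch (not the paper's exact code) of the masking step, assuming 32-bit addresses with the segment id in the upper bits; the shift width, variable names, and the "dedicated register" modeled as a global are illustrative assumptions:

    #include <stdint.h>

    #define SEG_SHIFT   24u                        /* assumed: low 24 bits index within a segment */
    #define OFFSET_MASK ((1u << SEG_SHIFT) - 1u)   /* keeps the within-segment offset */

    static uint32_t data_seg_id;     /* upper bits identifying the domain's data segment */
    static uint32_t dedicated_reg;   /* models the dedicated sandbox register */

    /* The short sandboxing sequence inserted before a store: mask off the
       upper bits, then OR in the data segment id.  The result always lands
       in the data segment, so no comparison or branch is needed. */
    static inline uint32_t sandbox_store_addr(uint32_t addr) {
        dedicated_reg = (addr & OFFSET_MASK) | (data_seg_id << SEG_SHIFT);
        return dedicated_reg;
    }

Because only inserted code writes the dedicated register and always leaves it pointing into the data segment, even a jump that skips part of the check cannot make the following store escape the segment (see the next slide).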

Slide 18: More on Jumps
PC-relative jumps are easy: just adjust to the new instruction’s offset.
Computed jumps are not:
– we must ensure that code doesn’t jump into or around a check, or else that it’s safe for the code to do the jump;
– for this paper, they ensured the latter:
  a dedicated register holds the address that’s going to be written, so all writes go through this register;
  only inserted code changes this value, and it’s always changed (atomically) to a value that’s in the data segment;
  so at all times the address is “valid” for writing;
  this works with little overhead for almost all computed jumps.
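The jump targets themselves can be sandboxed analogously; a sketch under the illustrative assumptions of the same segment layout and a RISC target with fixed 4-byte instructions (clearing the low two bits keeps targets instruction-aligned, though it does not by itself guarantee the target is the start of a translated source instruction):

    #include <stdint.h>

    #define SEG_SHIFT   24u                        /* same assumed segment layout as before */
    #define OFFSET_MASK ((1u << SEG_SHIFT) - 1u)

    static uint32_t code_seg_id;                   /* upper bits of the domain's code segment */

    /* Force a computed jump target into the domain's code segment and
       onto a 4-byte instruction boundary (illustrative sketch only). */
    static inline uint32_t sandbox_jump_target(uint32_t target) {
        return ((target & OFFSET_MASK) & ~3u) | (code_seg_id << SEG_SHIFT);
    }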

Slide 19: More SFI Details
Protection vs. sandboxing:
– Protection is fail-stop and gives stronger security guarantees (e.g., it also covers reads):
  requires 5 dedicated registers and a 4-instruction sequence;
  20% overhead on 1993 RISC machines.
– Sandboxing covers only stores:
  requires only 2 registers and a 2-instruction sequence;
  5% overhead.
Remote procedure call:
– roughly 10x the cost of a procedure call;
– roughly 10x faster than a really good OS RPC.
Sequoia DB benchmarks: 2-7% overhead for SFI, compared to 18-40% overhead for the OS-based approach.

Slide 20: Questions
What happens on the x86?
– small number of registers
– variable-length instruction encoding
What happens with discontiguous hunks of memory?
What would happen if we really didn’t trust the extension?
– i.e., check the arguments to an RPC?
– timeouts on upcalls?
Does this really scale to secure systems?

