Slide 1: September 14, 2006. Lecture 3, IS 2150 / TEL 2810: Introduction to Security

Slide 2: What is a secure system?
- A simple definition: a secure system does not allow violations of a security policy.
- Alternative view: based on the distribution of rights to the subjects.
- Leakage of rights (unsafe with respect to a right r): assume that A, representing a secure state, does not contain the right r in any element of A. The right r is said to be leaked if a sequence of operations/commands adds r to an element of A that did not previously contain r.

Slide 3: What is a secure system?
- Safety of a system with initial protection state X_0:
- Safe with respect to r: the system is safe with respect to r if r can never be leaked.
- Otherwise it is called unsafe with respect to the right r.

Slide 4: The Safety Problem, formally
- Given an initial state X_0 = (S_0, O_0, A_0), a set of primitive commands c, and a right r not in A_0[s, o]:
- Can we reach a state X_n where there exist s, o such that A_n[s, o] includes a right r not in A_0[s, o]?
- If so, the system is not safe.
- But is "safe" the same as secure?
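The leakage condition above can be sketched in a few lines of Python. This is a minimal illustration, not HRU's formalism: the access matrix is modeled as a dict mapping (subject, object) pairs to sets of rights, and the function name `leaks` is invented for this sketch.

```python
# Sketch of the leakage test: a right r is leaked if some reached state X_n
# places r in a cell A_n[s, o] that did not contain r in the initial state X_0.

def leaks(A0, An, r):
    """Return True if right r appears in a cell of An where X_0 lacked it."""
    for cell, rights in An.items():
        if r in rights and r not in A0.get(cell, set()):
            return True
    return False

A0 = {("alice", "file"): {"read"}}
An = {("alice", "file"): {"read", "write"}}   # some command sequence added "write"
print(leaks(A0, An, "write"))  # True: "write" was leaked
print(leaks(A0, An, "read"))   # False: "read" was already present in X_0
```

The hard part of the safety problem is, of course, not this check but deciding whether any reachable X_n triggers it.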

Slide 5: Decidability Results (Harrison, Ruzzo, Ullman)
- Theorem: Given a system where each command consists of a single primitive command (mono-operational), there exists an algorithm that will determine whether a protection system with initial state X_0 is safe with respect to a right r.

Slide 6: Decidability Results (Harrison, Ruzzo, Ullman)
Proof sketch: bound the number of commands needed to leak.
- Delete/destroy commands cannot cause (or conceal) a leak, so they can be ignored.
- Create/enter: all new subjects/objects are "equal", so treat all new subjects as one.
- There is no test for absence: tests on A[s_1, o_1] and A[s_2, o_2] have the same result as the same tests on A[s_1, o_1] and A[s_1, o_2], where A[s_1, o_2] = A[s_1, o_2] ∪ A[s_2, o_2].
- If a leak of any of the n generic rights is possible, it must be possible within k = n(|S_0| + 1)(|O_0| + 1) + 1 commands.
- Enumerate all command sequences up to this bound to decide.
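The bound k from the proof sketch can be computed directly. A trivial illustration (the function name is mine, not HRU's):

```python
# The HRU bound for mono-operational systems: if a right can be leaked at all,
# it can be leaked within k = n(|S_0| + 1)(|O_0| + 1) + 1 commands, where n is
# the number of generic rights and S_0, O_0 are the initial subjects/objects.
# The "+1" terms account for the single representative created subject/object.

def hru_bound(n_rights, n_subjects, n_objects):
    """Upper bound on the command-sequence length a decision procedure must try."""
    return n_rights * (n_subjects + 1) * (n_objects + 1) + 1

print(hru_bound(3, 2, 2))  # 3 * 3 * 3 + 1 = 28
```

The bound is finite, which is what makes exhaustive enumeration, and hence decidability, possible for the mono-operational case.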

Slide 7: Decidability Results (Harrison, Ruzzo, Ullman)
- It is undecidable whether a given state of a given protection system is safe for a given generic right.
- For the proof, we need Turing machines and the halting problem.

Slide 8: What is the implication?
- Safety is decidable for some models. Are they practical?
- Safety only works if the maximum set of rights is known in advance.
- The policy must specify all rights someone could get, not just the rights they currently have.
- Where might this make sense?

Slide 9: Back to HRU: fundamental questions
- How can we determine that a system is secure? We need to define what we mean by a system being "secure".
- Is there a generic algorithm that allows us to determine whether a computer system is secure?

Slide 10: Turing Machine and the Halting Problem
- The halting problem: given a description of an algorithm and a description of its initial arguments, determine whether the algorithm, when executed with these arguments, ever halts (the alternative is that it runs forever without halting).

Slide 11: Turing Machine and the Halting Problem
- Reduce the halting problem for Turing machines to the safety problem.
- If the safety problem were decidable, we could decide whether a TM halts, showing that the halting problem is decidable: a contradiction.

Slide 12: Turing Machine
- A TM is an abstract model of a computer, introduced by Alan Turing in 1936. A TM consists of:
- A tape divided into cells, infinite in one direction.
- A set of tape symbols M; M contains a special blank symbol b.
- A set of states K.
- A head that can read and write symbols.
- An action table that tells the machine what symbol to write, how to move the head ('L' for left, 'R' for right), and what the next state is.

Slide 13: Turing Machine
- The action table describes the transition function δ.
- Transition function δ(k, m) = (k', m', L): in state k, the symbol m at the tape location is replaced by the symbol m', the head moves left one square, and the TM enters state k'.
- The halting state is q_f; the TM halts when it enters this state.
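The machine just described can be sketched as a small simulator. This is a minimal illustration under stated assumptions: the tape is right-infinite and filled with blanks 'b', the transition table is a dict, and the names (`run_tm`, `delta`) are invented for this sketch.

```python
# Minimal Turing-machine simulator: delta maps (state, symbol) to
# (next state, written symbol, move), with move 'L' or 'R'.

def run_tm(delta, tape, state, halt_state, max_steps=1000):
    """Run the TM; return (tape contents, halted?) after at most max_steps."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            return "".join(tape).rstrip("b"), True
        symbol = tape[head]
        if (state, symbol) not in delta:          # no applicable transition
            return "".join(tape).rstrip("b"), False
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
        head = max(head, 0)                        # tape is infinite to the right only
        if head == len(tape):
            tape.append("b")                       # extend the tape lazily with blanks
    return "".join(tape).rstrip("b"), False

# A machine that replaces each 'A' with 'X', then halts on the first blank.
delta = {("k", "A"): ("k", "X", "R"), ("k", "b"): ("qf", "b", "R")}
print(run_tm(delta, "AAb", "k", "qf"))  # ('XX', True)
```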

Slide 14: Turing Machine (example)
- Tape cells 1 to 4 hold A, B, C, D; the head is on cell 3; the current state is k and the current symbol is C.
- Let δ(k, C) = (k_1, X, R), where k_1 is the next state: C is replaced by X, and the head moves right to cell 4. The tape is now A, B, X, D.
- Now let δ(k_1, D) = (k_2, Y, L), where k_2 is the next state. What happens?

Slide 15: General Safety Problem
- Theorem: It is undecidable whether a given state of a given protection system is safe for a given generic right.
- Proof: reduce the TM halting problem to the safety problem.
- Symbols and states map to rights; each tape cell maps to a subject.
- Cell s_i holds symbol A: s_i has the A right on itself.
- Cell s_k is the rightmost cell: s_k has the end right on itself.
- State p with the head at s_i: s_i has the p right on itself.
- Distinguished right own: s_i owns s_{i+1} for 1 ≤ i < k.
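The encoding above can be sketched in a few lines. A minimal illustration: cells are 0-indexed here (the slides use 1-indexed s_i), and the dict-based matrix and the name `encode` are my own choices for the sketch.

```python
# Encode a TM configuration as an access matrix: each tape cell is a subject;
# its tape symbol (and the state, at the head position) are rights on the
# diagonal entry; "own" links each cell to its right neighbour; the last
# cell also carries the "end" right.

def encode(tape, state, head):
    """Build the access matrix for a TM configuration; head is a cell index."""
    A = {}
    last = len(tape) - 1
    for i, symbol in enumerate(tape):
        A[(i, i)] = {symbol}
        if i == head:
            A[(i, i)].add(state)       # the head's cell also holds the state right
        if i < last:
            A[(i, i + 1)] = {"own"}    # s_i owns s_{i+1}
    A[(last, last)].add("end")         # rightmost cell carries the end right
    return A

A = encode(["A", "B", "C", "D"], "k", 2)
print(A[(2, 2)])   # holds 'C' and 'k': symbol C, head here in state k
print(A[(3, 3)])   # holds 'D' and 'end': rightmost cell
print(A[(0, 1)])   # {'own'}
```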

Slide 16: Command Mapping (Left move)
δ(k, C) = (k_1, X, L), when the head is not in the leftmost cell:

  command c_{k,C}(s_i, s_{i-1})
    if own in A[s_{i-1}, s_i] and k in A[s_i, s_i] and C in A[s_i, s_i]
    then
      delete k from A[s_i, s_i];
      delete C from A[s_i, s_i];
      enter X into A[s_i, s_i];
      enter k_1 into A[s_{i-1}, s_{i-1}];
  end
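The left-move command can be mimicked with set operations on a dict-based access matrix. This is a hypothetical rendering of the command above, not HRU's formal syntax; the function and variable names are mine.

```python
# Simulate delta(k, C) = (k_1, X, L) on the access-matrix encoding:
# remove the state and symbol rights from s_i's diagonal cell, write the
# new symbol there, and move the state right to s_{i-1}'s diagonal cell.

def left_move(A, si, si_prev, k, C, k1, X):
    """Apply the left-move command if its conditions hold; return A."""
    if ("own" in A[(si_prev, si)]
            and k in A[(si, si)] and C in A[(si, si)]):
        A[(si, si)] -= {k, C}          # head (state k) and symbol C leave s_i
        A[(si, si)] |= {X}             # new symbol X written at s_i
        A[(si_prev, si_prev)] |= {k1}  # head, now in state k_1, moves to s_{i-1}
    return A

A = {("s1", "s1"): {"A"}, ("s1", "s2"): {"own"},
     ("s2", "s2"): {"C", "k"}}
left_move(A, "s2", "s1", "k", "C", "k1", "X")
print(sorted(A[("s2", "s2")]), sorted(A[("s1", "s1")]))  # ['X'] ['A', 'k1']
```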

Slide 17: Mapping
- The access matrix for the configuration: A[s_1, s_1] = {A}, A[s_2, s_2] = {B}, A[s_3, s_3] = {C, k}, A[s_4, s_4] = {D, end}, and own in A[s_i, s_{i+1}] for i = 1, 2, 3.
- This encodes the tape A, B, C, D with the head on cell 3, current state k, and current symbol C.

Slide 18: Mapping (Left Move)
- After δ(k, C) = (k_1, X, L), where k is the current state and k_1 the next state: A[s_1, s_1] = {A}, A[s_2, s_2] = {B, k_1}, A[s_3, s_3] = {X}, A[s_4, s_4] = {D, end}.
- If the head is in the leftmost cell, both s_i and s_{i-1} are s_1.

Slide 19: Command Mapping (Right move)
δ(k, C) = (k_1, X, R):

  command c_{k,C}(s_i, s_{i+1})
    if own in A[s_i, s_{i+1}] and k in A[s_i, s_i] and C in A[s_i, s_i]
    then
      delete k from A[s_i, s_i];
      delete C from A[s_i, s_i];
      enter X into A[s_i, s_i];
      enter k_1 into A[s_{i+1}, s_{i+1}];
  end

Slide 20: Mapping
- Before the right move: A[s_1, s_1] = {A}, A[s_2, s_2] = {B}, A[s_3, s_3] = {C, k}, A[s_4, s_4] = {D, end}; the head is on cell 3, the current state is k, and the current symbol is C.

Slide 21: Mapping (Right Move)
- After δ(k, C) = (k_1, X, R), where k is the current state and k_1 the next state: A[s_3, s_3] = {X}, A[s_4, s_4] = {D, k_1, end}; the head is now on cell 4.

Slide 22: Command Mapping (Rightmost move)
δ(k_1, D) = (k_2, Y, R) at the right end of the tape becomes:

  command crightmost_{k,C}(s_i, s_{i+1})
    if end in A[s_i, s_i] and k_1 in A[s_i, s_i] and D in A[s_i, s_i]
    then
      delete end from A[s_i, s_i];
      create subject s_{i+1};
      enter own into A[s_i, s_{i+1}];
      enter end into A[s_{i+1}, s_{i+1}];
      delete k_1 from A[s_i, s_i];
      delete D from A[s_i, s_i];
      enter Y into A[s_i, s_i];
      enter k_2 into A[s_{i+1}, s_{i+1}];
  end
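The distinctive step here is subject creation: the simulated tape grows by one cell. A dict-based sketch of the rightmost-move command (my own rendering, not HRU's syntax; the new cell is given the blank symbol b, matching the figure on slide 23):

```python
# Simulate delta(k_1, D) = (k_2, Y, R) when s_i carries the "end" right:
# create a new subject s_{i+1} to extend the tape, move the "end" marker
# and the head (state k_2) onto it, and write Y at s_i.

def rightmost_move(A, si, si_new, k1, D, k2, Y):
    """Apply the rightmost-move command if its conditions hold; return A."""
    cell = A[(si, si)]
    if "end" in cell and k1 in cell and D in cell:
        cell -= {"end", k1, D}
        A[(si, si_new)] = {"own"}           # create s_{i+1}, owned by s_i
        A[(si_new, si_new)] = {"b", "end", k2}  # blank cell; end marker and head move right
        cell |= {Y}                          # Y written at s_i
    return A

A = {("s4", "s4"): {"D", "k1", "end"}}
rightmost_move(A, "s4", "s5", "k1", "D", "k2", "Y")
print(sorted(A[("s4", "s4")]))  # ['Y']
print(sorted(A[("s5", "s5")]))  # ['b', 'end', 'k2']
```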

Slide 23: Mapping (Rightmost Move)
- After δ(k_1, D) = (k_2, Y, R), where k_1 is the current state and k_2 the next state: subject s_5 is created, A[s_4, s_4] = {Y}, A[s_5, s_5] = {b, k_2, end}, and own in A[s_4, s_5].
- The tape is now A, B, X, Y plus a new blank cell 5; the head is on cell 5.

Slide 24: Rest of the Proof
- The protection system exactly simulates a TM:
- There is exactly one end right in the access control matrix.
- Only one right corresponds to a state.
- Thus, at most one command is applicable in each configuration of the TM.
- If the TM enters state q_f, then the right has leaked.
- If the safety question were decidable, we could represent any TM as above and determine whether q_f leaks.
- q_f leaks if and only if q_f appears in the matrix, if and only if the halting state is reached.
- Conclusion: the safety question is undecidable.

Slide 25: Security Policy
- Defines what it means for a system to be secure.
- Formally, a policy partitions a system into a set of secure (authorized) states and a set of non-secure (unauthorized) states.
- A secure system is one that starts in an authorized state and cannot enter an unauthorized state.

Slide 26: Secure System: Example
- A finite state machine with states A, B, C, D, where A, B, and C are authorized states and D is unauthorized.
- Is this finite state machine secure if A is the start state? If B is the start state? If C is the start state?
- How can this be made secure if it is not?
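The question on this slide reduces to reachability: the system is secure if and only if no unauthorized state is reachable from the start state. A small sketch; the transition relation below is illustrative, since the slide's exact transition diagram is not recoverable from the transcript.

```python
# A system is secure iff every state reachable from the start state
# is authorized (slide 25's definition applied to a finite state machine).

def reachable(transitions, start):
    """Return the set of states reachable from start."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def secure(transitions, start, authorized):
    """True iff no unauthorized state is reachable from start."""
    return reachable(transitions, start) <= authorized

# Hypothetical chain A -> B -> C -> D with D unauthorized.
transitions = {"A": ["B"], "B": ["C"], "C": ["D"]}
print(secure(transitions, "A", {"A", "B", "C"}))  # False: D is reachable from A
print(secure(transitions, "C", {"C", "D"}))       # True
```

Making an insecure FSM secure means removing (or mediating) the transitions that lead out of the authorized set.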

Slide 27: Additional Definitions
- Security breach: the system enters an unauthorized state.
- Let X be a set of entities and I be information.
- I has confidentiality with respect to X if no member of X can obtain information about I.
- I has integrity with respect to X if all members of X trust I:
  - trust in I, its conveyance, and its protection (data integrity);
  - I may be origin information or an identity (authentication);
  - I may be a resource, and its integrity implies it functions as it should (assurance).
- I has availability with respect to X if all members of X can access I:
  - time limits (quality of service).

Slide 28: Confidentiality Policy
- Also known as an information-flow policy.
- Concerns: transfer of rights; transfer of information without transfer of rights; temporal context.
- The model often depends on trust: the parts of the system where information could flow, and the trusted entity that must participate to enable the flow.
- Highly developed in military/government settings.

Slide 29: Integrity Policy
- Defines how information can be altered: the entities allowed to alter data, the conditions under which data can be altered, and the limits on changes to data.
- Examples: a purchase over $1,000 requires a signature; a check over $10,000 must be approved by one person and cashed by another.
- Separation of duties: for preventing fraud.
- Highly developed in the commercial world.
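The check-cashing example is a separation-of-duties rule, and it can be sketched directly. The thresholds come from the slide; the function name and signature are illustrative.

```python
# Separation of duties for the slide's example: a check over $10,000 must be
# approved by one person and cashed by a different person.

def cashing_allowed(amount, approver, casher):
    """Enforce the integrity rule for cashing a check."""
    if amount > 10_000 and approver == casher:
        return False   # the same person both approved and cashed: fraud risk
    return True

print(cashing_allowed(5_000, "alice", "alice"))   # True: below the threshold
print(cashing_allowed(20_000, "alice", "alice"))  # False: duties not separated
print(cashing_allowed(20_000, "alice", "bob"))    # True: two different people
```

The point of the rule is that defrauding the system then requires collusion between two people rather than one.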

Slide 30: Transaction-oriented Integrity
- Begin in a consistent state ("consistent" is defined by a specification).
- Perform a series of actions (a transaction); the actions cannot be interrupted.
- If the actions complete, the system is in a consistent state.
- If the actions do not complete, the system reverts to the beginning (consistent) state.
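The all-or-nothing behavior above can be sketched with a snapshot-and-revert pattern. A minimal illustration, not a real transaction manager; the names and the exception-based failure signal are my own choices.

```python
import copy

# Transaction-oriented integrity: apply every action or none. A snapshot of
# the consistent starting state is kept; if any action fails, revert to it.

def run_transaction(state, actions):
    """Apply all actions to state, or return the untouched starting state."""
    snapshot = copy.deepcopy(state)   # remember the consistent starting state
    try:
        for action in actions:
            action(state)
        return state                  # all actions completed: new consistent state
    except Exception:
        return snapshot               # interrupted: revert to the beginning state

def debit(s):  s["a"] -= 100
def credit(s): s["b"] += 100
def fail(s):   raise RuntimeError("interrupted")

print(run_transaction({"a": 500, "b": 0}, [debit, credit]))        # {'a': 400, 'b': 100}
print(run_transaction({"a": 500, "b": 0}, [debit, fail, credit]))  # {'a': 500, 'b': 0}
```

Real systems implement the same idea with write-ahead logs or shadow copies rather than full deep copies.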

Slide 31: Trust
- Theories and mechanisms rest on trust assumptions. For example, an administrator who installs a patch:
1. Trusts that the patch came from the vendor and was not tampered with in transit.
2. Trusts that the vendor tested the patch thoroughly.
3. Trusts that the vendor's test environment corresponds to the local environment.
4. Trusts that the patch is installed correctly.

Slide 32: Trust in Formal Verification
- Formal verification provides a formal mathematical proof that, given input i, program P produces output o as specified.
- Suppose a security-related program S is formally verified to work with operating system O. What are the assumptions?

Slide 33: Trust in Formal Methods
1. The proof has no errors (e.g., bugs in automated theorem provers).
2. The preconditions hold in the environment in which S is to be used.
3. S is transformed into an executable S' whose actions follow the source code (compiler bugs; linker/loader/library problems).
4. The hardware executes S' as intended (hardware bugs).

Slide 34: Security Mechanism
- A policy describes what is allowed; a mechanism is an entity or procedure that enforces (part of) the policy.
- Example policy: students should not copy homework. Mechanism: disallow access to files owned by other users.
- Does the mechanism enforce the policy?

Slide 35: Common Mechanisms: Access Control
- Discretionary Access Control (DAC): the owner determines access rights; typically identity-based access control, where the owner specifies the other users who have access.
- Mandatory Access Control (MAC): rules specify the granting of access; also called rule-based access control.
- Originator Controlled Access Control (ORCON): the originator controls access; the originator need not be the owner!
- Role-Based Access Control (RBAC): access is governed by the role the user assumes.
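The DAC/MAC contrast can be made concrete with two tiny checks. A hedged sketch: the ACL structure, the numeric clearance levels, and the "no read up" rule used for the MAC example are illustrative choices, not a real API.

```python
# DAC: the owner populates an ACL at their discretion; a request succeeds
# only if the owner granted that (user, object) pair the right.
def dac_allows(acl, requester, obj, right):
    return right in acl.get((requester, obj), set())

# MAC: a system-wide rule decides, independent of any owner. Here the rule
# is a simple "no read up": you may read an object only at or below your
# clearance level (levels are illustrative integers).
def mac_allows(labels, clearance, requester, obj):
    return clearance[requester] >= labels[obj]

acl = {("bob", "notes.txt"): {"read"}}            # alice, the owner, granted this
print(dac_allows(acl, "bob", "notes.txt", "read"))   # True: owner granted it
print(dac_allows(acl, "bob", "notes.txt", "write"))  # False: never granted

labels = {"report": 2}
clearance = {"carol": 1}
print(mac_allows(labels, clearance, "carol", "report"))  # False: reading up is denied
```

Under MAC, even the object's owner cannot override the rule, which is exactly the discretionary/mandatory distinction.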

