1 September 14, 2006 Lecture 3 IS 2150 / TEL 2810 Introduction to Security

2 What is a secure system?
A simple definition: a secure system does not allow violations of its security policy.
Alternative view: based on the distribution of rights to the subjects.
Leakage of rights (unsafe with respect to a right r):
- Assume that A, representing a secure state, does not contain the right r in any element of A.
- A right r is said to be leaked if a sequence of operations/commands adds r to an element of A that did not previously contain r.

3 What is a secure system?
Safety of a system with initial protection state X_0:
- Safe with respect to r: the system is safe with respect to r if r can never be leaked.
- Otherwise it is called unsafe with respect to the right r.

4 Safety Problem: formally
Given:
- initial state X_0 = (S_0, O_0, A_0)
- a set of primitive commands c
- r is not in A_0[s, o]
Can we reach a state X_n where there exist s, o such that A_n[s, o] includes a right r not in A_0[s, o]?
- If so, the system is not safe
- But is "safe" secure?
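As a minimal illustration (my own construction; the subjects, objects, and command below are made up, not from the lecture), the safety question asks whether any cell of the access matrix can gain a right r it did not hold initially:

```python
# Sketch of the safety question for a protection state X = (S, O, A).
A0 = {("alice", "file1"): {"own", "read"},
      ("bob", "file1"): set()}

def leaked(A_initial, A_now, r):
    """True if some cell of A_now holds r that A_initial did not."""
    return any(r in rights and r not in A_initial.get(cell, set())
               for cell, rights in A_now.items())

# One hypothetical command run: enter 'read' into A[bob, file1].
A1 = {cell: set(rights) for cell, rights in A0.items()}
A1[("bob", "file1")].add("read")

print(leaked(A0, A1, "read"))   # True: 'read' was leaked, so X_0 is unsafe w.r.t. read
```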

5 Decidability Results (Harrison, Ruzzo, Ullman)
Theorem: Given a system where each command consists of a single primitive command (mono-operational), there exists an algorithm that will determine if a protection system with initial state X_0 is safe with respect to right r.

6 Decidability Results (Harrison, Ruzzo, Ullman)
Proof sketch: determine the minimum number of commands needed to leak r.
- Delete/destroy: cannot cause a leak (and can be ignored)
- Create/enter: all new subjects/objects are "equal", so treat all new subjects as one
- There is no test for the absence of rights
- Tests on A[s_1, o_1] and A[s_2, o_2] have the same result as the same tests on A[s_1, o_1] and A[s_1, o_2], with A[s_1, o_2] = A[s_1, o_2] ∪ A[s_2, o_2]
- If a leak is possible, it can be done within k = n(|S_0|+1)(|O_0|+1)+1 commands, where n is the number of generic rights
- Enumerate all reachable states up to that bound to decide
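As a quick worked example of the bound (the numbers here are made up, not from the lecture):

```python
def hru_bound(n_rights, n_subjects, n_objects):
    """Sequence-length bound from the mono-operational proof:
    n(|S_0|+1)(|O_0|+1)+1, where n is the number of generic rights."""
    return n_rights * (n_subjects + 1) * (n_objects + 1) + 1

# E.g. 3 generic rights, 2 initial subjects, 4 initial objects:
print(hru_bound(3, 2, 4))   # 3 * 3 * 5 + 1 = 46: checking sequences up to this length suffices
```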

7 Decidability Results (Harrison, Ruzzo, Ullman)
It is undecidable if a given state of a given protection system is safe for a given generic right.
For the proof, we need Turing machines and the halting problem.

8 What is the implication?
Safety is decidable for some models, but are they practical?
Safety only works if the maximum rights are known in advance:
- The policy must specify all rights someone could get, not just what they have
- Where might this make sense?

9 Back to HRU: Fundamental questions
How can we determine that a system is secure?
- Need to define what we mean by a system being "secure"
Is there a generic algorithm that allows us to determine whether a computer system is secure?

10 Turing Machine & halting problem
The halting problem: Given a description of an algorithm and a description of its initial arguments, determine whether the algorithm, when executed with these arguments, ever halts (the alternative is that it runs forever without halting).
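The standard argument that no such decider can exist is the classic diagonalization. Below is a minimal sketch of that contradiction; `halts` is a hypothetical stub standing in for the decider we are assuming exists:

```python
def halts(program, argument):
    """Hypothetical decider we assume exists: returns True iff
    program(argument) eventually halts. No real implementation can exist."""
    raise NotImplementedError

def diagonal(program):
    # If `program` would halt on its own source, loop forever; otherwise halt.
    if halts(program, program):
        while True:
            pass
    return

# diagonal(diagonal) has no consistent answer:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
# if it is False, diagonal(diagonal) halts. Either way halts() is wrong.
```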

11 Turing Machine & halting problem
Reduce the halting problem to the safety problem:
If the safety problem were decidable, we could decide whether the TM ever halts, showing that the halting problem is decidable (contradiction).

12 Turing Machine
A TM is an abstract model of a computer (Alan Turing, 1936). A TM consists of:
- A tape divided into cells; infinite in one direction
- A set of tape symbols M; M contains a special blank symbol b
- A set of states K
- A head that can read and write symbols
- An action table that tells the machine:
  - what symbol to write
  - how to move the head ('L' for left and 'R' for right)
  - what the next state is

13 Turing Machine
The action table describes the transition function δ.
Transition function δ(k, m) = (k', m', L): in state k, the symbol m at the current tape location is replaced by m', the head moves left one square, and the TM enters state k'.
The halting state is q_f; the TM halts when it enters this state.
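A minimal simulator sketch of this definition (my own illustration, not from the slides; the toy transition table at the bottom is made up):

```python
def run_tm(delta, start, halt, tape, blank='b', max_steps=1000):
    """Run a Turing machine. `delta` maps (state, symbol) to
    (next_state, written_symbol, move) with move in {'L', 'R'}."""
    tape = dict(enumerate(tape))   # cell index -> symbol; missing cells are blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            return state, tape
        symbol = tape.get(head, blank)
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == 'R' else -1
    raise RuntimeError("no halt within step limit")

# Toy machine: scan right over 'A's, replace the first 'C' with 'X', then halt.
delta = {
    ('k', 'A'): ('k', 'A', 'R'),
    ('k', 'C'): ('qf', 'X', 'R'),
}
print(run_tm(delta, 'k', 'qf', list('AAC')))
```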

14 Turing Machine (example)
Tape cells 1-4 hold A, B, C, D; the head is on cell 3, the current state is k, so the current symbol is C.
Applying δ(k, C) = (k_1, X, R), where k_1 is the next state: C is overwritten with X and the head moves right to cell 4.
Applying δ(k_1, D) = (k_2, Y, L), where k_2 is the next state: what does the tape and head position look like now?

15 General Safety Problem
Theorem: It is undecidable if a given state of a given protection system is safe for a given generic right.
Proof: reduce the Turing machine (halting problem) to the safety problem:
- Symbols and states → rights
- Tape cell → subject
- Cell s_i holds symbol A → s_i has A rights on itself
- Rightmost cell s_k → s_k has end rights on itself
- State p, head at s_i → s_i has p rights on itself
- Distinguished right own: s_i owns s_{i+1} for 1 ≤ i < k
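To make the encoding concrete, here is a sketch (my own rendering; the cell names and helper function are illustrative, not part of the original construction) that builds the access control matrix for the configuration used in the mapping slides below (tape A B C D, head on cell 3, state k):

```python
def encode_config(tape, head, state):
    """Encode a TM configuration as an access control matrix (dict of cells)."""
    subjects = [f's{i}' for i in range(1, len(tape) + 1)]
    acm = {(s, t): set() for s in subjects for t in subjects}
    for i, symbol in enumerate(tape, start=1):
        acm[(f's{i}', f's{i}')].add(symbol)           # cell symbol as a right
    acm[(f's{head}', f's{head}')].add(state)           # state right marks the head
    acm[(subjects[-1], subjects[-1])].add('end')       # rightmost cell
    for i in range(1, len(tape)):
        acm[(f's{i}', f's{i+1}')].add('own')           # s_i owns s_{i+1}
    return acm

acm = encode_config(['A', 'B', 'C', 'D'], head=3, state='k')
print(sorted((cell, sorted(rights)) for cell, rights in acm.items() if rights))
```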

16 Command Mapping (Left move)
δ(k, C) = (k_1, X, L), if the head is not on the leftmost cell:
command c_{k,C}(s_i, s_{i-1})
  if own in A[s_{i-1}, s_i] and k in A[s_i, s_i] and C in A[s_i, s_i]
  then
    delete k from A[s_i, s_i];
    delete C from A[s_i, s_i];
    enter X into A[s_i, s_i];
    enter k_1 into A[s_{i-1}, s_{i-1}];
  end
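As an illustration only (my own rendering, with made-up cell names), the left-move command can be written as a conditional update on a matrix like the one in the encoding sketch above:

```python
# Sketch of the left-move command c_{k,C} as an update on an ACM dict.
def left_move(acm, s_prev, s_cur, k='k', C='C', X='X', k1='k1'):
    if ('own' in acm[(s_prev, s_cur)]
            and k in acm[(s_cur, s_cur)]
            and C in acm[(s_cur, s_cur)]):
        acm[(s_cur, s_cur)] -= {k, C}       # erase state right and old symbol
        acm[(s_cur, s_cur)].add(X)          # write the new symbol
        acm[(s_prev, s_prev)].add(k1)       # state right moves one cell left
    return acm

# Two-cell example: head on s2 in state k reading C; s1 holds B.
acm = {('s1', 's1'): {'B'}, ('s1', 's2'): {'own'},
       ('s2', 's1'): set(), ('s2', 's2'): {'C', 'k', 'end'}}
print(left_move(acm, 's1', 's2'))
```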

17 Mapping
Tape: cells 1-4 hold A, B, C, D; the head is on cell 3 and the current state is k, so the current symbol is C.
Access control matrix (rows and columns s_1 .. s_4):
A[s_1, s_1] = {A}, A[s_2, s_2] = {B}, A[s_3, s_3] = {C, k}, A[s_4, s_4] = {D, end}
A[s_1, s_2] = A[s_2, s_3] = A[s_3, s_4] = {own}; all other cells are empty.

18 Mapping (Left Move)
After δ(k, C) = (k_1, X, L), where k is the current state and k_1 the next state:
Tape: cells 1-4 hold A, B, X, D; the head has moved to cell 2.
A[s_1, s_1] = {A}, A[s_2, s_2] = {B, k_1}, A[s_3, s_3] = {X}, A[s_4, s_4] = {D, end}
A[s_1, s_2] = A[s_2, s_3] = A[s_3, s_4] = {own}
If the head is on the leftmost cell, both s_i and s_{i-1} are s_1.

19 Command Mapping (Right move)
δ(k, C) = (k_1, X, R):
command c_{k,C}(s_i, s_{i+1})
  if own in A[s_i, s_{i+1}] and k in A[s_i, s_i] and C in A[s_i, s_i]
  then
    delete k from A[s_i, s_i];
    delete C from A[s_i, s_i];
    enter X into A[s_i, s_i];
    enter k_1 into A[s_{i+1}, s_{i+1}];
  end

20 Mapping
Tape: cells 1-4 hold A, B, C, D; the head is on cell 3 and the current state is k, so the current symbol is C.
A[s_1, s_1] = {A}, A[s_2, s_2] = {B}, A[s_3, s_3] = {C, k}, A[s_4, s_4] = {D, end}
A[s_1, s_2] = A[s_2, s_3] = A[s_3, s_4] = {own}

21 Mapping
After δ(k, C) = (k_1, X, R), where k is the current state and k_1 the next state:
Tape: cells 1-4 hold A, B, X, D; the head has moved to cell 4.
A[s_1, s_1] = {A}, A[s_2, s_2] = {B}, A[s_3, s_3] = {X}, A[s_4, s_4] = {D, k_1, end}
A[s_1, s_2] = A[s_2, s_3] = A[s_3, s_4] = {own}

22 Command Mapping (Rightmost move)
δ(k_1, D) = (k_2, Y, R) at the end of the tape becomes:
command crightmost_{k_1,D}(s_i, s_{i+1})
  if end in A[s_i, s_i] and k_1 in A[s_i, s_i] and D in A[s_i, s_i]
  then
    delete end from A[s_i, s_i];
    create subject s_{i+1};
    enter own into A[s_i, s_{i+1}];
    enter end into A[s_{i+1}, s_{i+1}];
    delete k_1 from A[s_i, s_i];
    delete D from A[s_i, s_i];
    enter Y into A[s_i, s_i];
    enter k_2 into A[s_{i+1}, s_{i+1}];
  end

23 Mapping
After δ(k_1, D) = (k_2, Y, R), where k_1 is the current state and k_2 the next state:
Tape: cells 1-5 hold A, B, X, Y, b (a newly created blank cell); the head has moved to cell 5.
A[s_1, s_1] = {A}, A[s_2, s_2] = {B}, A[s_3, s_3] = {X}, A[s_4, s_4] = {Y}, A[s_5, s_5] = {b, k_2, end}
A[s_1, s_2] = A[s_2, s_3] = A[s_3, s_4] = A[s_4, s_5] = {own}

24 Rest of Proof
The protection system exactly simulates a TM:
- Exactly 1 end right in the ACM
- Only 1 right corresponds to a state
- Thus, at most 1 command is applicable in each configuration of the TM
If the TM enters state q_f, then the right has leaked.
If the safety question were decidable, we could represent the TM as above and determine whether q_f leaks:
leaks halting state → halting state in the matrix → halting state reached.
Conclusion: the safety question is undecidable.

25 Security Policy
Defines what it means for a system to be secure.
Formally: partitions the system's states into
- a set of secure (authorized) states
- a set of non-secure (unauthorized) states
A secure system is one that
- starts in an authorized state
- cannot enter an unauthorized state

26 Secure System - Example
Is this finite state machine secure? (Figure: states A, B, C, D with transitions; A, B, and C are authorized states, D is unauthorized.)
- If A is the start state?
- If B is the start state?
- If C is the start state?
- How can this be made secure if not?
- Suppose A, B, and C are authorized states?
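A sketch of how such a question can be answered mechanically: check whether any unauthorized state is reachable from the start state. The transition edges below are my own assumption, since the figure's arrows are not reproduced here.

```python
from collections import deque

def is_secure(start, transitions, authorized):
    """True iff no state reachable from `start` is unauthorized."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state not in authorized:
            return False
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

transitions = {'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': []}  # assumed edges
authorized = {'A', 'B', 'C'}
for start in 'ABC':
    print(start, is_secure(start, transitions, authorized))
```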

27 Additional Definitions
Security breach: the system enters an unauthorized state.
Let X be a set of entities and I be information.
- I has confidentiality with respect to X if no member of X can obtain information about I.
- I has integrity with respect to X if all members of X trust I.
  - Trust in I, its conveyance and protection (data integrity)
  - I may be origin information or an identity (authentication)
  - I is a resource: its integrity implies it functions as it should (assurance)
- I has availability with respect to X if all members of X can access I.
  - Time limits (quality of service)

28 Confidentiality Policy
Also known as an information flow policy:
- Transfer of rights
- Transfer of information without transfer of rights
- Temporal context
The model often depends on trust:
- Parts of the system where information could flow
- A trusted entity must participate to enable the flow
Highly developed in military/government settings.

29 Integrity Policy
Defines how information can be altered:
- Entities allowed to alter data
- Conditions under which data can be altered
- Limits on changes to data
Examples:
- A purchase over $1,000 requires a signature
- A check over $10,000 must be approved by one person and cashed by another
- Separation of duties: for preventing fraud
Highly developed in the commercial world.
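A minimal sketch (my own, with made-up names and a hypothetical threshold parameter) of the separation-of-duties rule in the check-cashing example:

```python
def may_cash(check_amount, approved_by, cashed_by, threshold=10_000):
    """Allow cashing only if a large check was approved by someone else."""
    if check_amount <= threshold:
        return True
    return approved_by is not None and approved_by != cashed_by

print(may_cash(15_000, approved_by="alice", cashed_by="bob"))    # True
print(may_cash(15_000, approved_by="alice", cashed_by="alice"))  # False: same person
```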

30 Transaction-oriented Integrity
- Begin in a consistent state ("consistent" is defined by the specification)
- Perform a series of actions (a transaction); the actions cannot be interrupted
- If the actions complete, the system is in a consistent state
- If the actions do not complete, the system reverts to the beginning (consistent) state
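A minimal sketch of this all-or-nothing behavior, assuming a toy account-transfer example (the accounts, helper names, and failure condition are illustrative):

```python
import copy

def run_transaction(state, actions):
    """Apply every action, or roll back to the original (consistent) state."""
    snapshot = copy.deepcopy(state)
    try:
        for action in actions:
            action(state)
        return state                 # consistent: all actions completed
    except Exception:
        return snapshot              # consistent: revert to starting state

def debit(acct, amount):
    def action(state):
        if state[acct] < amount:
            raise ValueError("insufficient funds")
        state[acct] -= amount
    return action

def credit(acct, amount):
    def action(state):
        state[acct] += amount
    return action

balances = {"checking": 50, "savings": 0}
# Transfer 100: the debit fails, so the whole transfer is rolled back.
print(run_transaction(balances, [debit("checking", 100), credit("savings", 100)]))
```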

31 Trust
Theories and mechanisms rest on some trust assumptions.
An administrator installing a patch:
1. Trusts the patch came from the vendor and was not tampered with in transit
2. Trusts the vendor tested the patch thoroughly
3. Trusts the vendor's test environment corresponds to the local environment
4. Trusts the patch is installed correctly

32 Trust in Formal Verification
Formal verification provides a formal mathematical proof that, given input i, program P produces output o as specified.
Suppose a security-related program S is formally verified to work with operating system O.
What are the assumptions?

33 Trust in Formal Methods
1. The proof has no errors
   - Bugs in automated theorem provers
2. Preconditions hold in the environment in which S is to be used
3. S is transformed into an executable S' whose actions follow the source code
   - Compiler bugs, linker/loader/library problems
4. Hardware executes S' as intended
   - Hardware bugs

34 Security Mechanism
Policy describes what is allowed.
A mechanism is an entity/procedure that enforces (part of) the policy.
Example:
- Policy: students should not copy homework
- Mechanism: disallow access to files owned by other users
Does the mechanism enforce the policy?

35 Common Mechanisms: Access Control
Discretionary Access Control (DAC)
- Owner determines access rights
- Typically identity-based access control: the owner specifies other users who have access
Mandatory Access Control (MAC)
- Rules specify the granting of access
- Also called rule-based access control
Originator Controlled Access Control (ORCON)
- Originator controls access; the originator need not be the owner!
Role Based Access Control (RBAC)
- Identity is governed by the role the user assumes
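To contrast two of these, here are minimal sketches (my own; the users, roles, and permissions are made-up examples, not from the lecture) of a DAC check and an RBAC check:

```python
# DAC: the object's owner maintains its access list.
acl = {"report.txt": {"owner": "alice", "readers": {"bob"}}}

def dac_can_read(user, obj):
    entry = acl[obj]
    return user == entry["owner"] or user in entry["readers"]

# RBAC: permissions attach to roles; users act in roles.
role_permissions = {"grader": {"read_homework"}, "student": set()}
user_roles = {"carol": "grader", "dave": "student"}

def rbac_can(user, permission):
    return permission in role_permissions.get(user_roles.get(user), set())

print(dac_can_read("bob", "report.txt"))      # True: alice granted bob access
print(rbac_can("dave", "read_homework"))      # False: the student role lacks it
```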