1
Lesson 21: Security Certifications and Building a Secure System… in search of the Holy Grail?
2
Building Secure Systems
Vulnerabilities are frequently found in operating systems and application programs.
The problem is not new; we have always had problems with the security of our systems and flaws in the operating system.
Is it really that hard to build a secure system?
3
Why systems are not secure
Security is fundamentally difficult.
–What is adequate for most functions isn't adequate for security; “good enough” doesn't apply to security.
Security is often (usually) an afterthought.
Security is viewed as an impediment.
False solutions impede progress.
–The industry is subject to ‘fads’; quick fixes cause us to become complacent.
Technology is oversold; the problem is often with the people, not the computers.
Errors are made and not found (inadequate testing, poor programming techniques).
4
Example of poor programming/errors
Buffer Overflows
–result of poor programming practice
–use of functions such as gets and strcpy, which don't check input against the buffer's bounds
–may allow an individual to gain root or admin access
5
Sample Buffer Overflow Example

#include <stdio.h>
#include <string.h>

/* strcpy copies the argument into a 20-byte stack buffer with no bounds
   check, so a longer argument overruns the buffer and can overwrite the
   saved return address. Passing the buffer straight to printf is a second
   bug (a format-string vulnerability). */
void func(char *p) {
    char stack_temp[20];
    strcpy(stack_temp, p);
    printf(stack_temp);
}

int main(int argc, char *argv[]) {
    func("I AM MORE THAN TWENTY CHARACTERS LONG!");
    return 0;
}
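For contrast, a minimal hardened sketch of the same routine (hypothetical, not from the original slides): the copy is bounded to the buffer size and the data is printed through an explicit format string.

#include <stdio.h>
#include <string.h>

void func_safe(const char *p) {
    char stack_temp[20];
    /* Copy at most sizeof(stack_temp) - 1 bytes and terminate explicitly. */
    strncpy(stack_temp, p, sizeof(stack_temp) - 1);
    stack_temp[sizeof(stack_temp) - 1] = '\0';
    /* Print the data as an argument, never as the format string itself. */
    printf("%s\n", stack_temp);
}

int main(void) {
    func_safe("I AM MORE THAN TWENTY CHARACTERS LONG!");  /* prints a truncated copy */
    return 0;
}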
6
Buffer Overflows
[diagram: normal flow. The program calls subroutine A, which reads variable data into a buffer on the process stack; the stack also holds the saved return address used when A returns.]
7
Buffer Overflows
[diagram: overflowed flow. The variable data read by subroutine A overruns its buffer and overwrites the saved return address on the process stack with a new address, so the return jumps to another routine of the attacker's choosing.]
8
Exploits
Buffer Overflows
–fingerd, statd, talkd, …
–result of poor programming practice
Shell Escapes
–a special character in the input string causes an escape to the shell (see the sketch below)
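A minimal sketch of the shell-escape problem (hypothetical example, not taken from the slides): untrusted input is pasted into a command line, so a metacharacter such as ';' lets the caller run an arbitrary command with the program's privileges.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <username>\n", argv[0]);
        return 1;
    }
    char cmd[256];
    /* UNSAFE: argv[1] is inserted verbatim into a shell command line.
       Input such as "alice; rm -rf ~" escapes to the shell and runs rm. */
    snprintf(cmd, sizeof(cmd), "finger %s", argv[1]);
    system(cmd);
    return 0;
}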
9
Cover your tracks
Adjust log files
RootKit
–contains a sniffer, a login program that disables logging, and “hacker” versions of several system utilities (e.g., ps)
Looping
–launch the attack from another system you have already compromised, thus hiding the trail
–greatly increases the complexity of tracking
10
Security Kernel
The HW and SW that implement the “reference monitor.”
All accesses that subjects make to objects are authorized based on information in an access control database.
The specific checks that are made, and all modifications to the access control database, are controlled by the reference monitor in accordance with the established security policy.
[diagram: subjects' accesses to objects are mediated by the reference monitor, which consults the access control database and records decisions in an audit file]
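As a rough illustration only (hypothetical names and data, not part of the TCSEC text), a reference monitor reduces to a single mediation function: every access request goes through it, the decision comes only from the access control database, and every decision is written to the audit file.

#include <stdio.h>
#include <stdbool.h>

typedef enum { READ = 1, WRITE = 2 } access_t;

/* Tiny access control database: which subject may access which object, and how. */
struct ac_entry { int subject; int object; int allowed; };

static const struct ac_entry acdb[] = {
    { 1, 10, READ | WRITE },
    { 2, 10, READ },
};

/* Reference monitor: mediates every access and audits every decision. */
static bool monitor_check(int subject, int object, access_t mode) {
    bool granted = false;
    for (size_t i = 0; i < sizeof(acdb) / sizeof(acdb[0]); i++) {
        if (acdb[i].subject == subject && acdb[i].object == object &&
            (acdb[i].allowed & mode) == mode) {
            granted = true;
            break;
        }
    }
    /* Audit file: record the request and its outcome. */
    fprintf(stderr, "audit: subject=%d object=%d mode=%d -> %s\n",
            subject, object, (int)mode, granted ? "grant" : "deny");
    return granted;
}

int main(void) {
    monitor_check(1, 10, WRITE);  /* granted */
    monitor_check(2, 10, WRITE);  /* denied  */
    return 0;
}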
11
Three Principles for Security Kernels
Completeness: it must be impossible to bypass.
–All access to information must be mediated by the reference monitor.
Isolation: it must be tamperproof.
–The OS and the reference monitor should “protect” themselves from modification.
Verifiability: it must be shown to be properly implemented.
–Good software engineering practices.
–Simplicity of function in the kernel.
–Minimize the size of the kernel.
12
The “Orange Book”
The NCSC (NSA) developed the Trusted Computer System Evaluation Criteria (TCSEC).
Designed to meet three objectives:
–to provide guidance to manufacturers as to what security features to build into their products
–to provide DoD customers with a metric to evaluate the degree of trust they could place in a computer system
–to provide a basis for specifying security requirements in acquisition specifications
13
The Orange Book
Particular emphasis is on preventing unauthorized disclosure of information.
Based on the Bell-LaPadula security model:
–Simple Security Condition: allows a subject read access to an object only if the security level of the subject dominates the security level of the object.
–*-Property: allows a subject write access to an object only if the security level of the subject is dominated by the security level of the object. Also known as the Confinement Property.
“No Read Up / No Write Down”
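A minimal sketch of the two Bell-LaPadula checks (hypothetical encoding; it assumes a strict linear ordering of levels, whereas the full model uses a lattice with category sets):

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical linear ordering of security levels. */
typedef enum { UNCLASSIFIED = 0, CONFIDENTIAL, SECRET, TOP_SECRET } level_t;

/* Simple Security Condition (no read up): the subject's level must dominate the object's. */
static bool may_read(level_t subject, level_t object)  { return subject >= object; }

/* *-Property (no write down): the subject's level must be dominated by the object's. */
static bool may_write(level_t subject, level_t object) { return subject <= object; }

int main(void) {
    printf("SECRET reads  CONFIDENTIAL: %s\n", may_read(SECRET, CONFIDENTIAL)  ? "allow" : "deny");
    printf("SECRET writes CONFIDENTIAL: %s\n", may_write(SECRET, CONFIDENTIAL) ? "allow" : "deny");
    return 0;
}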
14
The Orange Book
“Trusted Computing Base” concept
7 levels:
–D: Minimal Protection
–C1: Discretionary Security Protection
–C2: Controlled Access Protection
–B1: Labeled Security Protection
–B2: Structured Protection
–B3: Security Domains
–A1: Verified Protection
15
The Orange Book
16
Fundamental Requirements
Policy
–Security Policy
–Marking
Accountability
–Identification: individual subjects must be identified.
–Accountability: audit information must be selectively kept and protected.
Assurance
–Assurance: the system must contain HW/SW mechanisms that can be independently evaluated to ensure that the system enforces the requirements.
–Continuous Protection: the mechanisms that enforce protection must be protected against tampering and unauthorized changes.
Documentation
17
Division C Class C1
19
Division C Class C2
20
Division B Class B1
21
TCSEC Summary
22
The “Yellow Book”
Open: application developers do not have sufficient clearance or authorization to provide an acceptable presumption that they have not introduced malicious logic.
Closed: application developers (including maintainers) do have the clearance/authorization.
23
The “Yellow Book”
24
The “Red Book”
Trusted Network Interpretation (TNI)
Two parts:
–interprets the Orange Book for networks (interpretation and rationale)
–describes additional security services that arise with networks
25
Division C Class C1
26
The Network Security Services
27
Issues with Any Certification
Certifications take time; thus they generally have a hefty price associated with them.
By the time the product is evaluated, it's obsolete.
Who gets to do the evaluation? Lots of folks don't want the government poking around their product, but can you trust some other company?
Certifications are for a single release; if you release a new version, it will need to be evaluated too.
28
The ITSEC and Common Criteria
After the TCSEC was published, several European countries issued their own criteria, leading to the Information Technology Security Evaluation Criteria (ITSEC).
The ITSEC had a number of improvements:
–permitted new feature definitions and functionalities
–accommodated commercial evaluation facilities
Soon the U.S. was preparing to update the TCSEC. Instead of multiple standards, how about a joint one?
Thus, the birth of the Common Criteria.
29
Common Criteria
31
Has 7 Evaluation Assurance Levels (EALs):
–EAL1: functionally tested
–EAL2: structurally tested
–EAL3: methodically tested and checked
–EAL4: methodically designed, tested, and reviewed
–EAL5: semiformally designed and tested
–EAL6: semiformally verified design and tested
–EAL7: formally verified design and tested
Any collection of components can be combined with an EAL to form a Protection Profile, which defines an implementation-independent set of security requirements and objectives.
32
Example Level Description
EAL1 - functionally tested
EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. It will be of value where independent assurance is required to support the contention that due care has been exercised with respect to the protection of personal or similar information. This level provides an evaluation of the Target of Evaluation (TOE) as made available to the consumer, including independent testing against a specification and an examination of the guidance documentation provided.
33
EAL2 - structurally tested
EAL2 requires the cooperation of the developer in terms of the delivery of design information and test results, but should not demand more effort on the part of the developer than is consistent with good commercial practice. As such, it should not require a substantially increased investment of cost or time. EAL2 is applicable in those circumstances where developers or users require a low to moderate level of independently assured security in the absence of ready availability of the complete development record. Such a situation may arise when securing legacy systems, or where access to the developer may be limited.
34
EAL3 - methodically tested and checked
EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices. It is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the TOE and its development without incurring substantial reengineering costs. An EAL3 evaluation provides an analysis supported by "grey box" testing, selective confirmation of the developer test results, and evidence of a developer search for obvious vulnerabilities. Development environment controls and TOE configuration management are also required.
35
ICSA Certification
ICSA Labs initiated a program for certifying IT products against a set of industry-accepted, de facto standards.
Standards are developed with input from security experts, vendors, developers, and users.
Targets threats that actually occur frequently, not postulated ones (think covert channels).
The goal is criteria appropriate for 80% of customers.
Has a mechanism for certification of future versions.
36
ICSA
39
ICSA - IDS
41
System Security Engineering Capability Maturity Model
SSE-CMM: “Describes the essential characteristics of an organization’s security engineering process that must exist to ensure good security engineering.”
Began as an NSA-sponsored effort in 1993.
The model is a standard metric for security engineering practices covering:
–the entire life cycle
–the whole organization
–interactions with other organizations, including acquisitions, system management, certification, …
–interactions with other disciplines
42
SSE-CMM
Intended to be used as a:
–tool for engineering organizations to evaluate their security engineering practices and define improvements to them
–standard mechanism for customers to evaluate a provider’s security engineering capability
–basis for security engineering evaluation organizations to establish capability-based confidences or assurance
43
SSE-CMM Capability Levels
0: Not Performed
1: Performed Informally
–Base practices performed
2: Planned & Tracked
–Committing to perform
–Planning performance
–Disciplined performance
–Tracking performance
–Verifying performance
3: Well Defined
–Defining a standard process
–Tailoring a standard process
–Using data
–Performing a defined process
4: Quantitatively Controlled
–Establishing measurable quality goals
–Determining process capability to achieve goals
–Objectively managing performance
5: Continuously Improving
–Establishing quantitative process effectiveness goals
–Improving process effectiveness
44
21 SSE-CMM Process Areas (some examples)
Administer Security Controls
–Establish security responsibilities
–Manage security configuration
–Manage security awareness, training, and education programs
–Manage security services and control mechanisms
Attack Security
–Scope attack
–Develop attack scenarios
–Perform attacks
–Synthesize attack results
Monitor System Security Posture
–Analyze event records
–Monitor changes
–Identify security incidents
–Manage security incident response
45
Summary
What is the importance and significance of this material?
How does this topic fit into the subject of “Voice and Data Security”?