Design and Implementation Principles for Secure Systems.

1 Design and Implementation Principles for Secure Systems

2 Readings: Secure Design Principles, https://buildsecurityin.us-cert.gov/portal/article/knowledge/principles/overview.xml

3 Categories of exploitable flaws; Design philosophy; Design DO's; Design and Implementation DON'Ts

4 Categories of Exploitable Flaws 1. Architecture and design flaws: things overlooked or not planned for in the original design. 2. Implementation flaws: mistakes in coding. 3. Operational flaws: procedural problems in how the software is used or administered.

5 Architecture/Design Attacks Program is used in a new way. Two utilities piped together with unintended consequences. Replay attack. Man-in-the-middle attack / session hijacking. Sniffer attack: an eavesdropper gathers information over time. Race condition (between time of authentication and time of use); incomplete mediation.

6 Implementation Level Attacks Buffer overflow: a fixed-length buffer is allocated by the program and, at runtime, the attacker fills the buffer with more data than it can hold. Defense: length-check all input (a sketch of such a check follows). Back door attack: many programs have special code that allows access controls to be bypassed for administration or maintenance.
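
A minimal sketch of the length-check defense, assuming a fixed-size buffer; the NAME_LEN limit and the store_name function are illustrative, not from the slides:

#include <stdio.h>
#include <string.h>

#define NAME_LEN 64                /* illustrative fixed buffer size */

/* Copy untrusted input into a fixed buffer only after checking its length. */
int store_name(char dest[NAME_LEN], const char *input)
{
    if (input == NULL || strlen(input) >= NAME_LEN) {
        fprintf(stderr, "input too long, rejected\n");
        return -1;                 /* refuse oversized input instead of overflowing */
    }
    strcpy(dest, input);           /* safe: length was checked above */
    return 0;
}

int main(void)
{
    char name[NAME_LEN];
    return store_name(name, "Alice") == 0 ? 0 : 1;
}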

7 Implementation Level Attacks Parsing error attack: the result of incomplete checking of input. Example: a program allows input of a path (mydir/myfile.txt) but doesn't protect against ../ ("up a level") components or their various hexadecimal or Unicode encodings (a path-validation sketch follows).
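
A minimal path-validation sketch, assuming the input has already been decoded to plain ASCII (a real check would first canonicalize hexadecimal and Unicode encodings); the path_is_safe function is illustrative:

#include <stdio.h>
#include <string.h>

/* Reject a user-supplied path that tries to escape the allowed directory.
   Assumes the path has already been decoded to plain ASCII. */
int path_is_safe(const char *path)
{
    if (path[0] == '/')                 /* must stay relative to the base directory */
        return 0;
    if (strstr(path, "..") != NULL)     /* no "up a level" components */
        return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", path_is_safe("mydir/myfile.txt"));   /* prints 1: accepted */
    printf("%d\n", path_is_safe("../etc/passwd"));      /* prints 0: rejected */
    return 0;
}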

8 Operations Level Attacks Denial-of-service (can also be a design-level flaw): resources get used up. Default accounts attack: many programs are shipped configured with standard accounts and passwords; guest/guest and field/service are examples. Password cracking.

9 Design Philosophy for High-Assurance Software

10 Learn From the Real World Real-world security decisions are based on value, locks, and police. You need locks good enough that the "bad guys" can't break in too often; the terms "good enough," "break in," and "too often" are key. Assume that the police and courts work, so "bad guys" are caught and punished ("police" in this context is any agency that might pursue offenders). The expected gain from committing a crime must be negative. Value is important: generally we do not protect things of little value.
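
An illustrative calculation (the numbers are invented): if a break-in nets $1,000 and succeeds 90% of the time, but carries a 10% chance of being caught and fined $50,000, the expected gain is (0.9 × $1,000) − (0.1 × $50,000) = −$4,100, so a rational "bad guy" is deterred.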

11 Locks A constraint on any security mechanism is that it be minimally disruptive. Locks must not be difficult or annoying to use; if they are, people will find ways to circumvent the annoyance and thus nullify the security protections. Risk management allows recovery from a security problem and decreases the need for complex and annoying locks. For example, rather than installing a complicated locking system for automobiles, we buy auto insurance to help deal with costs that arise in the event of damage or theft.

12 Locks and Security The environment also influences the type and strength of the locks used. People pay for the security they believe they need. Security is scaled with respect to both the value of the thing being secured and the threat against it.

13 Application to Computer Security Authentication and authorization, along with cryptographic techniques, are the locks. Confidentiality, integrity, and availability are the value to be protected. Audit logs and computer forensics techniques play the role of the police.

14 Design Principles: the Do’s Economy of Mechanism Open Design Principle of Least Privilege Complete Mediation Separation of Privilege Failsafe Defaults Diversity of Mechanism Multiple Lines of Defense

15 Economy of Mechanism Use simple and straightforward mechanisms wherever possible. Avoid a lot of extra features. Bells and whistles add complication, and that introduces errors. The simplest mechanism that does the job is the best.

16 Open Design No security through obscurity: the security of a mechanism should not depend on an attacker's ignorance of how the mechanism works or is built. This principle is controversial; showing a design or source code to attackers certainly makes their task easier. The NSA, for instance, refused to publish its analysis that the DES encryption algorithm is secure. Does this make you more confident in the security of DES? Publicizing the design gives security researchers the opportunity to find and fix flaws before the attackers do.

17 Principle of Least Privilege Every program and every user should operate using the minimum set of privileges necessary to complete the job. This limits the damage that can result from an accident or error, and it limits the number of privileged programs (which could be compromised) in the system. It also helps in debugging, is good for increasing assurance, and allows isolation of small critical subsystems.
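
A minimal sketch of least privilege on a Unix-like system: do the one task that needs root, then permanently give up root. The uid/gid value 1001 and the drop_privileges name are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Permanently drop from root to an unprivileged account (uid/gid 1001 is
   illustrative).  setgid must come first, while we still have the right
   to change groups. */
static void drop_privileges(void)
{
    if (setgid(1001) != 0 || setuid(1001) != 0) {
        perror("failed to drop privileges");
        exit(1);               /* failsafe: refuse to keep running as root */
    }
}

int main(void)
{
    /* ... privileged setup (e.g. binding a low port) would happen here ... */
    drop_privileges();
    /* ... the rest of the program runs with only the privileges it needs ... */
    return 0;
}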

18 Complete Mediation Every access to every object must be checked. Thus, some set of interfaces must be monitored; the interface might be defined in terms of method calls or in terms of individual instructions. Example: a process for user A opens a file; user A's account is terminated, revoking all of his privileges; the process then accesses the file, which has been open for days, and the user's privilege is never re-verified.

19 Separation of Privilege Two keys are better than one. Each privilege should require a separate secret: separate passwords for separate objects. This allows fine-grained control of access to the system and limits what can be compromised if a single secret is revealed. It can, however, be overly cumbersome for the user.

20 Failsafe Defaults The default option is to maintain security: if the system fails, it fails safely (because attackers may be able to make it fail). It is much better (and less prone to error) to define who can have access than to directly define who cannot. Problem: a user may need a privilege that was not anticipated, so his work is delayed while the privilege is authorized.
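
A minimal default-deny sketch: access is defined by who is explicitly allowed, and everyone else is rejected. The is_authorized function and the user list are illustrative:

#include <stdbool.h>
#include <string.h>

/* Failsafe default: only users on the explicit allow list get access. */
static const char *allowed_users[] = { "alice", "bob" };

bool is_authorized(const char *user)
{
    for (size_t i = 0; i < sizeof(allowed_users) / sizeof(allowed_users[0]); i++) {
        if (strcmp(user, allowed_users[i]) == 0)
            return true;
    }
    return false;   /* anything not explicitly allowed is denied */
}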

21 Diversity of Mechanism Security mechanisms that have the same design or follow similar logic are likely to fail in similar ways (hence at the same time, or to the same attacker trick). Diverse mechanisms are unlikely to share vulnerabilities; with diverse mechanisms, the odds are increased that no single vulnerability is common to all.

22 Multiple Lines of Defense Unfortunately, no security mechanism is totally secure. Plan for something to fail and have a second (and third) line of defense. Example: try to keep the bad guys out (firewall), but if they do get in, minimize the harm they can do (strict access controls), and if they still manage to get access, have good audit logs so you can track them down and prosecute them.

23 A Practical Example If you must use an existing, unsafe application, use it with a wrapper. A wrapper is one way to make an existing application more secure.

24 Wrappers Make the real application inaccessible. Replace the old access path with a small program or script that: 1. checks the command-line parameters, 2. prepares a restricted runtime environment for the target application, and 3. invokes the target application.

25 Overflow wrapper auscert@auscert.org.au ftp://ftp.auscert.org.au/pub/auscert/tools/overflow/overflow_wrapper.c This wrapper is designed to limit exploitation of programs that have command-line argument buffer overflow vulnerabilities. The vulnerable program is replaced by the wrapper; the original vulnerable program is moved to another location and its permissions are restricted. The wrapper checks each argument's length to ensure it doesn't exceed a given limit before executing the original program.

26 overflow_wrapper.c (cleaned up):

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

/* MAXARGLEN and REAL_PROG are supplied at compile time, e.g.
   cc -DMAXARGLEN=128 -DREAL_PROG='"/usr/lib/real_prog"' overflow_wrapper.c */

int main(int argc, char *argv[], char *envp[])
{
    int i;

    /* Reject any command-line argument longer than MAXARGLEN. */
    for (i = 0; i < argc; i++) {
        if (strlen(argv[i]) > MAXARGLEN) {
            fprintf(stderr, "exceeded argument length...Exiting\n");
#ifdef SYSLOG
            /* log error */
#endif
            exit(1);
        }
    }

    /* All arguments are within bounds: run the real program. */
    execve(REAL_PROG, argv, envp);
    perror("execve failed");
    exit(1);
}

27 Bad Practices: The Don'ts Using Files and File Names; Invoking Other Programs; Handling Passwords; Storage and Memory; Other

28 Using Files and Filenames Don’t write code that uses relative filenames. Filename references should be fully qualified, that is, start with “/” or “\”. Relative filenames can make it possible to change the file referred to by changing the “current” or “working” directory. Don’t refer to a file twice in the same program by its name. Open the file once by name, then use the file handle from then on. Otherwise, race conditions can occur or an attacker could switch files.
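
A minimal sketch of the "open once, then use the handle" rule on a Unix-like system; the configuration path and the read_config function are illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Open the file once by its fully qualified name, then do every later check
   and read through the descriptor, so an attacker cannot swap the file
   between two name lookups. */
int read_config(void)
{
    int fd = open("/etc/myapp/config", O_RDONLY);         /* illustrative path */
    if (fd < 0) {
        perror("open");
        return -1;
    }

    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {    /* check via the handle */
        close(fd);
        return -1;
    }

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);           /* read via the handle */
    close(fd);
    return (n < 0) ? -1 : 0;
}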

29 Files and Filenames Don't rely on operating-system-level file protection mechanisms alone to prevent unauthorized file access. It is good practice to use OS-provided access control mechanisms, but your application should also enforce its own access control in case the OS is compromised.

30 Invoking Other Programs Don't invoke untrusted programs from within trusted ones. It is almost always better to do the work yourself and refrain from taking what looks like a shortcut; you don't know what the untrusted program might do on your behalf, and it may invoke a third program with many security flaws. Don't invoke a shell or command line. If you absolutely must, scrub all state information (such as the environment) before the shell escape.

31 Handling Passwords Don't keep sensitive data in a database without password protection; don't rely on DB access controls alone. Don't echo passwords or display them on the screen for any reason (some web sites do this). Don't issue passwords via email; this common practice reduces the level of protection to that of the user's mail folder (not very secure). Don't code usernames or passwords into an application; if you absolutely must, be sure they can't be read from the object code.

32 Storage and Memory Don't use world-writable storage, even temporarily; it gives attackers a chance to examine anything you put there. Don't store passwords or sensitive data on disk (even temporarily) without encryption, for the same reason. Don't trust user-writable storage not to be tampered with; assume your user is malicious.

33 Other Don't assume that your users are not malicious, even if you are writing for a small set of in-house, trusted employees. Check all inputs, especially input from other programs. Don't "dump core"; this used to be the answer to unfixable problems, but attackers can create unfixable problems and then use the core dump. Don't assume success: after a system call, check the exit conditions and handle errors (a sketch follows). Not checking can lead to race conditions, file overwrites, and other exploitable flaws.
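
A minimal sketch of not assuming success: check every return value and handle short writes; the write_all helper is illustrative:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Never assume a write succeeded or wrote everything: check the return
   value, retry after interrupts, and report real failures. */
int write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;              /* interrupted: retry */
            perror("write");
            return -1;                 /* handle the error instead of ignoring it */
        }
        done += (size_t)n;
    }
    return 0;
}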

34 Other Don't confuse "random" with "pseudo-random". Most random number generators are really pseudo-random: statistically random, but predictable, and predictability is death in cryptography. Don't authenticate on untrusted data; IP addresses and email addresses can be spoofed and hence do not identify a process or user. Don't transmit passwords or sensitive data without encryption.
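
A minimal sketch of choosing a cryptographic source over a predictable pseudo-random generator, assuming a Unix-like system with /dev/urandom; the get_key name is illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* rand()/random() are predictable pseudo-random generators; never use them
   for keys or session tokens.  Read from the kernel's cryptographic source
   instead. */
int get_key(unsigned char *key, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, key, len);
    close(fd);
    return (n == (ssize_t)len) ? 0 : -1;
}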

