CSC 482/582: Computer Security
Secure Design Principles
Topics
Categories of Security Flaws: Architecture/Design, Implementation, Operational
Software Security: More than Just Coding
Secure Design Principles
Design Issues in Legacy Code
Case Study: Sendmail vs. Postfix
Categories of Security Flaws
Architectural/design-level flaws: security issues that the original design did not consider or did not solve correctly.
Implementation flaws: errors made in coding the design.
Operational flaws: problems arising from how the software is installed or configured.
Architecture/Design Flaws
Race condition: the application checks access control and then accesses a file as two separate steps, permitting an attacker to race the program and substitute an accessible file for one that is not allowed (see the sketch below).
Replay attack: if an attacker can record a transaction between a client and a server, then later replay part of the conversation without the application detecting it, a replay attack is possible.
Sniffing: since only authorized users could directly access the network in the original Internet, protocols like telnet send passwords in the clear.
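The race condition described above is the classic time-of-check-to-time-of-use (TOCTOU) flaw. Below is a minimal C sketch, assuming a hypothetical setuid helper that serves files on behalf of the invoking user; the function names are made up for illustration.

#include <fcntl.h>
#include <unistd.h>

/* FLAWED: check and use are two separate steps.  Between access() and
 * open(), an attacker can replace "path" with a symlink to a file the
 * user is not allowed to read (e.g., /etc/shadow). */
int serve_file_racy(const char *path)
{
    if (access(path, R_OK) != 0)      /* step 1: check against the real uid */
        return -1;
    return open(path, O_RDONLY);      /* step 2: use; path may have changed */
}

/* Safer: drop to the caller's real uid and let open() itself perform the
 * access check, so check and use become a single step. */
int serve_file_safer(const char *path)
{
    if (seteuid(getuid()) != 0)
        return -1;
    return open(path, O_RDONLY);
}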
Implementation Flaws
Buffer overflow: an application with a fixed-size buffer accepts unlimited-length input, writing data into memory beyond the buffer in languages without bounds checking such as C/C++ (sketched below).
Input validation: the application doesn't check that input has a valid format, for example not checking for "../" sequences in pathnames, allowing attackers to traverse up the directory tree and access any file.
Back door: a programmer writes special code to bypass the access control system, often for debugging or maintenance purposes.
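A minimal C sketch of the buffer overflow flaw (the function names are hypothetical): the first version writes past a fixed-size buffer on long input, the second bounds the read.

#include <stdio.h>
#include <string.h>

/* FLAWED: gets() performs no bounds checking, so input longer than 63
 * bytes overwrites adjacent memory.  (gets() was removed from modern C
 * for exactly this reason.) */
void read_name_unsafe(void)
{
    char name[64];
    gets(name);                               /* unlimited-length input */
    printf("Hello, %s\n", name);
}

/* Better: read at most sizeof(name)-1 bytes and NUL-terminate. */
void read_name_safer(void)
{
    char name[64];
    if (fgets(name, sizeof(name), stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';     /* strip the trailing newline */
        printf("Hello, %s\n", name);
    }
}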
Operational Flaws
Denial of service: the system does not have enough resources, or the ability to monitor its resources, to sustain availability under a large number of requests.
Default accounts: default username/password pairs allow access to anyone who knows the default configuration.
Password cracking: poor passwords can be guessed by software using dictionaries and permutation algorithms.
How can we design securely?
What about using checklists? Learn from our own and others' mistakes, and avoid known errors: buffer overflow, code injection, race conditions, etc.
But there are too many known problems, and what about unknown problems? New attacks are discovered every year. Checklists are useful as one component of assurance, not as a complete answer.
How can we design securely?
Think about security from the beginning: evaluate threats and risks in the requirements. Once we understand our threat model, we can begin designing an appropriate solution.
Apply secure design principles. They are guidelines for security design, not a guarantee of security, and there are tradeoffs between the different principles.
Security Design Principles
Least Privilege
Fail-Safe Defaults
Economy of Mechanism
Complete Mediation
Open Design
Separation of Privilege
Least Common Mechanism
Psychological Acceptability
Meta Principles
Simplicity (Minimization): fewer components and cases to fail; fewer possible inconsistencies; easier to understand.
Restriction (Isolation): minimize access; inhibit communication.

The design principles are rooted in simplicity and restriction. Simplicity applies on many levels: simpler things have fewer components, so less can go wrong; there are fewer interfaces, so there are fewer entities communicating through them that can be inconsistent; and simple mechanisms are easier to check and to understand, with less to check overall. Restriction minimizes the number and types of interactions between an entity and other entities. One example is the "need to know" principle: give an entity only the information it needs to complete its task, and let it release information only when its goals require it. Note that this includes writing (integrity), because by altering other entities, a writer can communicate information.
Least Privilege
A subject should be given only those privileges necessary to complete its task. Function, not identity, controls access: rights are added as needed and discarded after use, keeping the protection domain minimal. The most common violation is running as administrator or root; use runas or sudo instead.

This is an example of restriction. Key concepts: Function: what is the task, and what is the minimal set of rights needed? "Minimal" means that if a right is not present, the task cannot be performed. A good example is a UNIX network server that needs access to a port below 1024, which requires root. Rights added and discarded: if the task requires privileges for only one action, the privileges should be added before the action and removed afterward. Returning to the UNIX network server, if the server need not act as root afterwards (for example, an SMTP server), it should drop root privileges immediately after the port is opened. The minimal protection domain emphasizes the other two points.
Least Privilege Example
Problem: a web server. It serves files under /usr/local/http and logs connections under /usr/local/http/log. HTTP uses port 80 by default, and only root can open ports below 1024.
Solution: the web server runs as the root user.
How does this solution violate the Principle of Least Privilege, and how could we fix it? (One possible fix is sketched below.)
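One common fix, sketched below for a POSIX system: start as root only long enough to bind port 80, then permanently drop to an unprivileged account before serving any requests. The account name "www" is an assumption, and error handling is minimal.

#include <grp.h>
#include <netinet/in.h>
#include <pwd.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Step 1 (as root): open the privileged port. */
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(80);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        exit(1);

    /* Step 2: permanently drop root before doing anything else.
     * Order matters: supplementary groups, then gid, then uid. */
    struct passwd *pw = getpwnam("www");
    if (pw == NULL) exit(1);
    if (setgroups(0, NULL) != 0) exit(1);
    if (setgid(pw->pw_gid) != 0) exit(1);
    if (setuid(pw->pw_uid) != 0) exit(1);

    /* Step 3: listen(), accept(), and serve files as the www user... */
    return 0;
}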
How do we run with least privilege?
List the required resources and special tasks: files, network connections, changing user account, backing up data.
Determine what access you need to each resource under your access control model: do you need create, read, write, append, etc.?
Fail-Safe Defaults
The default action is to deny access. When an action fails, the system must be restored to a state as secure as the one it was in when it started the action.

The first part is well known: add rights explicitly; set everything to deny and add permissions back. This follows the Principle of Least Privilege. A variation appears when writing code that takes untrusted data (such as input) that may contain meta-characters: the rule of thumb is to specify the LEGAL characters and discard all others, rather than specifying the ILLEGAL characters and discarding those (see the sketch below). More on this in Chapter 29.

The second part is often overlooked, but it goes to the meaning of "fail safe": if something fails, the system is still safe. Failure should never change the security state of the system, so if an action fails, the system should be as secure as if the action had never taken place.

Example: a credit card system defaults to a manual process if it cannot phone in to check validity, and the manual process is insecure. Counterpoint: risk management; it may be cheaper to accept the loss than to refuse valid transactions while the line is down.
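A small C sketch of the "specify the LEGAL characters" rule of thumb, using a hypothetical username field that may contain only letters, digits, underscore, and dash; everything not on the allow-list is rejected by default.

#include <string.h>

/* Returns 1 if the input consists solely of explicitly allowed
 * characters, 0 otherwise.  Rejecting by default is the fail-safe
 * choice. */
int is_valid_username(const char *s)
{
    static const char allowed[] =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789_-";
    if (s == NULL || *s == '\0')
        return 0;
    return strspn(s, allowed) == strlen(s);
}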
Fail-Safe Defaults Example
Problem: a retail credit card transaction. The card is looked up in the vendor database to check for stolen cards or suspicious transaction patterns. What happens if the system cannot contact the vendor?
Solution: no authentication, but the transaction is logged.
How does this system violate the Principle of Fail-Safe Defaults?
Fail-Safe Defaults Example
Problem: the Steam DoS attack, Christmas 2015. To keep the site up during the attack, Steam used caching, but the caching was incorrectly configured so that authenticated pages were cached. Some users saw cached pages belonging to other users, revealing private account information.
Solution: test the failover configuration before deployment.
Fail-Safe Defaults Example
Problem: MS Office macro viruses. MS Office files can contain Visual Basic code (macros), and MS Office automatically executes certain macros when opening a file. Users can turn off automatic execution. Don't mix code and data!
Solution: MS Office XP turns automatic execution of macros off by default.
While the solution is a fail-safe default, does it follow least privilege too?
Economy of Mechanism
Keep it as simple as possible (KISS). Use the simplest solution that works: there are fewer cases and components to fail. Reuse known secure solutions, i.e., don't write your own cryptography.

Simplicity applies in every dimension: design, implementation, operation, interaction with other components, even specification. The toolkit philosophy of the UNIX system is an excellent example: each tool is designed and implemented to perform a single task, and the tools are then put together. This allows checking each component and then their interfaces, which is conceptually much less complex than examining the unit as a whole. The key, though, is to define all interfaces completely (for example, environment variables and global variables as well as parameter lists).
Economy of Mechanism Example
Problem: the SMB file-sharing protocol, used since the late 1980s. Newer protocol versions protect data integrity with packet signing. What do you do about computers running older versions of the protocol?
Solution: let the client negotiate which SMB version to use.
How does this solution violate economy of mechanism?
Complete Mediation
Check every access. In practice, access is usually checked only once, on first access. UNIX: the file ACL is checked on open(), but not on subsequent accesses to the file, so if permissions change after the initial access, unauthorized access may be permitted. Another bad example: DNS cache poisoning.

The reason for relaxing this principle is efficiency: if you perform many accesses, the checks slow you down substantially. It is not clear whether that is really true, though.

Exercise (reproduced in the sketch below): have a process open a UNIX file for reading. From the shell, remove the read permission that allows the process to read the file. Then have the process read from the open file; it can still do so, which shows the check is done at open(). To be sure, have the process close the file and try to reopen it for reading; this open will fail.

Note that UNIX systems fail to enforce this principle to any degree for a superuser process, where access permissions are not even checked on open! This is why people create management accounts (more properly, role accounts) like bin or mail: by restricting processes to those accounts, access control checking applies. It is also an application of the principle of least privilege.

Classic DNS cache poisoning: answer queries for your domain with a delegation that supplies your own addresses for google.com's DNS servers. Wait for someone to query your DNS server; their resolver updates its cache with the changed information and uses it for the next TTL period, which can be 24+ hours.
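The exercise above can be reproduced with a few lines of C. The descriptor returned by open() keeps working even after the file's read permission is removed, because the ACL is consulted only at open time. The filename and the 30-second pause are arbitrary; run "chmod 000 testfile" from another shell during the pause.

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    int fd = open("testfile", O_RDONLY);          /* ACL checked here, once */
    if (fd < 0) { perror("open"); return 1; }

    printf("File is open; now remove its read permission from another shell.\n");
    sleep(30);

    ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* still succeeds */
    printf("read() returned %zd after permissions were removed\n", n);
    close(fd);

    if (open("testfile", O_RDONLY) < 0)           /* mediation happens at open() */
        perror("reopen");
    return 0;
}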
Open Design
Security should not depend on the secrecy of the design or implementation. This is popularly misunderstood to mean that source code should be public; it actually means avoiding "security through obscurity." The principle refers to the security policy and mechanism, not to simple user secrets like passwords and cryptographic keys, i.e., it follows Kerckhoffs's Principle.

Note that source code need not be available to meet this principle. It simply says that your security cannot depend on your design being a secret. Secrecy can enhance security, but if the design becomes exposed, the security of the mechanism must not be affected. The problem is that people are very good at finding out what secrets protect you: they may figure them out from the way the system works, by reverse engineering the interface or system, or by more prosaic techniques such as dumpster diving. This principle does not speak to secrets that do not involve design or implementation; for example, you can keep cryptographic keys and passwords secret.

DeCSS: the alleged purpose of CSS was to stop piracy; the actual purposes were region coding and non-skippable commercials. The algorithm is weak and was easily reverse engineered, and the "secret" key must be stored in the firmware of every DVD player.
Open vs. Closed Source
"Is open-source software secure?"
Open: some people might look at the security of your application (if they care), and they may or may not tell you what they find.
Closed: not making the code available does not hide much; you still need diverse, security-aware code reviews.
It is a business decision, not a security one!

There are plenty of companies that need to secure their software, are aware of the "security by obscurity" problem, and decide to make their software open source in order to secure it. When a company open-sources a piece of software, it makes the source code available to the entire world for review. The reasoning is that if software can only be made secure by subjecting it to an open review process, why not open it to the entire world, so that people can point out problems and the company can simply fix them. While the company might have good intentions, it is making a more detailed set of assumptions if it believes that open-sourcing the software will make it more secure. The first assumption is that others will actually look at the source code, and specifically at the sections of code that might lead to security flaws. If the source code is hard to read, not very understandable, or uninteresting, it will probably not be read at all. Even if an open-source developer does look at the code, they may only be interested in the parts whose functionality they want to modify for their own purposes, such as changing the GUI or adapting some feature to a customer's request; security may or may not be on their agenda. Finally, even a developer who is interested in the program's security may not report the vulnerabilities they find to the author; the "developer" may be malicious and looking to attack a deployed version of the software. For all these reasons, simply making a piece of software open source will not automatically increase its security.

On the other hand, keeping a piece of software proprietary (closed source) does not ensure its security either, for all the reasons discussed under security by obscurity. Releasing only the binary code of an application does not hide much from an attacker, who can still exploit security holes by studying the behavior of the running program. Even if a company keeps its code proprietary, the code should be reviewed by security experts for vulnerabilities. What this all means is that if you want to ensure the security of an application, you need to spend time reviewing the code for security vulnerabilities yourself. You cannot simply open-source it in the hope that others will find the flaws for you, and you cannot assume it is secure just because you do not release the source code. One might argue that open-sourcing a piece of software makes the attacker's job a little easier. That is possible, but a determined attacker does not need the source code; this does not contradict what was said about security by obscurity.

Hiding the source code of an application does not make it much harder to attack. At the end of the day, the decision to open-source a piece of software or keep it closed source should be a business decision: which choice is more complementary to the business model under which the software is intended to generate revenue? (There is a good discussion of this topic in Building Secure Software by John Viega and Gary McGraw.)
Open Design Example
Problem: the MPAA wants control over DVDs: region coding, unskippable commercials.
Solution: CSS (Content Scrambling System). The CSS algorithm was kept secret. A DVD player needs a player key to decrypt the disk key on the DVD, which in turn decrypts the movie for playing. The encryption uses 40-bit keys. People without keys can copy DVDs but not play them.
What happened next? The CSS algorithm was reverse engineered, and a weakness in it allows the disk key to be recovered in an attack of complexity 2^25, which takes only a few seconds.
Flaws in the Approach
What assumptions should we make about the adversary? Does the attacker know the algorithms or not? Are the algorithms kept secret in the "binary"?
Attackers can probe for weaknesses: reverse engineer executables; observe behavior under normal vs. aberrant conditions (fault injection); use fuzzing, trying random input strings to find an exploit (a toy example follows); blackmail insiders.

Now that we have added security requirements to the requirements documents of our information systems, let's talk about how to implement mechanisms that enforce those requirements. Many organizations practice "security by obscurity": they attempt to keep things secure by keeping them secret. For example, companies keep many trade secrets and sometimes do not even tell their customers how their products work. Military organizations disseminate information only on a "need to know" basis. In both cases, an organization is trying to keep information secure by hiding it from others. While it is possible to achieve some level of security through obscurity, it may not always make sense to assume that an attacker does not know how the system works. For example, one might assume that a user cannot understand how a program works because it is deployed as an executable binary file (i.e., a .exe file). However, an attacker can easily disassemble, decompile, or reverse engineer the executable. The attacker could also derive information about how the program functions simply by observing its behavior under normal conditions and/or on inputs the attacker selects (can we come up with a simple example?). Beyond these technical approaches, the attacker may be able to blackmail or coerce people who do know how the system works into disclosing details. To be conservative, we may therefore want to assume that the attacker knows exactly how the system works, and avoid practicing security by obscurity if a better option exists. In the following, we will talk about how it is possible to build a secure system whose design can be public knowledge, where security does not depend on hiding design details but instead on certain "keys" being secret. By "keys" we mean relatively short sequences of bits; it is usually much easier to keep a few keys secret than to keep all the information about how a system functions secret. Interesting article about the topic at:
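A toy illustration of the fuzzing idea mentioned above. The program under test, ./target, is hypothetical; the sketch just feeds it random bytes on stdin and reports any run that dies from a signal such as SIGSEGV.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    srand(1234);                              /* fixed seed: reproducible runs */
    for (int trial = 0; trial < 1000; trial++) {
        int fds[2];
        if (pipe(fds) != 0) return 1;
        pid_t pid = fork();
        if (pid == 0) {                       /* child: stdin <- pipe, run target */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]); close(fds[1]);
            execl("./target", "target", (char *)NULL);
            _exit(127);
        }
        close(fds[0]);
        char buf[256];
        for (size_t i = 0; i < sizeof(buf); i++)
            buf[i] = (char)(rand() % 256);    /* random, mostly malformed input */
        write(fds[1], buf, sizeof(buf));
        close(fds[1]);

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("trial %d: target killed by signal %d\n", trial, WTERMSIG(status));
    }
    return 0;
}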
SWS Obscurity
Distributing the Java bytecode of SWS (and not the source code) does not provide security:
Tools like strings can search a binary for passwords, keys, etc.
Bytecode can be decompiled (see Mocha, Jad) to produce source code, including class and public member names.
Machine code can be disassembled into assembly by tools like IDA Pro, and even decompiled into rough C code.
Debuggers and reflection tools can examine a running program.
Code obfuscators offer some protection: they make code harder to read by replacing readable names with meaningless ones, reorganizing code, etc. But reverse engineers can work through any obfuscation given enough time.
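As a small illustration of why secrets compiled into a binary do not stay secret, consider the hypothetical snippet below; the key value is made up. Any string literal ends up verbatim in the executable and can be pulled out with strings, no reverse engineering required.

#include <stdio.h>
#include <string.h>

/* A hard-coded credential lands verbatim in the binary's data section. */
static const char *LICENSE_KEY = "SWS-2024-SECRET-KEY";    /* hypothetical value */

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], LICENSE_KEY) == 0)
        printf("unlocked\n");
    else
        printf("locked\n");
    return 0;
}

/*   $ cc -o check check.c
 *   $ strings check | grep SWS
 *   SWS-2024-SECRET-KEY          */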
Disassembling SWS

public void processRequest(java.net.Socket) throws java.lang.Exception;
  Code:
     0: new 25;            //class BufferedReader
     3: dup
     4: new 26;            //class InputStreamReader
     7: dup
     8: aload_1
     9: invokevirtual 27;
    12: invokespecial 28;
    15: invokespecial 29;
    18: astore_2
    19: new 30;            //class OutputStreamWriter
    22: dup
    23: aload_1
    24: invokevirtual 31;
    27: invokespecial 32;
    30: astore_3
    31: aload_2
    32: invokevirtual 33;
    35: astore 4
    37: aconst_null
    38: astore 5
    40: aconst_null
    41: astore 6
    43: new 34;            //class StringTokenizer
    46: dup
    47: aload 4
    49: ldc 35;            //String
    51: invokespecial 36;
    54: astore 7
    56: aload 7
    58: invokevirtual 37;
    61: astore 5
    63: aload 7
    65: invokevirtual 37;
    68: astore 6
    70: aload 5
    72: ldc 38;            //String GET
    74: invokevirtual 39;
    77: ifeq 90
    80: aload_0
    81: aload_3
    82: aload 6
    84: invokevirtual 40;
    87: goto
    90: aload_3
    91: ldc 41;
    93: invokevirtual 42;
    96: goto 101
    99: astore 8
   101: aload_3
   102: invokevirtual 44;
   105: return
Separation of Privilege
Require multiple conditions to grant access: separation of duty, compartmentalization (encapsulation), defence in depth.

You need to meet more than one condition to gain access. Separation of duty says that the person who signs the checks cannot be the person who prints the checks, because otherwise a single person could steal money; to steal, a thief must now compromise two people, not one. This also provides finer-grained control over a resource than a single condition. The analogy with non-computer security mechanisms is "defense in depth": to get into a castle, you must cross the moat, scale the walls, and drop down over the walls before you can get in; that is three barriers (conditions) that must be overcome (met).

OpenSSH separates the application into a monitor and child processes: the child processes are not privileged and must ask the monitor to perform privileged operations on their behalf, and the user interacts only with the unprivileged children (a simplified sketch follows).
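A heavily simplified sketch of the monitor/child split described above: the unprivileged child handles untrusted input and must ask the privileged monitor, over a pipe, to perform privileged operations. The numeric ids and the single text "request" are stand-ins for a real unprivileged account and a real protocol.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int req[2];                                /* child -> monitor requests */
    if (pipe(req) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: drop privileges, then request privileged work. */
        close(req[0]);
        if (setgid(65534) != 0 || setuid(65534) != 0)   /* "nobody"-style ids (assumed) */
            _exit(1);
        const char *msg = "need-hostkey-signature\n";   /* stand-in request */
        write(req[1], msg, strlen(msg));
        _exit(0);
    }

    /* Monitor: stays privileged and performs only a small, fixed set of
     * operations on the child's behalf after validating each request. */
    close(req[1]);
    char buf[128];
    ssize_t n = read(req[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("monitor: child requested: %s", buf);
        /* ... validate the request, do the privileged work, return the result ... */
    }
    waitpid(pid, NULL, 0);
    return 0;
}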
Separation of Duty
Functions are divided so that one entity does not have control over all parts of a transaction.
Examples: different persons must initiate a purchase and authorize the purchase; two different people may be required to arm and fire a nuclear missile.
Compartmentalization
Problem: a security violation in one process should not affect others.
Solution: virtual memory. Each process gets its own address space.
In what ways is this solution flawed? That is, how can the compartments communicate? How could we improve the compartmentalization of processes?
Defence in Depth
Use diverse defensive strategies: different types of defences (protection, detection, reaction) and different implementations of each defence. If one layer is pierced, the next layer may stop the attack. Avoid "crunchy on the outside, chewy on the inside" network security.
This contradicts Economy of Mechanism, so think hard before adding more than two layers.
Firewall: what about insiders? Bank: cameras, security guards, a vault requiring multiple locks and codes, dye packs in the bills.
Avoid M&M Architectures
An inherently insecure system protected by another system that mediates access to it.
Example: firewalls guarding vulnerable systems inside the perimeter.
Example: the Death Star, with a "strong outer defense" but vulnerable within.
A hard outer shell should not be the sole defense.
Defence in Depth Example
Problem: a bank. How do we secure the money?
Solution: defence in depth. Guards inside the bank. Closed-circuit cameras monitor activity. Tellers do not have access to the vault. The vault itself has multiple defences: time-release locks, wall and lock complexity, and multiple compartments.
Least Common Mechanism
Mechanisms used to access resources should not be shared, because information can flow along shared channels (covert channels). Does this contradict Economy of Mechanism?

Isolation prevents communication, and communication with something (another process or a resource) is necessary for a breach of security; limit the communication and you limit the damage. This works together with the "separation of privilege" OpenSSH example. Covert channels: if two processes share a resource, they can communicate by modulating their access to it. Example: percentage of CPU used. To send a 1 bit, the first process uses 75% of the CPU; to send a 0 bit, it uses 25%. The other process sees how much of the CPU it can get, and from that can tell what the first process is sending. Variations include filling disks, creating files with fixed names, and so forth (a file-based toy example follows). Approaches to implementing this principle: isolate each process via virtual machines or sandboxes (a sandbox is like a VM, but the isolation is not complete).
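A toy storage covert channel of the kind described above, assuming the sender and receiver merely share /tmp. The sender signals one bit per second by creating or removing an agreed-upon file; the receiver samples for its presence. The file name and timing are arbitrary, and the two sides are assumed to start at the same moment.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define CHANNEL_FILE "/tmp/covert_flag"     /* agreed-upon shared name */

/* Sender: file present = 1, file absent = 0, one bit per second. */
void send_bits(const char *bits)
{
    for (const char *p = bits; *p != '\0'; p++) {
        if (*p == '1')
            close(open(CHANNEL_FILE, O_CREAT | O_WRONLY, 0644));
        else
            unlink(CHANNEL_FILE);
        sleep(1);
    }
    unlink(CHANNEL_FILE);
}

/* Receiver: sample the shared resource once per time slot. */
void receive_bits(int count)
{
    struct stat st;
    for (int i = 0; i < count; i++) {
        putchar(stat(CHANNEL_FILE, &st) == 0 ? '1' : '0');
        sleep(1);
    }
    putchar('\n');
}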
Least Common Mechanism
Problem: compromising the web server gives the attacker access to the entire machine.
Solution: run the web server as a non-root user. The attacker still gains "other" access to the filesystem, so also run the web server in a chroot jail (sketched below).
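A sketch of the chroot-jail idea, assuming a pre-populated jail directory such as /usr/local/http and an unprivileged account named "www" (both are assumptions). The process confines itself to the jail subtree and then drops root so it cannot chroot back out.

#include <pwd.h>
#include <unistd.h>

/* Confine the process to `jail` and drop to the unprivileged `user`.
 * Must be called as root; order matters: chdir+chroot first, uid last. */
int enter_jail(const char *jail, const char *user)
{
    struct passwd *pw = getpwnam(user);
    if (pw == NULL) return -1;
    if (chdir(jail) != 0) return -1;
    if (chroot(jail) != 0) return -1;          /* "/" is now the jail directory */
    if (setgid(pw->pw_gid) != 0) return -1;
    if (setuid(pw->pw_uid) != 0) return -1;    /* no longer root: cannot escape */
    return 0;
}

/* Example use:  if (enter_jail("/usr/local/http", "www") != 0) exit(1);  */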
Psychological Acceptability
Security mechanisms should not add to the difficulty of accessing a resource. Usability: ease of installation, configuration, and use. Hide the complexity introduced by security mechanisms. Principle of Least Astonishment: the design should match the user's experience, expectations, and mental models; follow UI conventions.

This principle recognizes the human element. General rules: be clear in error messages. You do not need to be detailed (e.g., whether the user mistyped the password or the login name), but you do need to state the rules for using the mechanism (in the example, that the user must supply both a login name and a password). The principle is usually interpreted as meaning that the mechanism must not impose an onerous burden. Strictly speaking, passwords violate it (because it is not as easy to access a resource by giving a password as without one), but the password is considered a minimal burden.
Psychological Acceptability
Users will not read documentation: make the system secure in its default configuration. Users will not read dialog boxes: do not offer complex choices (example: Mozilla/IE certificate dialogs). Privacy vs. usability (example: one-click shopping).

Use (and compilation, installation, etc.) must be straightforward. It is okay to have scripts or other aids to help here, but, for example, data types in the configuration file should either be obvious or explicitly stated. A "duration" field does not indicate whether the period of time is to be expressed in hours, minutes, seconds, or something else, nor whether fractional values are allowed (a float) or not (an integer). The latter is actually a common bug: if the duration is in minutes, does "0.5" mean 30 seconds (as a float) or 0 seconds (as an integer)? If the latter, an error message should be given ("invalid type"). A sketch of one way to avoid the ambiguity follows.
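A small C sketch of the "duration" pitfall from the notes: require an explicit unit and reject ambiguous values instead of silently truncating them. The accepted format (a number followed by s, m, or h) is an assumption made for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Parse a duration such as "90s", "15m", or "2h" into seconds.
 * Ambiguous input like "0.5" (no unit) is rejected with an error
 * rather than silently interpreted as 0. */
long parse_duration(const char *text)
{
    char *end;
    double value = strtod(text, &end);
    if (end == text) {
        fprintf(stderr, "invalid duration: %s\n", text);
        return -1;
    }
    switch (*end) {
    case 's': return (long)(value * 1);
    case 'm': return (long)(value * 60);
    case 'h': return (long)(value * 3600);
    default:
        fprintf(stderr, "duration needs a unit (s/m/h): %s\n", text);
        return -1;
    }
}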
Key Points
Categories of security flaws: architecture/design, implementation, operational.
Secure design principles: Least Privilege, Compartmentalization, Psychological Acceptability, …
To sum up, these principles require simplicity (and, when that is not possible, minimal complexity); restrictiveness; understanding the goals of the proposed security; and understanding the environment in which the mechanism will be developed and deployed.
References
Matt Bishop, Introduction to Computer Security, Addison-Wesley, 2005.
Mark Graff and Kenneth van Wyk, Secure Coding: Principles & Practices, O'Reilly, 2003.
Michael Howard and David LeBlanc, Writing Secure Code, 2nd edition, Microsoft Press, 2003.
John Viega and Gary McGraw, Building Secure Software, Addison-Wesley, 2002.
David Wheeler, Secure Programming for UNIX and Linux HOWTO.