1 CMSC 426/626: Secure Coding. Krishna M. Sivalingam. Sources: Secure Coding: Principles and Practices, Graff and van Wyk, O'Reilly, 2003; www.cert.org/secure-coding

2 Where can errors occur?
- During the entire software lifecycle:
  - Security Architecture/Design stage
    - Man-in-the-middle attack
    - Race condition attack
    - Replay attack
  - Implementation stage
    - Buffer overflow attack
    - Parsing error attack
    - Back door attacks (aka trapdoors)
  - Code Maintenance stage

3 Flaw Classifications
- Landwehr's Scheme
- Bishop's Scheme
- Aslam's Scheme
- Du/Mathur's classification
- Flaws are Intentional and Inadvertent
- Inadvertent Flaw Classifications:
  - Validation Error
  - Domain Error
  - Serialization and Aliasing
  - Inadequate Authentication and Identification
  - Boundary Condition Violation
  - Other exploitable logic error

4 Study of Buffer Overflow Attacks
- Cowan, Crispin, Perry Wagle, Calton Pu, Steve Beattie, and Jonathan Walpole. "Buffer Overflows: Attacks and Defenses for the Vulnerability of the Decade." Proceedings of the DARPA Information Survivability Conference and Exposition (DISCEX), 2000.
- Mudge's tutorial: http://insecure.org/stf/mudge_buffer_overflow_tutorial.html

5 Buffer Overflows
- Inject attack code by overflowing a buffer
  - Usually involves injecting machine code written in the target machine's CPU opcodes
- The injected code executes with all the privileges of the vulnerable program
  - Thus, if the program is running as root, the attacker can run arbitrary code as root
  - Typically, the attack invokes execve("/bin/sh") or similar to get a root shell
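
To make this concrete, here is a minimal sketch of the kind of vulnerable C code such attacks target; the function name and buffer size are made up for illustration:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: attacker-controlled input is copied into a
     * fixed-size stack buffer with no length check.  Input longer than
     * 64 bytes overruns buf and can overwrite the saved return address. */
    void handle_request(const char *input)
    {
        char buf[64];
        strcpy(buf, input);       /* no bounds check: the classic overflow */
        printf("got: %s\n", buf);
    }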

6 Program Segments
- An executing program consists of:
  - Code
  - Initialized data (global variables)
  - Stack
  - Heap (for dynamic allocation)
- Remember that local variables, the return address, etc. are stored on the stack when a function is invoked
- When a local buffer is overrun, it can overwrite the return address, etc.
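
As a rough illustration (not from the slides), a tiny program can print one address from each region; exact values vary with the platform and address-space randomization:

    #include <stdio.h>
    #include <stdlib.h>

    int initialized_global = 42;                  /* initialized data segment */

    int main(void)
    {
        int local = 0;                            /* stack */
        int *dynamic = malloc(sizeof *dynamic);   /* heap */

        /* Casting a function pointer to void * is a common but
         * non-standard idiom; it works on typical platforms. */
        printf("code:  %p\n", (void *)main);
        printf("data:  %p\n", (void *)&initialized_global);
        printf("heap:  %p\n", (void *)dynamic);
        printf("stack: %p\n", (void *)&local);

        free(dynamic);
        return 0;
    }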

7 Where to Inject Code
- On the stack (automatic variables)
- On the heap (malloc'd or calloc'd buffers)
- In static data areas
- The executable code need not sit in the overflowing buffer itself – the overflow can redirect control to code injected elsewhere
- One can also reuse existing code
  - For example, if a call like exec(arg) already exists in the program, the attack only has to make arg point to "/bin/sh"

8 Jump to Attacker's Code
- Activation record
  - Overflow into the return address saved on the stack and make it point at the injected code
- Function pointers
  - Overflow into a function pointer such as "void (*foo)(void)" and make it point at the injected code (see the sketch below)
- setjmp and longjmp, which are used for checkpointing and recovery
  - Corrupt the saved state given to longjmp so that it transfers control to the attacker's code
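
A minimal sketch of the function-pointer case; the struct layout and names are illustrative, and whether the overflow actually reaches the pointer depends on the compiler's layout:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: a fixed buffer adjacent to a function pointer.
     * An unbounded copy into buf can overwrite fp, so the later call
     * through fp transfers control to an attacker-chosen address
     * instead of log_message(). */
    struct handler {
        char buf[32];
        void (*fp)(const char *);
    };

    static void log_message(const char *s) { printf("log: %s\n", s); }

    void process(struct handler *h, const char *input)
    {
        h->fp = log_message;
        strcpy(h->buf, input);   /* overflow can clobber h->fp */
        h->fp(h->buf);           /* jump through the (possibly corrupted) pointer */
    }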

9 Buffer Overflow Details
- Look at Mudge's sample buffer overflow attack (tutorial linked on slide 4)

10 Buffer Overflow Defenses
- Writing correct code
  - Vulnerable programs continue to emerge on a regular basis
  - C has many error-prone idioms and a culture that favors performance over correctness (a small example follows)
- Static analysis tools
  - Fortify – looks for vulnerable constructs
  - Tend to produce too many false positives
- From Crispin Cowan's SANS 2000 talk (available on the web)
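
One example of such an error-prone idiom and a safer replacement (the names and sizes are illustrative):

    #include <stdio.h>

    void copy_name(char *dst, size_t dstsize, const char *src)
    {
        /* Error-prone idiom: strcpy(dst, src) copies until the NUL
         * terminator, however long src is. */

        /* Safer: bound the copy by the destination size and guarantee
         * NUL termination. */
        snprintf(dst, dstsize, "%s", src);
    }

    int main(void)
    {
        char name[16];
        copy_name(name, sizeof name, "a string longer than the destination buffer");
        printf("%s\n", name);    /* safely truncated */
        return 0;
    }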

11 Buffer Overflow Defenses
- Non-executable buffers
  - Non-executable data segments
    - Problematic because some optimizing compilers emit code into program data segments
  - Non-executable stack segments
    - Highly effective against code injection on the stack, but not against code injected on the heap or in static variables (a small demonstration follows)
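
A rough demonstration of what a non-executable stack prevents (illustrative only; the object-to-function-pointer cast is not strictly portable C, and the bytes are not real shellcode):

    #include <string.h>

    int main(void)
    {
        unsigned char buf[16];

        /* Fill a stack buffer with arbitrary bytes (0xC3 is the x86
         * 'ret' opcode) and try to execute them by calling through a
         * function pointer.  With a non-executable stack the call
         * faults; with an executable stack the bytes would run. */
        memset(buf, 0xC3, sizeof buf);
        void (*fn)(void) = (void (*)(void))(void *)buf;
        fn();
        return 0;
    }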

12 Buffer Overflow Defenses
- Array bounds checking
  - Can run 12x-30x slower
  - In some checkers, a[3] is checked but *(a+3) is not
- Type-safe languages: Java or ML
  - But there are millions of lines of C code in operating systems and security applications
  - Attackers can instead target the Java Virtual Machine, which is itself a C program
- StackGuard: adds a "canary" value, either a 32-bit random number or a known string terminator (CR, LF, '\0', etc.)
  - The compiler adds the canary and checks its value at runtime before the function returns (a hand-rolled sketch follows)
  - The entire Red Hat system has been recompiled with StackGuard and shown to be less vulnerable
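
A hand-rolled sketch of the canary idea; StackGuard (and gcc's -fstack-protector) inserts the equivalent automatically, and the compiler, not the programmer, controls the actual stack layout:

    #include <stdio.h>
    #include <stdlib.h>

    #define CANARY 0xDEADBEEFu   /* StackGuard uses a random value or a terminator */

    void read_name(void)
    {
        volatile unsigned int canary = CANARY;   /* placed near the buffer */
        char buf[32];

        if (fgets(buf, (int)sizeof buf, stdin) == NULL)
            return;

        /* A linear overflow that reaches the return address must first
         * trample the canary, so checking it before returning detects
         * the corruption. */
        if (canary != CANARY) {
            fprintf(stderr, "stack smashing detected\n");
            abort();
        }
        printf("hello, %s", buf);
    }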

13 Race Conditions
- http://seclab.cs.ucdavis.edu/projects/vulnerabilities/scriv/ucd-ecs-95-08.pdf
- http://citeseer.ist.psu.edu/bishop96checking.html
- http://www.mirrors.wiretapped.net/security/development/secure-programming/bishop-dilger-1996-checking-for-race-conditions-in-file-accesses.pdf

14 Race condition: What is it?
- Consider a setuid program owned by root
- UserA is presently executing the program, so it runs with root privileges
- Assume the program wants to write to a file. It must check whether UserA (the real user) has the right privileges on that file, typically written as follows:

    if (access(filename, W_OK) == 0) {
        if ((fd = open(filename, O_WRONLY)) < 0) {   /* open() returns -1 on error, not NULL */
            perror(filename);
            return (0);
        }
        /* now write to the file */
    }

15 Race condition: What is it?
- In the time between verifying access and opening the file, the object the filename refers to can change, so the object actually opened was never access-checked
- Called a TOCTTOU (Time-Of-Check-To-Time-Of-Use) binding flaw
- For example, if access is checked on /tmp/X, and before the write executes:
  - /tmp/X is deleted, AND
  - /tmp/X is recreated as a hard link to /etc/passwd,
  then the process will write to /etc/passwd!
- This flaw was present in the xterm program's session-logging code

16 Source: Bishop and Dilger’s 1996 paper in Computing Systems

17 Race conditions, contd.
- A similar attack was possible on the binmail program
  - binmail appends mail to an existing mail spool file, e.g. /usr/spool/mail/jkl
  - binmail verifies that the file exists (and is not a symbolic link)
  - Before binmail writes to the file, jkl is deleted AND recreated as a hard link to /etc/passwd
  - Now binmail appends data to /etc/passwd
  - The attacker can thereby create a new account with no password and root privileges
- Note that binding flaws do not arise when file descriptors are used (see the sketch below)!
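
A minimal sketch of the descriptor-based pattern; the helper name append_safely(), the O_NOFOLLOW flag, and the fstat() check are additions for illustration, not from the slides:

    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    /* Open the file once, then do every later check and write through
     * the descriptor.  fstat() inspects the object the descriptor is
     * already bound to, so deleting or relinking the pathname afterwards
     * cannot redirect the write. */
    int append_safely(const char *filename, const char *msg)
    {
        int fd = open(filename, O_WRONLY | O_APPEND | O_NOFOLLOW);
        if (fd < 0) {
            perror(filename);
            return -1;
        }

        struct stat sb;
        if (fstat(fd, &sb) != 0 || !S_ISREG(sb.st_mode)) {
            close(fd);           /* not a plain file: refuse to write */
            return -1;
        }

        ssize_t n = write(fd, msg, strlen(msg));
        close(fd);
        return (n < 0) ? -1 : 0;
    }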

18 Good Practices in Implementation
- Inform yourself
  - Follow vulnerability discussions and alerts (e.g. www.cert.org)
  - Read books and papers on secure coding practices, analyses of software flaws, etc.
  - Explore open source software
    - Examples of how to and how not to write code

19 Good Practices in Implementation
- Handle data with caution (a validation sketch follows this list)
  - Cleanse data: examine input for malicious content (altered character sets, disallowed characters)
  - Perform bounds checking
    - Check array indices
  - Check configuration files
    - They can be modified by an attacker
  - Check command-line parameters
  - Don't trust web URLs or the parameters within them
  - Be careful with web content (variables hidden in HTML form fields)
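
A small example of whitelist-style cleansing plus bounds checking; the allowed character set and length limit are arbitrary choices for illustration:

    #include <ctype.h>
    #include <stddef.h>

    /* Accept a username only if every character comes from an explicitly
     * allowed set and the length stays within bounds.  Rejecting anything
     * outside the whitelist is safer than enumerating "bad" characters. */
    int valid_username(const char *s)
    {
        size_t len = 0;

        for (; s[len] != '\0'; len++) {
            unsigned char c = (unsigned char)s[len];
            if (!isalnum(c) && c != '_' && c != '-')
                return 0;        /* disallowed character */
            if (len >= 32)
                return 0;        /* too long */
        }
        return len > 0;          /* non-empty and clean */
    }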

20 Good Practices in Implementation
- Check web cookies
- Check environment variables
- Set valid initial values for data
- Understand filename references and use them correctly
  - Check for indirect file references (e.g. shortcuts, symbolic links) – a small check is sketched below
  - Be careful about how program and data files are located (as in searching via the PATH variable)
- Reuse "good" code whenever practical
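
One way to spot an indirect reference is to examine the pathname with lstat(); note that a check by name is itself racy (see the TOCTTOU slides), so it is best combined with O_NOFOLLOW and descriptor-based access:

    #include <sys/stat.h>

    /* Returns 1 if path is currently a symbolic link, 0 if it is not,
     * and -1 if the path cannot be examined at all. */
    int is_symlink(const char *path)
    {
        struct stat sb;

        if (lstat(path, &sb) != 0)
            return -1;
        return S_ISLNK(sb.st_mode) ? 1 : 0;
    }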

21 Good Practices in Implementation
- Sound review processes
  - Perform peer review of code
  - Perform independent validation and verification
- Use automated security tools
  - Static code checkers
    - RATS – Rough Auditing Tool for Security
    - SPLINT – source code scanner: http://splint.org/
    - Uno: http://spinroot.com/uno/
  - Runtime checkers
    - Libsafe: http://directory.fsf.org/libsafe.html
    - PurifyPlus: http://www-306.ibm.com/software/awdtools/purifyplus/
    - Immunix tools

22 Good Practices in Implementation
- Profiling tools
  - Papillon for Solaris: http://www.roqe.org/papillon/
  - Gprof from GNU
  - Janus – policy enforcement and profiling: http://www.cs.berkeley.edu/~daw/janus/
- Black-box testing and fault-injection tools
  - AppScan: http://www.watchfire.com/securityzone/default.aspx
  - Whisker: wiretrip.net
  - ISS Database Scanner: http://www.iss.net/
- Perform network-based vulnerability scans
  - Nmap: http://insecure.org/nmap/
  - Nessus: http://www.nessus.org/
  - ISS Internet Scanner

23 Good Practices in Implementation
- Make generous use of checklists
  - Security checklists must be created and checked against, for example:
    - Application requires a password for access
    - All user ID logins are unique
    - Role-based access control is used
    - Encryption is used
- Code should be maintainable
  - Practice standards of in-line documentation
  - Remove obsolete code
  - Test all code changes

24 Implementation, Don'ts
- Don't write code that uses relative filenames
  - Use fully qualified filenames
- Don't refer to a file twice in the same program by its name
  - Always use the file descriptor after the initial open
  - This prevents race condition attacks that exploit the time between the access check and the file's use
- Don't invoke untrusted programs from within trusted ones
- Avoid using setuid or similar mechanisms whenever possible
- Don't assume that users are not malicious

25 Implementation, Don'ts
- Don't dump core – code must fail gracefully (see the sketch below)
  - A core dump can be used to extract valuable data that was in memory during execution
- Don't assume that a system call (or any function call) always succeeds – always check return values and error variables
- Remember that computer-based random number generators are pseudo-random and can repeat
- Don't invoke a shell or command line from within a program
- Don't use world-writable storage, even for temporary files
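
A sketch combining the first two points: disable core dumps so a crash cannot leave sensitive memory in a core file, and check the call's return value rather than assuming it succeeded (the error handling shown is one reasonable choice, not the only one):

    #include <sys/resource.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        struct rlimit rl = { 0, 0 };   /* soft and hard core-file limits: 0 bytes */

        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit(RLIMIT_CORE)");   /* don't assume the call worked */
            exit(EXIT_FAILURE);                 /* fail loudly, not silently */
        }

        /* ... rest of the program ... */
        return 0;
    }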

26 Implementation, Don'ts
- Don't trust user-writable storage not to be tampered with
- Don't keep sensitive data in a database without password protection
- Don't hard-code usernames/passwords into an application
- Don't echo passwords! (a sketch of echo-free password entry follows)
- Don't rely on host-level file protection mechanisms alone
- Don't make access decisions based on environment variables or command-line arguments
- Don't issue passwords via email
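
A sketch of echo-free password entry using termios; the function name is made up, and getpass() does something similar but is marked obsolete on many systems:

    #include <termios.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    /* Read one line from standard input with terminal echo disabled,
     * then restore the original terminal settings. */
    int read_password(char *buf, size_t size)
    {
        struct termios saved, noecho;

        if (tcgetattr(STDIN_FILENO, &saved) != 0)
            return -1;
        noecho = saved;
        noecho.c_lflag &= ~(tcflag_t)ECHO;             /* turn echo off */
        if (tcsetattr(STDIN_FILENO, TCSAFLUSH, &noecho) != 0)
            return -1;

        int ok = (fgets(buf, (int)size, stdin) != NULL);
        if (ok)
            buf[strcspn(buf, "\n")] = '\0';            /* strip the newline */

        tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved);    /* restore echo */
        putchar('\n');                                 /* Enter wasn't echoed */
        return ok ? 0 : -1;
    }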

27 To be Continued

