
1 An Overview of the DHS/NIST SAMATE Project
SSE Seminar, April 10, 2006
Michael Kass
Information Technology Laboratory, National Institute of Standards and Technology
http://samate.nist.gov
michael.kass@nist.gov

2 Outline
Background on the SAMATE project
Overview of work done to date
Opportunities for collaboration with NKU

3 The Questions
SwA tools are increasingly being used to provide an argument for an application's software assurance through the entire SDLC
Do software assurance tools work as they should? Do they really find vulnerabilities and catch bugs? How much assurance does running the tool provide?
Software assurance tools should be:
–Tested: accurate and reliable
–Peer reviewed
–Generally accepted

4 DHS Tasks NIST to:
Assess current SwA tools and methods in order to identify deficiencies which can lead to software product failures and vulnerabilities
Identify gaps in SwA tools and methods, and suggest areas of further research
Develop metrics for the effectiveness of SwA tools

5 NIST Software Assurance Metrics and Tool Evaluation (SAMATE) Project Goals
Define a taxonomy of software assurance tools and their functions
Define a “common/agreed-upon” classification scheme for software security flaws/weaknesses
Develop functional specifications for SwA tools
Create an open reference dataset of test material for SwA tools
Define common metrics for measuring the assurance of applications and the effectiveness of SwA tools
Identify gaps in the capabilities of today’s tools and make recommendations to DHS for funding research in these areas

6 SAMATE Products (diagram)
–SAMATE workshop
–Tool focus group
–Tool function taxonomy
–Tool functional specification
–Reference dataset of tests
–Software flaw taxonomy
–Code and tool metrics

7 Products: SwA Tool Taxonomy (high-level view)
“External” Tools:
–Network Scanners
–Web Application Scanners
–Web Services Scanners
–Dynamic Analysis/Fault Injection Tools

8 Products: SwA Tool Taxonomy (cont.)
“Internal” Tools:
–Compilers
–Software Requirements Verification
–Software Design/Model Verification
–Source Code Scanners
–Byte Code Scanners
–Binary Code Scanners

9 Products: Common Software Flaw Taxonomy
There are currently multiple taxonomies to choose from/integrate (CWE, CLASP, Fortify Software, others)
We need to integrate them into one common, agreed-upon taxonomy
A flaw taxonomy must cover the entire SDLC (flaws introduced during requirements, design, implementation, deployment)
A taxonomy should also contain axes/views such as “remediation” or “point of introduction in SDLC”
Volunteers helping with the flaw taxonomy include: Cigital, MITRE, Ounce Labs, Klocwork Inc., Secure Software Inc., Fortify Software, OWASP

10 Products: SwA Tool Specification
Need to define core functions of each class of tool
Need to define a “base” set of functions that constitutes a minimally acceptable level of capability
Specification must be clear, unambiguous and testable

11 Products: Reference Dataset for Tool Evaluation
A reference dataset for static analysis must be “open”: users must be able to access, critique and contribute
Collaboration and contribution:
–Need a community front end (interface for contributors), where peer review decides if a submitted piece of code is a vulnerability
The reference dataset must be based upon a common enumeration of software weaknesses as defined by Mitre’s CWE

12 Products: Common Metrics for Software
No “standard” code metrics are being used among source code scanners
It is hard to get agreement on a “global” set of code metrics, because risk varies depending upon the local requirements
A code metric must be multi-dimensional; no single scalar metric can measure software assurance

13 Products: Common Metrics for SwA Tools
How “effective” is a tool?
–How many “false positives” does it produce?
–How many real flaws does it miss?
–Should the metric be some combination of the above?
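One way to combine these counts into a single score (a hedged sketch only; the slides do not prescribe this metric, and the counts below are hypothetical) is a precision/recall-style calculation over a tool run's true positives, false positives and missed flaws:

#include <stdio.h>

/* Illustrative sketch only: combines a scanner's true positives, false
 * positives and missed flaws (false negatives) into precision, recall,
 * and an F-measure -- one possible "combination of the above". */
int main(void)
{
    double true_positives  = 37.0;  /* hypothetical counts from a tool run */
    double false_positives = 12.0;
    double missed_flaws    = 8.0;   /* false negatives */

    double precision = true_positives / (true_positives + false_positives);
    double recall    = true_positives / (true_positives + missed_flaws);
    double f_measure = 2.0 * precision * recall / (precision + recall);

    printf("precision = %.2f, recall = %.2f, F-measure = %.2f\n",
           precision, recall, f_measure);
    return 0;
}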

14 Initial SAMATE Work
SAMATE (Software Assurance Metrics and Tool Evaluation) project began work in 2005
–Initial kickoff meeting August 2005
Survey the state of the art in SwA tools
Classify SwA tools
Choose a class of SwA tool to develop a functional specification
Enlist volunteers to help:
–Develop SwA tool functional specifications
–Contribute test cases for SwA tools
–Help define effectiveness metrics for SwA tools
Over 50 members on the SAMATE email list

15 Initial SAMATE Work (cont.)
Follow-on SAMATE meeting in Long Beach, November 2005
Paper presentations on the state of the art in SwA tools
–code and tool metrics, benchmarks
“Target practice” against contributed test cases
–Test case contributions from Fortify, Klocwork, MIT, Secure Software Inc.
–Usability/validity of test cases (coverage, complexity, variation) was discussed
–Required features of an online repository of SwA artifacts were discussed
Discussion of what might constitute a “baseline” benchmark of test cases for source code analysis tools
–set the bar “low” to start with
–a mix of discrete and real-world test cases is needed

16 Latest Work
Currently developing a “baseline” functional specification for source code analysis tools
–Defining minimal functional requirements
–Defining requirements for “optional” tool features
–Defining a dictionary of terms: what is a “weakness”, “false positive”, “control flow”, “inter-procedural analysis”, etc.
–Linking functional tool requirements (finding weaknesses) to Mitre’s Common Weakness Enumeration (CWE)
–Defining minimal code complexities that a tool should handle
Continuing work on an online SAMATE Reference Dataset (populate with test cases, and add usability features)

17 Source Code Scanner Requirements: The Tool Shall:
SCA-RM-1: Identify any code weakness that is in the subset of the Common Weakness Enumeration list that applies to the coding language being analyzed (listed in Appendix A)
SCA-RM-2: Generate a text report identifying all weaknesses that it finds in a source code application
SCA-RM-3: Identify a weakness by its proper CWE identifier
SCA-RM-4: Specify the location of a weakness by providing the directory path, file name and line number
SCA-RM-9: Be capable of detecting weaknesses that may exist within complex coding constructs (listed in Appendix B)
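As a rough illustration of SCA-RM-2 through SCA-RM-4 (the file name, directory path, report format and CWE label below are hypothetical, not taken from the specification), a conforming scanner analyzing this small C file would be expected to report the weakness with its CWE identifier plus the path, file name and line number:

/* overflow.c -- hypothetical input to a source code scanner */
#include <string.h>

void copy_name(const char *name)
{
    char buf[8];
    strcpy(buf, name);   /* weakness: unbounded copy into a fixed-size stack buffer */
}

int main(void)
{
    copy_name("a name longer than eight bytes");   /* triggers the overflow */
    return 0;
}

/* Expected report entry (illustrative format only):
 *   /home/user/project/overflow.c:7: Stack overflow (CWE: OVER - Unbounded Transfer)
 */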

18 CWE Subset of Weaknesses for Source Code Analysis Tools (for test coverage)
Location: Environment, Configuration, Code
Code > Source Code:
–Data Handling
–API Abuse
–Security Features
–Time and State
–Error Handling
–Code Quality
–Encapsulation
Code > Byte Code/Object Code
Motivation/Intent
Time of Introduction

19 Appendix B: Coding Constructs (for test complexity and variation)
Initially based on MIT’s 22 C code constructs (Kratkiewicz, Lippmann, MIT Lincoln Lab)
–buffer (address, index, length/limit) complexity
–obfuscation via container
–local and secondary control flow
–environmental dependencies
–asynchrony
–loop structure and complexity
–memory (access type, location)
–aliasing (pointer or variable)
–tainted data (via input, file, socket, environment)
–other (to be added) …
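A minimal self-written sketch (not one of the MIT or SAMATE test cases) of how such constructs complicate analysis: the overflow below is reachable only through a pointer alias and secondary control flow, so a scanner must model both to find it.

#include <string.h>

/* Illustrative only: a buffer overflow hidden behind an alias and
 * secondary control flow, in the spirit of the constructs listed above. */
int main(void)
{
    char buf[10];
    char *alias = buf;            /* aliasing via pointer */
    int  enabled = 1;

    if (enabled) {                /* secondary control flow guarding the write */
        memset(alias, 'A', 11);   /* BAD: writes one byte past the end of buf */
    }
    return 0;
}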

20 The coverage/complexity/variation “cube” of the SAMATE Reference Dataset (diagram; axes: CWE Coverage, Code Complexity, Variation)

21 Test Case Coverage of CWE
CWE is still “in progress”, but SAMATE is already aligning its specification and reference dataset terminology with it
Coverage is based upon initial contributed tests by Klocwork (40), Fortify Software (80), MIT Lincoln Lab (1000) and Secure Software Inc. (20)
NIST is supplementing this with other test cases to “fill in” coverage of the CWE
A test suite for a “baseline benchmark” of source code analysis tools is the goal in populating the SRD (SAMATE Reference Dataset)

22 CWE Test Coverage
Location > Environment / Configuration / Code > Source Code
Data Handling
–Input Validation
–(PATH) Pathname Traversal and Equivalence Errors
–Injection > Command Injection > OS Command Injection (1 Fortify)
–Technology-Specific Input Validation Problems
–Output Validation
Range Errors
–Buffer Errors
–OVER - Unbounded Transfer ('classic overflow')
–Stack overflow (43 Fortify, 1164 MIT, 1 Secure Software)
–Heap overflow (10 Fortify, 4 MIT, 1 Secure Software)
–Write-what-where condition (1 Secure Software)
–UNDER - Boundary beginning violation (1 Secure Software)
–READ - Out-of-bounds Read
–OREAD - Buffer over-read (1 MIT)
–Wrap-around error
–Unchecked array indexing
–LENCALC - Other length calculation error
–Miscalculated null termination (37 Fortify, 2 Secure Software)
String Errors
–FORMAT - Format string vulnerability (7 Fortify, 1 Secure Software)
–Improper string length checking (1 Secure Software)
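For illustration only (not one of the contributed test cases), a minimal C example of one weakness class listed above, FORMAT - Format string vulnerability:

#include <stdio.h>

/* Illustrative only: passing untrusted input directly as a format string. */
int main(int argc, char *argv[])
{
    if (argc > 1) {
        printf(argv[1]);      /* BAD: user-controlled format string */
    }
    printf("\n");
    return 0;
}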

23 CWE Test Coverage (cont.)
Type Errors / Representation Errors
(NUM) Numeric Errors
–OVERFLOW - Integer overflow (7 Fortify)
–UNDERFLOW - Integer underflow
–Integer coercion error (1 Secure Software)
(INFO) Information Management Errors
–LEAK - Information Leak (information disclosure)
–INT - Intended information leak (2 Fortify)
API Abuse
–Often Misused: String Management (36 Fortify)
Security Features
–Password Management > Plaintext Storage (1 Secure Software)
–(CRYPTO) Cryptographic errors > KEYMGT - Key Management Errors (1 Secure Software)
(NUM) Numeric Errors
–OVERFLOW - Integer overflow (wrap or wraparound)
–UNDERFLOW - Integer underflow (wrap or wraparound)
–Integer coercion error (1 Secure Software)
–OBO - Off-by-one Error
–Sign extension error (1 Secure Software)
–Signed to unsigned conversion error
–Unsigned to signed conversion error (1 Secure Software)
–TRUNC - Numeric truncation error (1 Secure Software)
–BYTEORD - Numeric Byte Ordering Error
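Again for illustration only (not a contributed test case), a minimal C sketch of the signed-to-unsigned conversion / integer coercion class listed above:

#include <string.h>

/* Illustrative only: a negative signed length passes the bounds check,
 * then becomes a huge unsigned value when converted to size_t for memcpy. */
void copy_data(char *dst, const char *src, int len)
{
    if (len <= 64) {                 /* signed check passes for negative len */
        memcpy(dst, src, len);       /* BAD: len coerced to a huge size_t */
    }
}

int main(void)
{
    char dst[64];
    char src[64] = "payload";
    copy_data(dst, src, -1);         /* -1 wraps to SIZE_MAX after conversion */
    return 0;
}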

24 CWE Test Coverage (cont.)
Time and State
–(RACE) Race Conditions
–SIGNAL - Signal handler race condition (1 Secure Software)
–Race condition in switch (1 Secure Software)
–TOCTOU - Time-of-check Time-of-use race condition (2 Fortify)
Code Quality
–(RES) Resource Management Errors
–MEMLEAK - Memory leak (6 Fortify, 8 Klocwork)
–Double Free (2 Fortify, 1 Secure Software)
–Use After Free (10 Klocwork, 1 Secure Software)
–(INIT) Initialization and Cleanup Errors
–Uninitialized variable (8 Klocwork)
Pointer Issues
–Illegal Pointer Value (15 Klocwork)
Byte/Object Code
Motivation/Intent
Time of Introduction
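One more illustrative sketch (not a contributed test case) covering two of the Code Quality weaknesses above, use after free and double free:

#include <stdlib.h>
#include <string.h>

/* Illustrative only: a use-after-free followed by a double free. */
int main(void)
{
    char *p = malloc(16);
    if (p == NULL)
        return 1;

    strcpy(p, "data");
    free(p);

    p[0] = 'X';   /* BAD: use after free */
    free(p);      /* BAD: double free */
    return 0;
}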

25 A Sample Test Case (derived from MIT contribution)
CWE = Code.Source Code.Range Errors.Buffer Errors.Over.Stack Overflow
index complexity = constant, secondary control flow = if, loop structure = non-standard for, scope = inter-procedural, local control flow = function pointer

void function1(char *buf)
{
    buf[10] = 'A';    /* BAD: writes one element past the end of a 10-byte buffer */
}

int main(int argc, char *argv[])
{
    void (*fptr)(char *);
    int test_value;
    int inc_value;
    int loop_counter;
    char buf[10];

    test_value = 10;
    inc_value = 10 - (10 - 1);
    for (loop_counter = 0; ; loop_counter += inc_value) {
        if (loop_counter > test_value)
            break;
        fptr = function1;     /* overflow is reached only through a function pointer */
        fptr(buf);
    }
    return 0;
}

26 Opportunities for Collaboration between NKU and SAMATE
Static Analysis Summit, June 29, 2006 at NIST: http://samate.nist.gov/index.php/SAS
–What is possible with today's techniques?
–What is NOT possible today?
–Where are the gaps that further research might fill?
–What is the minimum performance bar for a source code analyzer?
–Vetting of the draft SAMATE Source Code Analysis Specification
Contributions to and use of the SRD: http://samate.nist.gov/SRD
–Test cases are needed to “fill in the coverage cube”
–Studies/papers done using the SRD content

27 Contact
Paul Black – SAMATE project leader at:
–paul.black@nist.gov

28 References
K. Kratkiewicz and R. Lippmann, “A Taxonomy of Buffer Overflows for Evaluating Static and Dynamic Software Testing Tools”, ASE 2005

