
1 Improving Static Analysis Results Accuracy Chris Wysopal CTO & Co-founder, Veracode SATE Summit October 1, 2010

2 Abstract
 A major challenge for static analysis is finding all weaknesses of a specific weakness category across a wide range of input software. It is trivial to build a static analyzer with a high false positive rate and a low true positive rate, and very difficult to build one with a very low false positive rate and a very high true positive rate. Building an accurate analyzer requires a broad, real-world data set to test against. I will discuss how we use the SATE test data, in addition to other test data, to improve the accuracy of our analyzer.

3 Veracode History of Participation
 Three organizations participated in all three years of SATE: Checkmarx, Grammatech, and Veracode. Perfect Veracode attendance!
 Test programs by year:
 – 2008: lighttpd, nagios, naim (C); dspace, mvnforum, opennms (Java)
 – 2009: irssi, pvm (C); dmdirc, roller (Java)
 – 2010: dovecot, chrome, wireshark (C/C++); pebble, tomcat (Java)
 Test programs analyzed per tool (of 15 total): Armorize 4, Aspect 3, Checkmarx 8, Coverity 5, cppcheck 3, DevInspect (HP) 1, FindBugs 3, Flawfinder 3, Fortify 6, Grammatech 6, Klocwork 2, LDRA 3, marfcat 5, Redlizards 3, SofCheck 6, Sparrow 2, Veracode 15 — 78 tool/program pairs in all.

4 CVE Matching Exercise
 Only done for Chrome, Wireshark, and Tomcat.
 From sate2010_cve/README.txt: “We did not find any matching tool warnings for CVEs in Wireshark and Chrome.”
 Five CVE-matched warnings were found for Tomcat. Veracode found one; Armorize’s CodeSecure found the other four.
 We submitted two CVEs for Tomcat (CVE-2008-0128 and CVE-2007-6286 in 5.5.13, and CVE-2008-0128 again in 5.5.29), so we should have been credited with three matches.
 We also reported CVE-2010-2304 in Chrome 5.0.375.70. This was not credited either.

5 What level of noise is acceptable?

Tool / scan            CVEs found   Total flaws   Useful / Total × 100
Veracode – Tomcat 1         2            65             3.076
Veracode – Tomcat 2         1            69             1.449
Armorize – Tomcat 1         3          5064             0.059
Armorize – Tomcat 2         1          5970             0.016

Signal-density ratio, Veracode / Armorize: 51.9 (Tomcat 1), 86.5 (Tomcat 2)
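As a sanity check on the table above, here is a minimal sketch in Python (the numbers come straight from the slide; the language choice is mine) that recomputes the useful-signal percentages and the Veracode/Armorize ratios:

```python
# Recompute the slide's "useful / total" noise metric.
# Each entry: (CVEs found, total flaws reported), taken from the slide.
results = {
    "Veracode - Tomcat 1": (2, 65),
    "Veracode - Tomcat 2": (1, 69),
    "Armorize - Tomcat 1": (3, 5064),
    "Armorize - Tomcat 2": (1, 5970),
}

useful = {name: cves / total * 100 for name, (cves, total) in results.items()}
for name, pct in useful.items():
    print(f"{name}: {pct:.3f}% useful signal")
# -> 3.077, 1.449, 0.059, 0.017 (the slide truncates rather than rounds)

# How much denser is the useful signal in Veracode's smaller result set?
print(useful["Veracode - Tomcat 1"] / useful["Armorize - Tomcat 1"])  # ~51.9
print(useful["Veracode - Tomcat 2"] / useful["Armorize - Tomcat 2"])  # ~86.5
```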

6 Lab Measurement Capabilities
 Synthetic test suites
 – Publicly available test suites such as NIST SAMATE.
 – Developed a complex synthetic test suite by taking the simple SAMATE cases and adding control flow and data flow complexity.
 – 23,000 test programs for Java, Blackberry, .NET, and C/C++ on Windows, Windows Mobile, Solaris, and Linux.
 – Defects in these test applications are annotated in our test database.
 – Each defect is labeled with a CWE ID and a VCID, our own internal sub-categorization of CWEs (a sketch of such an annotation record follows below).
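The slide does not show the annotation format, so the following is a purely hypothetical sketch of a per-defect record in such a test database; the class and field names are illustrative, not Veracode's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DefectAnnotation:
    """One known defect seeded in a synthetic test program (hypothetical schema)."""
    test_program: str   # which test program the defect lives in
    file: str           # source file containing the defect
    line: int           # source line of the defect
    cwe_id: int         # CWE category, e.g. 89 for SQL injection
    vcid: str           # internal sub-categorization of the CWE
    language: str       # "Java", ".NET", "C/C++", ...
    platform: str       # "Windows", "Solaris", "Linux", ...

# Example: a seeded SQL injection hidden behind added control-flow complexity.
known = DefectAnnotation(
    test_program="samate_cwe89_cfg_depth3",
    file="src/Login.java", line=42,
    cwe_id=89, vcid="89-A", language="Java", platform="Linux",
)
```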

7 Lab Measurement Capabilities
 Real-world test suite
 – Baselined real-world applications by performing a manual code review of the source code and annotating the security defects with CWE and VCID.
 – Micro-baselined real-world applications by using publicly disclosed vulnerability information to identify the location of each security defect in the code, then annotating it with CWE and VCID.
 – 300 test programs for Java, Blackberry, .NET, and C/C++ on Windows, Windows Mobile, Solaris, and Linux.
 – Annotations are entered into our test database.
 – SATE gave us 135 flaws, comprising 63 CVEs, to compare against what we found in our assessment; SATE provided a file name and line number for each flaw. These are being added to the real-world suite for future testing.

8 Lab Measurement Capabilities
 Test execution
 – We run the full test suite across 20+ test machines; a complete run takes 48 hours. The results of our automated analysis are stored in our test database.
 Reporting
 – Reports compare the annotated “known” defects to our analysis results. An undetected defect is counted as a false negative; a spurious detection is counted as a false positive.
 – We measure false positive and false negative rates for each CWE category and for each internal Veracode sub-category, and trend the results over time. A developer can drill down into a test case’s details, down to the source line.
 – Diffs against our last release isolate false positive and false negative regressions (a sketch of this scoring and diffing logic follows below).
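A minimal sketch of the scoring and release-diff logic described above, under the assumption that a defect is matched on a (file, line, CWE) tuple; this is an illustration, not Veracode's implementation:

```python
from collections import Counter

def score(known, found):
    """known/found: sets of (file, line, cwe_id) tuples.
    A missed known defect is a false negative; a spurious find is a false positive."""
    fn_by_cwe = Counter(cwe for (_f, _l, cwe) in known - found)  # undetected defects
    fp_by_cwe = Counter(cwe for (_f, _l, cwe) in found - known)  # spurious detections
    return fn_by_cwe, fp_by_cwe

def regressions(prev, curr):
    """Diff two releases' (fn, fp) counters to isolate FP/FN regressions."""
    prev_fn, prev_fp = prev
    curr_fn, curr_fp = curr
    fn_regress = {c: n for c, n in curr_fn.items() if n > prev_fn.get(c, 0)}
    fp_regress = {c: n for c, n in curr_fp.items() if n > prev_fp.get(c, 0)}
    return fn_regress, fp_regress

# Tiny usage example with one annotated defect and one spurious finding:
known = {("src/Login.java", 42, 89)}
found = {("src/Login.java", 42, 89), ("src/Util.java", 7, 78)}
print(score(known, found))  # (Counter(), Counter({78: 1})) -> no FN, one FP
```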

9 Production Measurement Capabilities
 As a SaaS provider, we can measure how well we perform on every customer application for FALSE POSITIVES.
 Quality analysts inspect the output of each job to make sure the application was modeled correctly and the result set is within normal bounds.
 If necessary, they inspect down to the flaw level and mark individual results as false positives. This false positive data is tracked by CWE category.
 In this way, any deviation from Veracode’s 15% false positive SLO can be addressed immediately, and test cases can be developed that engineering uses to make the fix a permanent quality improvement (a sketch of such an SLO check follows below).
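As an illustration of the SLO check described above (the 15% threshold is from the slide; the function and data shapes are assumptions, not Veracode's tooling):

```python
FP_SLO = 0.15  # false positive service-level objective from the slide

def slo_violations(marked_fp_by_cwe, total_by_cwe):
    """Return CWE categories whose analyst-marked FP rate exceeds the SLO.

    marked_fp_by_cwe: {cwe_id: count of results analysts marked false positive}
    total_by_cwe:     {cwe_id: total results reported in that category}
    """
    violations = {}
    for cwe, total in total_by_cwe.items():
        if total == 0:
            continue
        rate = marked_fp_by_cwe.get(cwe, 0) / total
        if rate > FP_SLO:
            violations[cwe] = rate
    return violations

# Each violating category becomes a lab test case for a permanent fix.
print(slo_violations({89: 4, 79: 1}, {89: 20, 79: 30}))  # {89: 0.2}
```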

10 Continuous Improvement Process
 Veracode’s analysis accuracy improves with every release of our analysis engine.
 We release on a six-week schedule.
 Each release contains improvements gathered from lab and production test results.
 Our policy is not to trade off false negatives in order to meet false positive accuracy targets or to support new features. Our extensive test suite lets us enforce a no-false-negative-regression rule from release to release (a sketch of such a release gate follows below).
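A hypothetical sketch of that no-false-negative-regression gate; `load_detected` and the release labels are placeholders, not real Veracode tooling:

```python
def fn_regression_gate(detected_prev: set, detected_curr: set) -> bool:
    """Pass only if every known defect the previous engine release detected
    is still detected by the candidate release."""
    newly_missed = detected_prev - detected_curr
    for defect in sorted(newly_missed):
        print(f"FN regression: {defect}")
    return not newly_missed

# Run against the full annotated suite before shipping, e.g.:
# assert fn_regression_gate(load_detected("previous"), load_detected("candidate"))
```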

11 Questions?
cwysopal@veracode.com
@WeldPond on Twitter

