
Slide 1: Automated Adaptive Ranking and Filtering of Static Analysis Alerts
Sarah Heckman, Laurie Williams
ISSRE 2006, November 10, 2006

Slide 2: Contents
– Motivation
– Research Objective
– AWARE Ranking and Filtering
  – Alert Ranking Factors
– Experiment
– Progress & Future Work
– Conclusions

Slide 3: Motivation
– Programmers tend to make the same mistakes
– Static analysis tools are useful for finding these recurring mistakes
– However, static analysis tools have a high rate of false positives

Slide 4: Research Objective
To improve the correctness and security of a system by continuously, automatically, and efficiently providing adaptively ranked and filtered static analysis alerts to software engineers during development.

Slide 5: AWARE
– Automated Warning Application for Reliability Engineering
– Ranks static analysis alerts by the probability an alert is a true fault
– Ranking is adjusted by:
  – Filtering alerts
  – Fixing alerts
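The adaptive part of this ranking is driven by developer feedback: filtering an alert treats it as a false positive, fixing it treats it as a true positive, and that signal feeds back into the ordering of the remaining alerts. The slides do not give AWARE's actual update rule, so the sketch below is only an illustration under that assumption; the class, its methods, and the smoothed per-type accuracy estimate are all hypothetical.

```java
import java.util.*;

/** Illustrative sketch of feedback-driven alert ranking (hypothetical names, not AWARE's API). */
class AdaptiveRanker {
    // Per alert type: [0] = alerts fixed (treated as true positives), [1] = alerts filtered (false positives).
    private final Map<String, int[]> feedback = new HashMap<>();

    /** Smoothed estimate of how often alerts of this type turn out to be real faults. */
    double typeAccuracy(String alertType) {
        int[] f = feedback.getOrDefault(alertType, new int[]{0, 0});
        return (f[0] + 1.0) / (f[0] + f[1] + 2.0);
    }

    /** Developer fixed an alert of this type: raise the type's accuracy estimate. */
    void recordFix(String alertType) {
        feedback.computeIfAbsent(alertType, t -> new int[2])[0]++;
    }

    /** Developer filtered (suppressed) an alert of this type: lower the type's accuracy estimate. */
    void recordFilter(String alertType) {
        feedback.computeIfAbsent(alertType, t -> new int[2])[1]++;
    }

    /** Order the open alerts so the types most likely to be true faults come first. */
    List<String> rank(List<String> openAlertTypes) {
        List<String> ranked = new ArrayList<>(openAlertTypes);
        ranked.sort(Comparator.comparingDouble((String t) -> typeAccuracy(t)).reversed());
        return ranked;
    }
}
```

With a rule of this kind, filtering several alerts of one type pushes the remaining alerts of that type down the list, while fixing one pushes its siblings up, which is the adaptive behaviour the slide describes.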

Slide 6: Alert Ranking Factors
– Type Accuracy: Categorization of alerts based on observed accuracy of alert type
– Code Locality: Alerts reported by static analysis tools cluster by locality
– Generated Test Failure: Failing test cases derived from static analysis alerts provide a concrete fault condition
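The slide names the three factors but not how AWARE combines them into one rank. Purely as an assumption for illustration, the sketch below folds the three signals into a single score with a weighted sum; the weights, the [0, 1] scaling of each factor, and the class and method names are invented for this example, not taken from the tool.

```java
/** Hypothetical combination of the three ranking factors into one score (not AWARE's actual formula). */
class AlertScore {
    // Illustrative weights only; the real weighting is not stated on the slide.
    static final double W_TYPE_ACCURACY = 0.5;
    static final double W_CODE_LOCALITY = 0.3;
    static final double W_TEST_FAILURE  = 0.2;

    /**
     * @param typeAccuracy  observed accuracy of this alert's type, scaled to [0, 1]
     * @param localityScore how strongly the alert clusters with nearby alerts, scaled to [0, 1]
     * @param testFailed    true if a test case generated from the alert fails, i.e. a concrete fault condition
     */
    static double score(double typeAccuracy, double localityScore, boolean testFailed) {
        return W_TYPE_ACCURACY * typeAccuracy
             + W_CODE_LOCALITY * localityScore
             + W_TEST_FAILURE  * (testFailed ? 1.0 : 0.0);
    }
}
```

A higher score would place the alert nearer the top of the ranked list.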

Slide 7: Experiment (1)
Questions to Investigate:
– Does AWARE's initial ranking perform better than a random ordering of alerts for various initial TA (type accuracy) values?
  – Number of initial false positives
  – Average number of false positives between true positives
– How many false positives must be filtered before all of the true positives reach the top of the ranking?
  – Number of alerts filtered before all true positives reach the top of the list
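Both measures can be computed directly from a ranked list of alerts once each alert has been labelled as a true or false positive. The sketch below is one straightforward way to do that; the class and method names are illustrative and not taken from the study's tooling.

```java
import java.util.List;

/** Metrics over a ranked alert list, where true = true positive (TP) and false = false positive (FP). */
class RankingMetrics {

    /** Number of false positives that appear before the first true positive. */
    static int initialFalsePositives(List<Boolean> ranked) {
        int count = 0;
        for (boolean isTruePositive : ranked) {
            if (isTruePositive) {
                break;
            }
            count++;
        }
        return count;
    }

    /** Average number of false positives between consecutive true positives (0 if fewer than two TPs). */
    static double averageFalsePositivesBetweenTruePositives(List<Boolean> ranked) {
        int gaps = 0;            // number of TP-to-TP gaps seen
        int falsePositives = 0;  // false positives inside those gaps
        int current = 0;         // false positives since the last TP
        boolean seenTruePositive = false;
        for (boolean isTruePositive : ranked) {
            if (isTruePositive) {
                if (seenTruePositive) {
                    gaps++;
                    falsePositives += current;
                }
                seenTruePositive = true;
                current = 0;
            } else {
                current++;
            }
        }
        return gaps == 0 ? 0.0 : (double) falsePositives / gaps;
    }
}
```

For example, a ranking labelled FP, TP, FP, FP, TP has one initial false positive and an average of two false positives between true positives.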

Slide 8: Experiment (2)
RealEstate Example:
– 775 uncommented, non-blank LOC
– Analyzed without annotations
Check ‘n’ Crash Results:
– 28 alerts, 27 analyzed
– 2 alerts were true positives

Slide 9: Experimental Results and Limitations
– AWARE ranks TP alerts at the top of the list and has a lower average occurrence of FPs between TPs
– Between 11% and 25% of alerts required filtering before all TPs reached the top of the ranking
Limitations:
– Small sample size
– The initial ranking values for TA were unrealistic

Slide 10: Progress & Future Work
Current Work:
– Development of the AWARE tool for the Eclipse IDE and Java
– Use of AWARE in a graduate-level class
Future Work:
– Industrial case study
– Extend AWARE to gather alerts from C/C++ static analyzers
AWARE research site: http://agile.csc.ncsu.edu/aware

Slide 11: Questions?
Sarah Heckman: sarah_heckman@ncsu.edu

