
1 A Comparative Analysis of the Efficiency of Change Metrics and Static Code Attributes for Defect Prediction
Raimund Moser, Witold Pedrycz, Giancarlo Succi
Free University of Bolzano-Bozen
University of Alberta

2 Defect Prediction
Defect prediction supports quality standards and customer satisfaction.
Questions:
1. Which metrics are good defect predictors?
2. Which models should be used?
3. How accurate are those models?
4. How much does it cost, and what are the benefits?

3 Approaches for Defect Prediction
1. Product-centric: measures extracted from the static/dynamic structure of source code, design documents, and design requirements.
2. Process-centric: the change history of source files (number or size of modifications, age of a file), changes in the team structure, testing effort, technology, and other human factors related to software defects.
3. Combination of both.

4 Previous Work
Two aspects of defect prediction:
The relationship between software defects and code metrics: no agreed answer, and no cost-sensitive prediction.
The impact of the software process on the defectiveness of software.

5 Questions Addressed by This Work
1. Which metrics are good defect predictors? Are change metrics more useful, and which change metrics are good?
2. Which models should be used?
3. How accurate are those models?
4. How much does it cost, and what are the benefits? How can cost-sensitive analysis be used?
The target is not how many defects are present in a subsystem, but whether a given source file is defective.

6 Outline
Experimental Set-Up
Assessing Classification Accuracy
Classification Accuracy Results
Cost-Sensitive Classification
Cost-Sensitive Defect Prediction
Experiment Using a Cost Factor of 5

7 Data & Experimental Set-Up
Public data set from the Eclipse CVS repository (releases 2.0, 2.1, 3.0) by Zimmermann et al.
18 change metrics concerning the change history of files
31 static code attribute metrics that Zimmermann used at the file level (with correlation analysis, logistic regression, and ranking analysis)
A loading sketch follows.
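Below is a minimal loading sketch in Python, assuming the per-file metrics and a post-release defect count have already been exported to a CSV. The file name and column names are hypothetical, not the dataset's actual format.

```python
# A minimal loading sketch; "eclipse_2_0_files.csv" and its columns are
# hypothetical placeholders for one release of the Zimmermann data set.
import pandas as pd

df = pd.read_csv("eclipse_2_0_files.csv")
y = (df["post_release_defects"] > 0).astype(int)  # 1 = defect-prone file
```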

8 One Possible Proposal of Change Metrics
Examples of the proposed metrics:
REFACTORINGS: renaming or moving of software elements
MAX_CHANGESET: the number of files that have been committed together with file x
AGE: age of a file in weeks, counted from the release date back to its first appearance
Two of these are derived from a commit log in the sketch below.
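A sketch of deriving MAX_CHANGESET and AGE from a flat commit log. The log schema (columns file, commit_id, date), the CSV export, and the release date used here are assumptions for illustration; the paper extracts its metrics from the Eclipse CVS repository.

```python
import pandas as pd

# Hypothetical flat export of the CVS history: one row per (commit, file).
log = pd.read_csv("commit_log.csv", parse_dates=["date"])
release_date = pd.Timestamp("2002-06-27")  # illustrative release date

# MAX_CHANGESET: largest number of files ever committed together with file x.
changeset_size = log.groupby("commit_id")["file"].transform("size")
max_changeset = log.assign(cs=changeset_size).groupby("file")["cs"].max()

# AGE: age of a file in weeks at the release date, counted back to its
# first appearance in the repository.
first_seen = log.groupby("file")["date"].min()
age_weeks = (release_date - first_seen).dt.days / 7
```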

9 Experiments
Build three models for predicting the presence or absence of defects in files (a modeling sketch follows):
1. Change Model: uses the proposed change metrics
2. Code Model: uses static code metrics
3. Combined Model: uses both types of metrics
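A modeling sketch, reusing df and y from the loading sketch above. scikit-learn's CART decision tree stands in for the Weka J48 learner the paper uses, and the two column lists are illustrative placeholders for the 18 change metrics and the 31 static code attributes.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

CHANGE_METRICS = ["REVISIONS", "BUGFIXES", "REFACTORINGS", "MAX_CHANGESET"]  # subset, illustrative
CODE_METRICS = ["LOC", "METHODS", "COMPLEXITY"]                              # placeholders

feature_sets = {
    "change":   df[CHANGE_METRICS],
    "code":     df[CODE_METRICS],
    "combined": df[CHANGE_METRICS + CODE_METRICS],
}
for name, X in feature_sets.items():
    # 10-fold cross-validated accuracy for each of the three models.
    acc = cross_val_score(DecisionTreeClassifier(), X, y, cv=10).mean()
    print(f"{name} model: {acc:.3f} accuracy")
```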

10 Outline
Experimental Set-Up
Assessing Classification Accuracy
Classification Accuracy Results
Cost-Sensitive Classification
Cost-Sensitive Defect Prediction
Experiment Using a Cost Factor of 5

11 Assessing Classification Accuracy: Results
Classification accuracy is assessed per model by the fraction of correctly classified files, the recall of the defect-prone class, and the false-positive rate, as computed in the sketch below.
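The three figures can be computed from a confusion matrix as follows; y_true and y_pred are assumed to be 0/1 arrays with 1 = defect-prone.

```python
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction classified correctly
recall = tp / (tp + fn)                     # defective files actually caught
fp_rate = fp / (fp + tn)                    # clean files wrongly flagged
```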

12 Classification Accuracy Results
By analyzing the decision trees:
Defect-free files: large MAX_CHANGESET or low REVISIONS, or smaller MAX_CHANGESET combined with low REVISIONS and REFACTORINGS
Defect-prone files: a high number of BUGFIXES
A sketch of inspecting the learned rules follows.
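A sketch of how such rules can be read off a fitted tree: scikit-learn's export_text prints the learned threshold on each metric. The data and feature list reuse the modeling sketch above; the printed rules are whatever this tree learns, not the paper's actual trees.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

clf = DecisionTreeClassifier().fit(df[CHANGE_METRICS], y)
# Prints an indented rule list, e.g. "|--- MAX_CHANGESET <= ..." per split.
print(export_text(clf, feature_names=CHANGE_METRICS))
```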

13 Outline
Experimental Set-Up
Assessing Classification Accuracy
Classification Accuracy Results
Cost-Sensitive Classification
Cost-Sensitive Defect Prediction
Experiment Using a Cost Factor of 5

14 Cost-Sensitive Classification
Cost-sensitive classification associates different costs with the different errors a model makes, and the learner is trained to minimize the expected cost.
A cost factor λ > 1 means false negatives incur higher costs than false positives: it is more costly to fix an undetected defect in the post-release cycle than to inspect a defect-free file. A sketch follows.
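A minimal sketch: weighting the defect-prone class by λ is one way to approximate the cost-matrix training the paper performs in Weka, making each false negative cost λ times as much as a false positive. Data and features reuse the sketches above.

```python
from sklearn.tree import DecisionTreeClassifier

LAMBDA = 5  # cost of missing a defective file, relative to a false alarm
clf = DecisionTreeClassifier(class_weight={0: 1, 1: LAMBDA})
clf.fit(df[CHANGE_METRICS], y)
```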

15 Cost-Sensitive Defect Prediction
Results for the J48 learner, release 2.0.
A heuristic stops increasing the recall once the false-positive rate would reach 30% (FP < 30%); this yields a cost factor of λ = 5, as sketched below.
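A sketch of that heuristic, assuming a simple train/validation split (the paper uses cross-validation): raise the cost factor, which raises recall, only while the validation false-positive rate stays under 30%.

```python
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_tr, X_val, y_tr, y_val = train_test_split(df[CHANGE_METRICS], y, random_state=0)
chosen = 1
for lam in range(1, 11):
    clf = DecisionTreeClassifier(class_weight={0: 1, 1: lam}).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_val, clf.predict(X_val)).ravel()
    if fp / (fp + tn) >= 0.30:
        break  # further recall gains are no longer worth the false alarms
    chosen = lam
print("chosen cost factor:", chosen)  # the paper settles on λ = 5
```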

16 Experiment Using a Cost Factor of 5
The null hypothesis that both kinds of predictors perform equally well is rejected: defect predictors based on change data outperform those based on static code attributes.

17 Limitations
Dependence on a specific environment
Conclusions are based on only three data miners
Choice of code and change metrics
Reliability of the data: the mapping between defects and locations in source code, and the extraction of code or change metrics from repositories

18 Conclusions
The 18 change metrics with the J48 learner and λ = 5 give accurate results for all 3 releases of the Eclipse project:
>75% of files correctly classified
>80% recall
<30% false-positive rate
Hence, the change metrics contain more discriminatory and meaningful information about the defect distribution than the source code itself.
Important change metrics:
Defect-prone files have high revision numbers and large bug-fixing activity.
Defect-free files are part of large CVS commits and have been refactored several times.

19 Future Research
Which information in change data is relevant for defect prediction?
How can this data be extracted automatically?

