Analysis of CK Metrics: “Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults” by Yuming Zhou and Hareton Leung


Analysis of CK Metrics “Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults” Yuming Zhou and Hareton Leung, Member, IEEE Computer Society IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 32, NO. 10, OCTOBER 2006

Renaat Verbruggen
Contribution of Paper 1
This paper makes a number of contributions. First, using a publicly available data set (NASA), the authors present new evidence of an association between OO design metrics and fault-proneness, providing valuable data in an important area for which little experimental data is available.

Contribution of Paper 2
Second, they validate the association between OO design metrics and the fault-proneness of classes across fault severity. To the best of their knowledge, despite its importance, there has been no previous research on this question.

Contribution of Paper 3
Third, on the methodological front, the fault-proneness of classes is analysed not only with the familiar method of logistic regression but also with machine learning techniques.
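The modelling idea on this slide can be sketched as follows. This is a minimal illustration, not the paper's actual model: the CK metric values and fault labels below are synthetic, and the fit is a plain gradient-descent logistic regression.

```python
# Hypothetical sketch of predicting fault-proneness from the six CK
# metrics with logistic regression. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Columns: WMC, CBO, RFC, LCOM, DIT, NOC (synthetic per-class values)
X = rng.poisson(lam=[10, 8, 25, 5, 2, 1], size=(n, 6)).astype(float)
# Assumed ground truth: fault-proneness rises with WMC and CBO
true_logit = 0.15 * X[:, 0] + 0.20 * X[:, 1] - 4.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Standardize, add an intercept, and fit by gradient descent on the
# negative log-likelihood of the logistic model
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.hstack([np.ones((n, 1)), Xs])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted fault probability
    w -= 0.5 * (Xb.T @ (p - y)) / n     # gradient step

print(dict(zip(["WMC", "CBO", "RFC", "LCOM", "DIT", "NOC"], w[1:].round(2))))
```

On this synthetic data the fitted coefficients for WMC and CBO come out positive, mirroring the kind of metric/fault-proneness association the paper tests.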

Main Findings
This study makes two main findings. First, most of the OO design metrics in this study are statistically related to the fault-proneness of classes across fault severity. Second, the predictive ability of these metrics depends strongly on the severity of faults: the design metrics are better predictors of low severity faults in classes than of high severity faults.

Descriptive Statistics of the Classes

Validation Results of the Hypotheses

Conclusions 1
The CBO, WMC, RFC, and LCOM metrics are statistically significant across fault severity, while DIT is not significant at any fault severity. NOC is statistically significant with regard to ungraded/low severity faults, but in the inverse direction: a class with a large NOC has low fault-proneness. The significance of NOC with regard to high severity faults cannot be tested because of the quasi-complete separation problem, in which the metric (almost) perfectly separates the faulty from the non-faulty classes in the sample, so the logistic regression coefficient cannot be estimated. However, the results of the machine learning methods indicate that the fault-proneness prediction usefulness of NOC with regard to high severity faults is poor and, in this respect, similar to that of DIT.
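The separation problem mentioned above can be demonstrated on a toy example. The data here is hypothetical (not the paper's); for simplicity it shows complete rather than quasi-complete separation, but the failure mode is the same: when a predictor perfectly separates the two classes, the maximum likelihood coefficient does not exist, and its estimate diverges instead of converging.

```python
# Toy illustration of (quasi-)complete separation in logistic regression:
# NOC perfectly separates faulty from fault-free classes (hypothetical data,
# with the inverse direction noted on the slide), so the coefficient
# estimate grows without bound as fitting continues.
import numpy as np

noc = np.array([0, 0, 0, 0, 3, 4, 5, 6], dtype=float)
faulty = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

X = np.column_stack([np.ones_like(noc), noc])   # intercept + NOC
w = np.zeros(2)
coef_sizes = []
for _ in range(3):                  # three checkpoints of 2000 steps each
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.5 * X.T @ (p - faulty) / len(faulty)
    coef_sizes.append(abs(w[1]))
# |coefficient| keeps growing at each checkpoint instead of settling
print(coef_sizes)
```

Statistical packages typically report this as a separation warning or a non-convergent fit, which is why the paper cannot test NOC's significance for high severity faults.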

Conclusions 2
The fault-proneness prediction capabilities of these metrics differ greatly depending on the fault severity used. When classifying classes as fault-prone or not fault-prone with respect to ungraded/low severity faults, the logistic regression and machine learning models based on these metrics achieve performance comparable to previous studies. When ranking classes by fault-proneness with respect to low severity faults, the logistic regression model based on these metrics outperforms the simple model based on class size. In both cases, however, the prediction usefulness of these metrics with regard to high severity faults is limited.
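The ranking comparison described above can be made concrete with one common evaluation: rank the classes by a score and measure what fraction of all faults falls in the top 20% of the ranking. The sketch below uses entirely synthetic data and a hypothetical helper `faults_in_top`; it compares a size-only ranking against an oracle ranking by true fault counts, which is the upper bound any metric-based model is measured against.

```python
# Hedged sketch of a fault-proneness ranking evaluation on synthetic data:
# fraction of all faults captured in the top 20% of a ranking.
import numpy as np

def faults_in_top(scores, faults, fraction=0.2):
    order = np.argsort(scores)[::-1]        # highest score first
    k = int(len(scores) * fraction)
    return faults[order[:k]].sum() / faults.sum()

rng = np.random.default_rng(1)
size = rng.integers(50, 2000, size=100).astype(float)   # LOC per class
faults = rng.poisson(size / 500.0)                      # synthetic fault counts

top_by_size = faults_in_top(size, faults)               # size-based ranking
top_oracle = faults_in_top(faults.astype(float), faults)  # perfect ranking
print(top_by_size, top_oracle)
```

Ranking by the true fault counts is optimal by construction, so a metric-based model "outperforming the simple model based on class size" means its captured-fault fraction lands between `top_by_size` and `top_oracle`.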