Supporting Release Management & Quality Assurance for Object-Oriented Legacy Systems - Lionel C. Briand Visiting Professor Simula Research Labs.


4/5/2001. Project Simula, Telenor: Erik Arisholm, Valery Buzungu

Can we help plan and manage new releases of a legacy system?
Can we help focus V&V activities on high-risk parts of the legacy system?
Can we help predict the fault correction effort of a release after delivery?
Can we help assess the impact of process change?
Can we predict the risk associated with a change?
=> Use error and change data, code analysis

Initial Study
– Fault-proneness model to focus V&V, the first one in the context of a legacy OO system
– Realistic model evaluation
– Cost-effectiveness analysis

Vision
– Package in tool (tree maps)
– Deployment & training
– Update fault prediction models
– Perform changes
– Focused V&V
– Feedback: fault-prone components
– Corporate learning
– Each release: record change and fault correction data

COS and XRadar
COS – large telecom system, evolved over 5 years, developers, ~130K, Java, ~2000 application classes
XRadar
– Code structural metrics (e.g., control flow complexity)
– Code quality (duplications, violations, style errors)
– Change and fault correction data over releases 12 to 15.1 (5 releases) of COS
– # changes and # fault corrections per class

Class fault probability is a function of:
– Size, complexity, inheritance, and coupling of classes (XRadar, JHawk)
– Amount of change performed on classes
– Code quality in terms of violations, style errors, duplication
– Skills and experience
– Class and fault history
Interactions are also likely

Building & Assessing a Prediction Model
– Logistic regression
– Four releases (R1 to R4) with fault and change data
– Dependent variable: fault corrections in R3
– Explanatory variables: R2 measurements plus fault and change history from R1
– The model is applied to R4, using R3 measurements and R2 (history)
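The modeling step above can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the data is synthetic, the three predictors (size, coupling, past changes) are stand-ins for the XRadar/JHawk metrics, and logistic regression is fit by plain gradient ascent so no ML library is needed.

```python
import numpy as np

# Sketch of a fault-proneness model: logistic regression fit by gradient
# ascent on the log-likelihood. All data here is synthetic and illustrative.

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Return weights [intercept, w1, ...] for P(fault) = sigmoid(Xb @ w)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted fault probability
        w += lr * Xb.T @ (y - p) / len(y)      # log-likelihood gradient step
    return w

def predict_proba(X, w):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(1)
n = 300
# Illustrative per-class predictors (standardized): size, coupling, past changes
X = rng.normal(size=(n, 3))
true_logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

w = fit_logistic(X, y)            # fit on one release's data...
probs = predict_proba(X, w)       # ...then score classes of the next release
```

In the study's setup, `X` would come from release R2 measurements plus R1 history, `y` from fault corrections in R3, and the fitted model would then score R4.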

Data Analysis
– Explanatory variables are log-transformed: alleviates the outlier problem and helps account for interactions
– PCA
– Univariate analysis
– Multivariate analysis (stepwise), on a balanced modeling data set of 82 observations
– Cross-validation (R3)
– Cost-effectiveness (R4)
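The first two steps (log-transform, then PCA) can be sketched as follows. The metrics here are synthetic lognormal draws standing in for real code measures, which are similarly heavy-tailed; PCA is done directly on the correlation matrix with numpy.

```python
import numpy as np

# Sketch: log-transform skewed code metrics, then PCA on the correlation matrix.
rng = np.random.default_rng(0)
raw = rng.lognormal(mean=3.0, sigma=1.0, size=(100, 4))  # skewed "metrics"

logged = np.log(raw + 1.0)                                # tames outliers
z = (logged - logged.mean(axis=0)) / logged.std(axis=0)   # standardize
corr = (z.T @ z) / len(z)                                 # correlation matrix

eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # largest variance first
explained = eigvals[order] / eigvals.sum()   # variance explained per PC
loadings = eigvecs[:, order]                 # metric loadings on each PC
```

Inspecting `loadings` column by column is what yields groupings like the PC1–PC5 interpretation on the next slide.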

PCA Results
PC1: size, import coupling, violations, duplication, style errors, change counts
PC2: number of releases, structural change measures
PC3: cohesion
PC4: fan-in
PC5: class ancestors
…

Univariate Analysis Results
– PC1 measures are very significant
– Change counts and fault corrections in the previous release are also significant
– No inheritance or cohesion measure is significant, nor are historic variables

Multivariate Analysis
– History variables selected, and inheritance variables too. Due to interactions?
– Selected variables (table of p-values not fully transcribed): Ncss, Javadocs, Ccn, LCOM, HIER, INST, NSUP, NSUB, FaultCorrections, ChangeCount, n1FaultCorrections, n1ChangeCount (p = 0.0297)

Cross-validation (figure)

Cost-Effectiveness (figure)

Cost-Effectiveness, without history variables (figure)
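The cost-effectiveness charts compare, at each inspection cut-off, the cumulative fraction of faults caught against the cumulative fraction of code inspected when classes are ranked by predicted fault probability. A minimal sketch with made-up numbers (not COS data):

```python
import numpy as np

def cost_effectiveness(probs, faults, sizes):
    """Rank classes by predicted fault probability (descending) and return,
    for each cut-off, the cumulative fraction of code inspected and the
    cumulative fraction of faults caught."""
    order = np.argsort(probs)[::-1]
    cum_size = np.cumsum(sizes[order]) / sizes.sum()
    cum_faults = np.cumsum(faults[order]) / faults.sum()
    return cum_size, cum_faults

# Illustrative numbers only
probs  = np.array([0.9, 0.1, 0.7, 0.3])   # predicted fault probability
faults = np.array([3,   0,   2,   1  ])   # actual fault corrections
sizes  = np.array([100, 50,  80,  70 ])   # class size (e.g., LOC)

cum_size, cum_faults = cost_effectiveness(probs, faults, sizes)
# A model is cost-effective when cum_faults rises faster than cum_size,
# i.e., V&V effort spent on top-ranked classes catches a disproportionate
# share of the faults.
```

Size enters the comparison because, as the caveats slide notes, there is no change effort data, so class size serves as a surrogate for inspection cost.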

Caveats and Problems
– Assumption: most of the faults in release n are related to changes in release n-1 and, to a lesser extent, n-2
– It is difficult, in general, to collect data about cause-effect relationships between changes and faults
– No change effort data: size measures as a surrogate?
– Usually several concurrent version “streams” and merges are taking place
– No centralized defect tracking system

Conclusions
– CE analysis seems practical
– The model seems cost-effective (cost reduction ~29%)
– History variables are very important predictors
– Still much room for improvement
– Currently undertaking more extensive data collection through the configuration management system