1 Software Quality Metrics Ch 4 in Kan Steve Chenoweth, RHIT What do you measure?

Presentation transcript:

1 Software Quality Metrics Ch 4 in Kan Steve Chenoweth, RHIT What do you measure?

2 How do we know if it’s “quality”?
– Mean time to failure
– Defect density
– Customer problems
– Customer satisfaction
Do you see “quality” here?

3 ANSI definitions
– An error is a human mistake that results in incorrect software.
– The resulting fault is an accidental condition that causes a unit of the system to fail to function as required.
– A defect is an anomaly in a product.
– A failure occurs when a functional unit of a software-related system can no longer perform its required function, or cannot perform it within its specified limits.

4 Can we compare? Defect rate for product 1 vs. defect rate for product 2? We need to “operationalize” the two numbers:
– How many “opportunities for error” (OPEs) during a specified time?
– Often we have to compare “failures” instead, and infer defects.
– Time frames need to be similar: there are lots more defects right after release.
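The normalization the slide describes can be sketched in a few lines. This is an illustrative example only: the product names, counts, and the 12-month window are hypothetical, and it uses KLOC as the OPE denominator and rescales both products to the same observation window before comparing.

```python
def defect_rate(defects_found, kloc, months_observed, window_months=12):
    """Defects per KLOC, rescaled to a common observation window.

    Raw defect counts can't be compared directly; we need a common
    denominator (here, KLOC) and a common time frame (here, 12 months).
    """
    if kloc <= 0 or months_observed <= 0:
        raise ValueError("need a positive size and observation period")
    return (defects_found / kloc) * (window_months / months_observed)

# Hypothetical products:
#   Product 1: 120 defects in 80 KLOC, observed for 12 months
#   Product 2:  90 defects in 40 KLOC, observed for only 6 months
r1 = defect_rate(120, 80, 12)  # 1.5 defects/KLOC per year
r2 = defect_rate(90, 40, 6)    # 4.5 defects/KLOC per year
```

Note the trap the slide warns about: product 2’s 6-month window falls right after release, when defect arrivals are highest, so even this rescaling can overstate its rate.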

5 Can we even compare LOC? Are LOC like IQ points – people use them just because they are handy? Does coding style count, etc.? “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you are as clever as you can be when you write it, how will you ever debug it?” ―Brian Kernighan

6 How about this question? In the next release of a product, what do you count?
– All lines shipped to each customer?
– Just the lines changed?
– Something in between?
“A whole nuther way” to consider what counts…

7 The customer doesn’t care! They want a decreasing defect rate, over time. They’ll get spoiled by a good release that had little new stuff in it.

8 How about function points? Another way to count “opportunities for error” (OPEs). Uses a weighted total of 5 major components:
– Number of external inputs × 4
– Number of external outputs × 5
– Number of internal files × 10
– Number of external interface files × 7
– Number of external inquiries × 4
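The weighted total above is just arithmetic, sketched below with hypothetical component counts. These are the “average” complexity weights; a real function point count assigns low/average/high weights per item and then applies a value adjustment factor, which this sketch omits.

```python
# Average-complexity weights from the slide (unadjusted function points).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "internal_files": 10,
    "external_interface_files": 7,
    "external_inquiries": 4,
}

def unadjusted_function_points(counts):
    """Weighted total of the five major components."""
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

# Hypothetical system:
fp = unadjusted_function_points({
    "external_inputs": 20,          # 20 * 4  =  80
    "external_outputs": 15,         # 15 * 5  =  75
    "internal_files": 8,            #  8 * 10 =  80
    "external_interface_files": 4,  #  4 * 7  =  28
    "external_inquiries": 12,       # 12 * 4  =  48
})                                  # total   = 311
```

Defects per function point can then replace defects per KLOC, sidestepping the LOC-counting questions from the previous slides.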

9 Problems per user-month (PUM) PUM = problems reported / total license-months. Usually measured after release.
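The PUM formula is straightforward; the license and problem counts below are made up for illustration.

```python
def pum(problems_reported, licenses, months):
    """Problems per user-month: reported problems over total license-months."""
    return problems_reported / (licenses * months)

# Hypothetical: 250 problems reported against 1,000 licenses over 12 months
rate = pum(250, 1000, 12)  # about 0.021 problems per user-month
```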

10 Customer satisfaction metrics Often a 5-point scale. Can be overall, or for specific qualities. HP uses “FURPS”:
– Functionality
– Usability
– Reliability
– Performance
– Service

11 Customer sat Can be expressed to the customer as a question, in different ways:
– Completely satisfied?
– Satisfied? (the most commonly asked for)
– Dissatisfied?
– Neutral or worse?

12 Relative scope of the 3 metrics These are subset relationships: defects are a subset of customer-reported problems, which in turn are one input to overall customer satisfaction.

13 Study of quality processes The model comes from machine testing. We look at the defect arrival pattern. Good: arrivals stabilize at a low level!

14 In software, we can track this in all phases Moving from phase to phase also yields defect removal rates. Which of these patterns is better?

15 Looking at it in a good way… “Phase effectiveness” Tall bars = low number of defects escaped

16 Maintenance – worth measuring Typical metrics, looked at carefully:
– Fix backlog and backlog management index
– Fix response time and fix responsiveness
– Percent delinquent fixes
– Fix quality

17 Fix backlog Open versus closed problems Backlog management index by month
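The backlog management index (BMI) compares problems closed against problem arrivals in the same month. The monthly counts below are hypothetical; the formula is the usual ratio-times-100 form.

```python
def backlog_management_index(closed_this_month, opened_this_month):
    """BMI = (problems closed / problem arrivals) * 100.

    Above 100: the backlog shrank this month. Below 100: it grew.
    """
    return 100.0 * closed_this_month / opened_this_month

# Hypothetical month: 45 problems closed against 40 new arrivals
bmi = backlog_management_index(45, 40)  # 112.5 -> backlog went down
```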

18 Fix response time / responsiveness Mean time of all problems from open to closed. Short response times are believed to drive customer satisfaction. Closely related: percent “delinquent” fixes.
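A fix is “delinquent” when its open-to-close time exceeds the response-time criterion for its severity. The severity criteria and fix records below are hypothetical; only the percentage formula comes from the slide’s metric.

```python
# Hypothetical response-time criteria: max allowed days open, by severity.
CRITERIA_DAYS = {1: 7, 2: 30, 3: 90}

def percent_delinquent(fixes):
    """fixes: list of (severity, days_open) tuples.

    Returns the percentage of fixes that missed their severity's deadline.
    """
    delinquent = sum(1 for sev, days in fixes if days > CRITERIA_DAYS[sev])
    return 100.0 * delinquent / len(fixes)

# Five hypothetical fixes; two miss their deadlines (10 > 7, 45 > 30).
fixes = [(1, 5), (1, 10), (2, 20), (2, 45), (3, 60)]
pct = percent_delinquent(fixes)  # 40.0
```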

19 Fix quality Number of defective fixes:
– Didn’t fix the reported problem, or
– Fixed it but injected a new defect.
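Fix quality is usually reported as the percentage of defective fixes among all fixes delivered in a period. The counts below are hypothetical.

```python
def percent_defective_fixes(total_fixes, defective_fixes):
    """Share of delivered fixes that missed the problem or injected a new defect."""
    return 100.0 * defective_fixes / total_fixes

# Hypothetical quarter: 6 of 200 delivered fixes turned out defective
q = percent_defective_fixes(200, 6)  # 3.0 percent
```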

20 Company “metrics programs” OK, we all have different ones!
Motorola, for example:
– Includes “SEA” – Schedule Estimation Accuracy
– Etc.
HP:
– Average fixed defects/working day
– Etc.
IBM Rochester:
– Fix response time
– Etc.

21 Collecting data
– Expensive
– Needs consistency
– Should be targeted
– Often uses forms and checklists
See pp.

22 So, what do you use?
– How formal are your measurements?
– How consistently applied?
– How much time does it take?
– Who reviews the data?
– Who makes process changes?
See Kan’s “Recommendations for small organizations,” pp.