Software Metrics/Quality Metrics

Software Metrics/Quality Metrics
Software "Quality" Metrics:
- Product
- Pre-Release and Post-Release Process
- Project
- Data Collection: product characteristics, project characteristics, process characteristics

Software Metrics
Product:
- All the deliverables; the focus has been on code, but we are interested in all artifacts
- Product metrics include concerns of complexity, performance, size, "quality," etc.
Pre-Release & Post-Release Processes:
- The focus has been on pre-release processes
- The easiest starting point for metrics: testing and the number of "bugs" found
- Process metrics are used to improve development and support activities
- Process metrics include defect removal effectiveness, problem-fix response time, etc.
Project:
- Cost, schedule, HR staffing and other resources, customer satisfaction
- Project metrics are used to improve productivity, cost, etc.
- Project metrics include cost (e.g., effort/size), speed (e.g., size/time), etc.
Project and process metrics are often intertwined; we will talk about this more (including function points).

Product Quality Metrics
What are all the deliverables?
- Code and help text
- Documentation (function, install, usage, etc., in requirements & design specifications)
- Education (set-up/configure, end-user, etc.)
- Test scenarios and test cases
Quality questions (mostly intrinsic to the product, but they affect external customer satisfaction):
- When/where does it fail?
- How often? How many? What is the defect rate?

GQM (one more time, from Basili)
A reminder on generating measurements: in coming up with metrics, think of GQM:
- What's the Goal?
- What's the Question?
- What's the Metric?
For example:
- Goal: improved quality
- Question: what is the post-release defect rate?
- Metric: number of problems found per user-month

Some Definitions, from Error to Failure
- Error: a human mistake that results in incorrect software (one or more faults or defects)
- Defect or Fault: a mistake in the software product that may or may not be encountered
- Problem: non-functioning behavior of the software as a result of a defect/fault in the product
Note that an error can cause one or more defects, and a defect can cause one or more problems. But a problem may never surface, even if there is a defect that was caused by a human error.

When/Where Do Product Failures Occur?
When and where are somewhat intertwined:
- Right away, e.g., fails at install
- Sometimes, e.g., fails at initialization/configuration
- Sometimes, e.g., fails at certain file accesses
Generalized metric: mean time to failure (MTTF); a small estimation sketch follows.
- Difficult to assess: what should the goal be (8 hours, 30 days, 6 months), or should we just say "lessen the failure rate"?
- Hard to test for and analyze (especially product education, documentation, etc.)
- Applies better to simple logic (e.g., stays up for z amount of time)
- Mean time to failure for an install problem should probably be close to 0
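A minimal sketch of estimating MTTF from observed failure timestamps; the data and function name are illustrative, not from the slides:

```python
# Minimal sketch: estimate MTTF as the mean gap between consecutive failures.
# The timestamps below are hypothetical cumulative run-hours at each failure.
failure_times = [5.0, 21.5, 40.0, 100.0]

def mean_time_to_failure(times):
    """Average inter-failure time; the first gap is measured from t = 0."""
    gaps = [t2 - t1 for t1, t2 in zip([0.0] + times[:-1], times)]
    return sum(gaps) / len(gaps)

print(mean_time_to_failure(failure_times))  # 25.0 hours
```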

Product Defects and Defect Rate
Most of these metrics have been asked in terms of code, but they should be more inclusive:
- Defect volume: how many defects (for the complete product, not just the code)
- Defect rate = defects / (opportunity for defect)
- Defects of all kinds, or by type (e.g., code, test cases, design, etc.)
- Defects by severity (not quite a rate; more a breakdown by category)
Opportunity for defect* by:
- Code: LOC, function points, modules
- Documentation: pages, diagrams
- Education or training: number of slides (material) or amount of time (delivery)
(* also used to assess volume)

Code Defect Opportunity (LOC)
"Problems" with using lines of code (LOC) as the opportunity:
- executable vs. non-executable lines (comments)
- test cases and scaffolding code
- data and file declarations
- physical lines vs. logical lines
- language differences (C, C++, assembler, Visual Basic, etc.)

Possible Code Defect Rate Metrics
Often used (a small computation sketch follows):
- Valid unique defects per line of executable and/or data code released (shipped)
  - IBM's total valid unique defects / KSSI (thousand shipped source instructions)
  - Total valid unique defects / KCSI (thousand changed source instructions, i.e., only changed code)
- Valid unique defects of "high severity" per line of executable and/or data code released (shipped)
Questions:
- What about all in-line comments; should they not count? These provide opportunity for defects too (especially for pre- and post-condition specifications).
- What about help text?
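A minimal sketch of the two defect-rate variants above; all numbers are hypothetical:

```python
# Defect rates per KSSI (thousand shipped source instructions) and per
# KCSI (thousand changed source instructions), per the slide's definitions.
shipped_loc = 120_000      # shipped source instructions (SSI)
changed_loc = 30_000       # new and changed source instructions (CSI)
valid_unique_defects = 45  # valid unique defects reported against the release

rate_per_kssi = valid_unique_defects / (shipped_loc / 1000)  # 0.375 defects/KSSI
rate_per_kcsi = valid_unique_defects / (changed_loc / 1000)  # 1.5 defects/KCSI
print(rate_per_kssi, rate_per_kcsi)
```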

Product Quality Metric (User View)
Defect rate is not as useful from the user's perspective.
What types of problems do users face?
- screen interface
- data reliability/validity
- functional completeness
- end-user education
- product stability (crashes)
- error messages and recovery
- inconsistencies in the handling of similar functionalities
How often are these types of defects encountered? Counted with MTTF? Does that mean more to users?
A possible metric is Problems per User-Month (PUM); a small sketch follows:
- the user-month count depends on the length of the period and the number of users (this takes some tracking effort)
Broader customer satisfaction issues:
- CUPRIMDSO: capability, usability, performance, reliability, etc. (IBM)
- FURPS: functionality, usability, reliability, etc. (HP)
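A minimal sketch of the PUM computation; the user and problem counts are hypothetical:

```python
# Problems per User-Month (PUM):
# PUM = problems reported in the period / (number of users x months in the period).
problems_reported = 18
users = 250
period_months = 3

pum = problems_reported / (users * period_months)
print(f"PUM = {pum:.4f}")  # 18 / 750 = 0.0240 problems per user-month
```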

Begin Function Point Separate Segment

Function Point (product size or complexity) Metric
- Often used to assess software complexity and/or size
- May be used as the "opportunity for defect" part of a defect rate
- Started by Albrecht of IBM in the late 70's
- Gained momentum in the 90's with IFPUG, as the software service industry looked for a metric
Function points do provide some advantages over LOC:
- language independence
- no need for the actual lines of code to do the counting
- takes many aspects of the software product into account
Some disadvantages include:
- a little complex to come up with the final number
- consistency (data reliability) sometimes varies by person

Function Point Metric via GQM
- Goal: measure the size (volume) of software
- Question: what is the size of a software product in terms of its data files and transactions?
- Metric: amount/difficulty of "functionalities" to represent size/volume; consider function points (defined in this lecture)
What kinds of validity problems might you encounter? "Construct": applicability; "predictive": relational; "content": coverage?

FP Utility
Where is FP used?
- Comparing software in a "normalized fashion," independent of operating system, language, etc.
- Benchmarking and prediction: size vs. cost, size vs. development schedule, size vs. defect rate
- Outsourcing negotiation

Methodology
1. Identify and classify the data (files/tables) and transactions
2. Evaluate the complexity level of each
3. Compute the initial function point count
4. Assess the range of other factors that may influence the computed count, and compute the final function point count

1) Identifying & Classifying the 5 "Basic Entities"
Data/Files:
- internally generated and stored data (logical files and tables)
- data maintained externally that requires an external interface to access (external interfaces)
Transactions:
- information or data entered into the system for transaction processing (inputs)
- information or data "leaving" the system, such as reports or feeds to another application (outputs)
- information or data displayed on the screen in response to a query (queries)
Note: what about "tough" algorithms and other function-oriented concerns? We handle those separately, in the 14 "degrees of influence."

2) Evaluating Complexity
Using a complexity table, each of the 5 basic entities is evaluated as low, average, or high.
The complexity table uses 3 attributes for its decisions:
- number of Record Element Types (RET), e.g., an employee record type or a student record type (the number of record types)
- number of unique attributes (fields), or Data Element Types (DET), per record; e.g., name, address, employee number, and hiring date would make 4 DETs for the employee file
- number of File Types Referenced (FTR), e.g., an external payroll record file that needs to be accessed

The 5 Basic Entity Types Use RET, DET, and FTR for Complexity Evaluation

For Logical Files and External Interfaces (DATA):

  # of RET   1-19 DET   20-50 DET   50+ DET
  1          Low        Low         Avg
  2-5        Low        Avg         High
  6+         Avg        High        High

For Input/Output/Query (TRANSACTIONS):

  # of FTR   1-4 DET    5-15 DET    16+ DET
  0-1        Low        Low         Avg
  2          Low        Avg         High
  3+         Avg        High        High

(A code sketch of this lookup follows the tables.)
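A minimal sketch of the lookup defined by the two tables above; the function names are illustrative:

```python
# Complexity lookup per the DATA and TRANSACTION tables above.

def data_complexity(ret, det):
    """Complexity for logical files / external interfaces (DATA)."""
    row = 0 if ret == 1 else (1 if ret <= 5 else 2)
    col = 0 if det <= 19 else (1 if det <= 50 else 2)
    table = [["low", "low", "avg"],
             ["low", "avg", "high"],
             ["avg", "high", "high"]]
    return table[row][col]

def transaction_complexity(ftr, det):
    """Complexity for inputs / outputs / queries (TRANSACTIONS)."""
    row = 0 if ftr <= 1 else (1 if ftr == 2 else 2)
    col = 0 if det <= 4 else (1 if det <= 15 else 2)
    table = [["low", "low", "avg"],
             ["low", "avg", "high"],
             ["avg", "high", "high"]]
    return table[row][col]

# Matches the employee example on the next slide:
print(data_complexity(1, 55))         # "avg"  (1 RET, 55 DET)
print(data_complexity(1, 10))         # "low"  (1 RET, 10 DET)
print(transaction_complexity(3, 70))  # "high" (3 FTR, 70 DET)
```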

Example
Consider a requirement: the ability (functionality) to add a new employee to the "system."
(Data): Employee information involves, say, 3 external files, each with a different Record Element Type (RET):
- Employee basic information file, with employee data records: each employee record has 55 fields (1 RET and 55 DET): AVERAGE
- Employee benefits file: each benefit record has 10 fields (1 RET and 10 DET): LOW
- Employee tax records file: each tax record has 5 fields (1 RET and 5 DET): LOW
(Transaction): Adding a new employee involves 1 input transaction, which references 3 file types (FTR) and a total of 70 fields (DET). So the complexity of the 1 input transaction is HIGH.

Function Point (FP) Computation
Composed of the 5 "basic entities":
- input items
- output items
- inquiries
- master and logical files
- external interfaces
And a "complexity level index" matrix:

  Entity           Low   Average   High
  Input             3       4        6
  Output            4       5        7
  Inquiry           3       4        6
  Logical files     7      10       15
  Ext. interface    5       7       10

3) Compute the Initial Function Point

  Initial FP = Σ (basic entity count x complexity level index), summed over all basic entities

Continuing the example of adding a new employee:
- 1 external interface (average) = 7
- 2 external interfaces (low) = 5 each
- 1 input (high) = 6
Initial Function Point = 1x7 + 1x5 + 1x5 + 1x6 = 23

4) More to Consider
There are 14 more "Degrees of Influence" (DI), each rated on a scale of 0-5:
- data communications
- distributed data processing
- performance criteria
- heavy hardware utilization
- high transaction rate
- online data entry
- end-user efficiency
- online update
- complex computation
- reusability
- ease of installation
- ease of operation
- portability (supports multiple sites)
- maintainability (easy to change)

Function Point Computation (cont.)
Define the Technical Complexity Factor (TCF):

  TCF = 0.65 + (0.01 x DI), where DI = SUM(influence factor values)

So note that 0.65 ≤ TCF ≤ 1.35.

  Function Point (FP) = Initial FP x TCF

Finishing the example: suppose that after considering the 14 DIs, our TCF = 1.15. Then:
  Function Point = Initial FP x TCF = 23 x 1.15 = 26.45
(A sketch of the whole computation follows.)
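A minimal sketch tying the whole computation together, using the complexity level index matrix above and the running "add employee" example; the DI ratings are hypothetical values chosen to sum to 50 (so TCF = 1.15):

```python
# Function point computation: initial FP from the complexity level index
# matrix, then adjustment by the Technical Complexity Factor (TCF).
WEIGHTS = {
    "input":    {"low": 3, "avg": 4, "high": 6},
    "output":   {"low": 4, "avg": 5, "high": 7},
    "inquiry":  {"low": 3, "avg": 4, "high": 6},
    "file":     {"low": 7, "avg": 10, "high": 15},
    "external": {"low": 5, "avg": 7, "high": 10},
}

def function_points(counts, degrees_of_influence):
    """counts: (entity, complexity, how_many) triples; 14 DI ratings, 0-5 each."""
    initial_fp = sum(WEIGHTS[e][c] * n for e, c, n in counts)
    tcf = 0.65 + 0.01 * sum(degrees_of_influence)
    return initial_fp * tcf

# 1 average + 2 low external interfaces, 1 high input => initial FP = 23.
counts = [("external", "avg", 1), ("external", "low", 2), ("input", "high", 1)]
di = [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 2, 2]  # sums to 50 => TCF = 1.15
print(round(function_points(counts, di), 2))  # 26.45, matching the slide
```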

Defect Rate: Defects/FP by CMM Levels
C. Jones estimated defect rates by SEI CMM level through the maintenance life of a software product:
- CMM Level 1 organizations: 0.75 defects/FP
- CMM Level 2: 0.44
- CMM Level 3: 0.27
- CMM Level 4: 0.14
- CMM Level 5: 0.05
Be careful with this type of claim; use it with caution.
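As an illustration only (reusing the FP example above, not a claim from Jones's data): a CMM Level 3 organization shipping the 26.45-FP product would be estimated at about 26.45 x 0.27 ≈ 7 latent defects over its maintenance life.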

End Function Point Separate Segment

Pre-Release Process Quality Metrics
The most common one comes from testing (the defect discovery rate):
- defects found (by severity) per time period (per development phase)
- Compare "defect arrivals" by time, by test phase:
  - looking for "stabilization" (what would the curve look like?)
  - looking for a decreasing pattern
- Compare the number of defects across products:
  - those with a high number of problems found during pre-release tend to be "buggy" after release (an interesting phenomenon)
Another pre-release quality metric: defect removal effectiveness (e.g., via inspection):
- defects removed / total latent defects
- latent defects must be estimated: how? Go back later and add in the defects found in the field (a small sketch follows)
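A minimal sketch of defect removal effectiveness, estimating latent defects after the fact as pre-release removals plus field defects; the counts are hypothetical:

```python
# Defect removal effectiveness (DRE) = defects removed / total latent defects,
# with latent defects estimated as (removed pre-release + found in the field).
removed_pre_release = 180
found_in_field = 20

latent_estimate = removed_pre_release + found_in_field
dre = removed_pre_release / latent_estimate
print(f"DRE = {dre:.0%}")  # 180 / 200 = 90%
```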

Post-Release Product and Process
- Number of problems per usage-month (PUM)
Post-release "fix process":
- Fix quality = number of defective fixes / total number of fixes
- Users are very sensitive if fix quality is not close to zero
Post-release process quality:
- Problem backlog = total number of unresolved problems, by severity and by arrival date
- Problem backlog index = number of problems resolved / number of arrivals, per some time period such as a week or month
- Average fix response time (from problem open to close)
These metrics are usually compared against a goal, e.g.:
- average response time on severity 1 problems is 24 hours
- problem backlog index is between 1.3 and 0.8 (0.8 may be a problem!)
(A sketch of these computations follows.)
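A minimal sketch of the three post-release metrics; all counts and times are hypothetical:

```python
# Fix quality, problem backlog index, and average fix response time,
# per the definitions on this slide.
fixes_shipped = 40
defective_fixes = 2
problems_resolved = 26
problems_arrived = 25
response_hours = [20, 30, 14, 48]  # open-to-close times, severity 1 problems

fix_quality = defective_fixes / fixes_shipped             # 0.05; goal: near 0
backlog_index = problems_resolved / problems_arrived      # 1.04; goal: 0.8 - 1.3
avg_response = sum(response_hours) / len(response_hours)  # 28.0 hours
print(fix_quality, backlog_index, avg_response)
```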

Collecting Data
Decide on what metrics are to be used:
- measuring what? (validity of the measure)
- what's the goal? (validity of the measure)
Decide on how to collect the data:
- clearly define the data to be collected
- ensure the recording is accurate (reliability)
- ensure the classification is accurate (reliability/validity)
Decide on tools to help in the collection:
- source code counting
- problem tracking

Data Collection Methodology (Basili & Weiss)
1. Establish the goal of the data collection
2. Develop a list of questions of interest
3. Establish data categories
4. Design and test the data collection mechanism (e.g., forms)
5. Collect the data and check its reliability
6. Analyze the data