Software Metrics
Metrics are measures that provide feedback to project managers, developers, and programmers about the quality of their work, project, and products.
QA Questions
During the development process we ask: Will we produce a product that meets or exceeds the quality attribute set requirements and expectations of the customer?
At the end of the process we ask: Have we produced a product that meets or exceeds those requirements and expectations?
Role of the QA Engineer
For each element of the customer quality attribute set, you must select, and possibly create, specific measurements that can be applied repeatedly during the development process and then again at its conclusion. The results of such measurements can be used to determine progress toward the final attainment of quality goals.
Metrics
Measurements combined with desired results are referred to as metrics. We also have checklists and appraisal methods/activities to ensure the health of the process.
Types of Software Metrics
Process metrics: can be used to improve the software development and maintenance process, e.g., patterns of defect removal, response time of a fix process, effectiveness of defect removal during development.
Product metrics: describe the characteristics of the product, such as its size, complexity, and performance.
Project metrics: describe the characteristics of the project and its execution, such as the number of software developers, the staffing pattern over the life cycle of the project, cost, and schedule.
Software quality metrics: deal with the quality aspects of the software process, product, and project; they include both in-process and end-product quality metrics.
Software Quality Engineering
The essence of software quality engineering is to investigate the relationships among in-process metrics, project characteristics, and end-product quality, and, based on the findings, to engineer improvements in both process and product quality. In customer-oriented SQA, the quality attribute set drives the metrics selection and development process.
Defect Arrival Rate (DAR)
DAR is the number of defects found during testing, measured at regular intervals over some period of time. Rather than a single value, a set of values is associated with this metric. When plotted on a graph, the data may rise, indicating a positive defect arrival rate; stay flat, indicating a constant defect arrival rate; or decrease, indicating a negative defect arrival rate.
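The rise/flat/fall classification above can be sketched in code. This is a minimal illustration, not part of the source material: the function name, the half-versus-half comparison, and the sample counts are all my own assumptions.

```python
def dar_trend(counts):
    """Classify per-interval defect counts as a positive, constant, or
    negative defect arrival rate by comparing the average of the second
    half of the series against the first half (a crude trend test)."""
    half = len(counts) // 2
    first = sum(counts[:half]) / half
    second = sum(counts[half:]) / (len(counts) - half)
    if second > first:
        return "positive"   # defect arrivals are rising
    if second < first:
        return "negative"   # defect arrivals are falling
    return "constant"

# Defect counts from four consecutive test intervals (hypothetical data)
print(dar_trend([12, 15, 9, 6]))  # → negative
```

A real analysis would plot the series and inspect it, as the slide suggests; the point here is only that DAR is a series of values, not a single number.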
Defect Arrival Rate (DAR)
Interpretation of DAR can be difficult. A negative DAR may indicate an improvement in the product; to validate this interpretation, one must rule out other possibilities, such as a decline in test effectiveness. New tests may need to be designed to improve test effectiveness. A negative DAR may also indicate understaffing of the test organization. A plot of DAR over a time span can be a useful indicator.
Test Effectiveness
Tests that always pass are considered ineffective. Such tests form 'regression testing': if any of them fails, a regression in the quality of the product has occurred. Test effectiveness (TE) is measured as
TE = Dn / Tn
where Dn is the number of defects found by formal tests and Tn is the total number of formal tests. When calculated at regular intervals and plotted: if the graph rises over time, TE may be improving; if it falls over time, TE may be waning. The interpretation should be made in the context of the other metrics being used in the process.
Defects by Phase
Fixing a defect early in the process is cheaper and easier. At the conclusion of each discrete phase of the development process, a count of new defects is taken and plotted to observe the trend. Defects by phase is a variation of the DAR metric: its domain is the development phase rather than a regular time interval.
Interpretation: a rising graph might indicate that the methods used for defect detection in earlier phases were not effective; a decreasing graph may indicate the effectiveness of defect removal in earlier phases.
Defect Removal Effectiveness (DRE)
DRE = Dr / (Dr + Dt) x 100
where Dr is the number of defects removed prior to release and Dt is the total number of defects that remain in the product at release.
Interpretation: the effectiveness of this metric depends on the thoroughness and diligence with which your staff reports defects. The metric may be applied on a phase-by-phase basis to gauge the relative effectiveness of defect removal by phase, so that weak areas in the process can be identified for improvement. The results may be plotted, and the observed trend used to adjust the process.
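A worked instance of the DRE formula (the function name and counts below are illustrative assumptions):

```python
def defect_removal_effectiveness(removed, remaining):
    """DRE = Dr / (Dr + Dt) * 100: defects removed before release as a
    percentage of all defects (removed plus those remaining at release)."""
    total = removed + remaining
    if total == 0:
        raise ValueError("no defects recorded")
    return removed / total * 100

# 95 defects removed before release, 5 remaining at release (hypothetical)
print(defect_removal_effectiveness(95, 5))  # → 95.0
```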
Defect Backlog
The defect backlog is a count of the number of defects in the product following its release. It is usually metered at regular intervals and plotted for trend analysis. A more useful way to represent the defect backlog is defects by severity; e.g., a month after the release of your product, the backlog contains:
2 severity 1 defects
8 severity 2 defects
24 severity 3 defects
90 severity 4 defects
Based on this information, the project manager may decide to shift resources to resolve the severity 1 and 2 defects. Such a high number of improvement requests may also call for a review of the requirements-gathering process.
Backlog Management Index (BMI)
Problems arise after product release: new problems arrive that affect the net result of your team's efforts to reduce the backlog. If problems are closed faster than new ones are opened, the team is winning; otherwise it is losing ground.
BMI = Dc / Dn
where Dc is the number of defects closed during some period of time and Dn is the number of new defects that arrive during the same period.
Interpretation: if BMI is greater than 1, your team is gaining ground; otherwise it is losing ground. A trend observed in a plot may indicate the level of backlog-management effort.
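The BMI formula and its greater-than-1 interpretation can be sketched as follows (function name and monthly counts are hypothetical):

```python
def backlog_management_index(closed, arrived):
    """BMI = Dc / Dn: defects closed over new defects arriving
    in the same period."""
    if arrived == 0:
        raise ValueError("arrived must be positive")
    return closed / arrived

# 30 defects closed while 25 new ones arrived this month (hypothetical)
bmi = backlog_management_index(30, 25)
print(bmi)                                  # → 1.2
print("gaining" if bmi > 1 else "losing")   # → gaining
```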
Fix Response Time
Fix response time is the average time it takes your team to fix a defect. It may be measured as the elapsed time between the discovery of a defect and the development of a verified (or unverified) fix. A better metric is fix response time by severity of defect. The percentage of timely fixes is used as a fix-responsiveness measure; a high value indicates customer satisfaction.
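One way to compute the average elapsed time is from discovery/fix timestamp pairs. This is a sketch under my own assumptions (function name, day granularity, and the sample dates are all invented for illustration):

```python
from datetime import datetime

def average_fix_days(fixes):
    """Average elapsed days between defect discovery and fix,
    over a list of (discovered, fixed) datetime pairs."""
    spans = [(fixed - discovered).days for discovered, fixed in fixes]
    return sum(spans) / len(spans)

# Two defects (hypothetical dates): fixed in 2 and 4 days respectively
fixes = [
    (datetime(2024, 3, 1), datetime(2024, 3, 3)),
    (datetime(2024, 3, 5), datetime(2024, 3, 9)),
]
print(average_fix_days(fixes))  # → 3.0
```

Grouping the pairs by severity before averaging gives the per-severity variant the slide recommends.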
Percent Delinquent Fixes
A fix is delinquent if it exceeds your fix-response criteria.
PDF = (Fd / Fn) x 100
where Fd is the number of delinquent fixes and Fn is the total number of fixes delivered in the period. Like fix response time, this metric is more informative when computed by severity.
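A worked instance of the PDF formula (function name and counts are hypothetical):

```python
def percent_delinquent_fixes(delinquent, total):
    """PDF = (Fd / Fn) * 100: delinquent fixes as a percentage of all fixes."""
    if total == 0:
        raise ValueError("total must be positive")
    return delinquent / total * 100

# 10 of 40 fixes exceeded the fix-response criteria (hypothetical)
print(percent_delinquent_fixes(10, 40))  # → 25.0
```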
Defective Fixes
A fix that later turns out to be defective or, worse, creates one or more additional problems is called a defective fix. The count of such defective fixes is the metric. The new defects introduced by defective fixes must also be tracked.
Defect Density
The general concept of defect rate is the number of defects over the opportunities for error (OFE) during a specific time frame. Defect density measures the number of defects discovered per some unit of product size, e.g., KLOC or function points. If a product has a large number of defects during formal testing, customers will discover a similarly large number of defects while using the product, and the converse is true as well. The answers to questions about customer defect tolerance may help in selecting an acceptable value for the metric. Phase-wise application of the metric may also be helpful.
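With KLOC as the size unit, the calculation is a single ratio (function name and figures below are illustrative assumptions):

```python
def defect_density(defects, kloc):
    """Defects per thousand lines of code (KLOC)."""
    if kloc <= 0:
        raise ValueError("kloc must be positive")
    return defects / kloc

# 42 defects found during formal testing of a 28 KLOC product (hypothetical)
print(defect_density(42, 28))  # → 1.5
```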
Defects by Severity
This is a simple count of unresolved defects by severity. It is usually measured at regular intervals and plotted to see any trend, showing progress toward an acceptable value for each severity. Movement away from those values may indicate that the project is at risk of failing to satisfy the conditions of the metric.
Mean Time Between Failures (MTBF)
The MTBF metric is the simple average of the elapsed time between failures that occur during test. This metric is defined in terms of the type of testing performed during the measurement period, e.g., moderate-stress testing or heavy-stress testing. MTBF may serve as a minimum ship criterion.
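Given the timestamps of failures observed during a test period, the "simple average of elapsed time between failures" can be computed directly (function name, the choice of hours as the unit, and the sample timestamps are my assumptions):

```python
def mtbf(failure_times):
    """Mean time between failures: the average gap between consecutive
    failure timestamps (here, hours on the test clock)."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Failures observed at these test-clock hours (hypothetical)
print(mtbf([3, 10, 18, 30]))  # → 9.0
```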
Customer-Reported Problems
This is a simple count of the number of new (not duplicate) problems reported by customers over some interval. When measured at regular intervals and plotted, any trend identified requires investigation of its causes. If an increase in customer-reported problems is identified, and a correlation or cause-effect analysis indicates a relationship between that count and the number of end users using the product, it may indicate that the product has serious scalability problems. A profiling implementation may help determine end users' usage patterns for different features of the product.
Customer Satisfaction
Customer satisfaction is typically measured through a customer satisfaction survey. Questions are designed to be answered on a range of responses, typically 1-5, and should assess both the respondent's subjective and objective perceptions of product quality.
Beyond the Metrics
Does our metrics bucket suffice for our quality attribute set? We might have to create or alter certain metrics.
Usability studies are conducted by independent labs that invite groups of end users to their test facility to evaluate the usability of a product.
Checklists are an effective means of determining whether a product possesses very specific non-measurable attributes or attribute elements.
Process for Metrics Definition
The attributes in the quality attribute set are considered one by one. Each attribute statement is divided into individual attribute elements. For each element, one asks: is the element measurable?
If not: one chooses among the non-measurable QA options, e.g., usability studies, checklists, etc.
If yes: look in the metrics bucket to see whether any existing metric can measure the attribute element/feature. If no measure is available, one has to define a new metric. Sometimes another metric already in use suffices for the attribute element in question, and no new metric is required.
Ease of Use
Software's customers prefer to purchase software products that don't require them to read the manual or use the on-line help facilities. They look for products with graphical user interfaces (GUIs) that "look and feel" like other products they use regularly, such as their word processors and spreadsheet programs. Those programs have what they call "intuitive" user interfaces, which is another way of saying that they can learn the product by playing with it for a short period of time without consulting the manual. They also prefer products that have a GUI sparsely populated with buttons and pop-up (or pull-down) menus, leaving a large work area in which they can create their frequent masterpieces.
Metrics for Ease of Use
Attribute element 1 is not measurable; therefore, usability studies are used. Specific questions may be designed for the users in the study groups, e.g., NUTES.
Metric: number of buttons, menus, etc. on the interface. Other commonly used applications may be used to determine an acceptable threshold value.
Defect Tolerance
To Software's customers, defects such as typos in message strings and in help text, as well as minor disparities between documented and actual behavior or function, will be tolerated until the next release. On the other hand, they will not tolerate defects that alter or destroy their work in progress or that adversely affect their productivity; such defects will likely drive them to abandon the product in favor of one that may be less feature-rich but more reliable. They consider defects such as general exceptions, hangs, data corruption, and long delays between operations to be intolerable.
Metric: number of defects by severity.
Defect Impact
Software's customers see themselves as highly productive people who prefer to work on several things at once. They often start several applications on their workstations simultaneously, jumping from one to another. Many of Software's customers have had an experience where they noticed that whenever they jumped from their word processor to a particular vendor's desktop publishing system, they had to wait several minutes for the view to redraw. The desktop publishing system's developers had decided to optimize memory usage, sacrificing view-redrawing performance. They assumed that most users would not switch from application to application while using their product, so view redrawing would be infrequent. To save memory, they decided to save the current view on disk, retrieving it whenever they needed to perform a redraw. This design decision saved a large amount of memory but sacrificed redrawing performance. Though some users might appreciate the designers' effort to decrease memory usage, Software's customers view the resulting poor performance of view redrawing as a major defect, since it severely impacts their productivity.
No new metric may be required, as the existing metric "number of defects by severity" may be used.