MANAGEMENT COMPONENTS OF SOFTWARE QUALITY. SOFTWARE QUALITY METRICS (Chapter 21)

1 MANAGEMENT COMPONENTS OF SOFTWARE QUALITY

2 SOFTWARE QUALITY METRICS (Chapter 21)

3 SOFTWARE QUALITY METRICS. “You can’t control what you can’t measure.” A software quality metric is a function whose inputs are software data and whose output is a single numerical value that can be interpreted as the degree to which the software possesses a given quality attribute.

4 OBJECTIVES OF QUALITY MEASUREMENT. Main objectives of software quality metrics: (1) To assist management control as well as planning of the appropriate managerial changes. Achievement of this objective is based on calculation and analysis of metrics regarding deviations of actual functional (quality) performance from planned performance, and deviations of actual timetable and budget performance from planned performance.

5 OBJECTIVES OF QUALITY MEASUREMENT. (2) To identify situations that require process improvement in the form of preventive or corrective actions introduced throughout the organization. Achievement of this objective is based on gathering metrics information regarding the performance of teams, units, etc.

6 SOFTWARE QUALITY METRICS. The metrics are used for comparison of performance data with indicators, that is, quantitative values such as: defined software quality standards; quality targets set for organizations or individuals; the previous year’s quality achievements; previous projects’ quality achievements; average quality levels achieved by other teams applying the same development tools in similar development environments; average quality achievements of the organization; and industry practices for meeting quality requirements.

7 CLASSIFICATION OF SOFTWARE QUALITY METRICS. Software quality metrics can fall into a number of categories. The classification used here distinguishes between phases of the software system’s life cycle: process metrics, related to the software development process (Section 21.3), and product metrics, related to software maintenance (Section 21.4).

8 SOFTWARE QUALITY METRICS. A sizeable number of software quality metrics involve one of the two following measures of system size: KLOC, a classic metric that measures the size of software in thousands of lines of code, and function points, a measure of the development resources (human resources) required to develop a program.

9 SOFTWARE PROCESS QUALITY METRICS. Software process quality metrics may be classified into two classes: error density metrics and error severity metrics.

10 SOFTWARE PROCESS QUALITY METRICS / ERROR DENSITY METRICS. Calculation of error density metrics involves two measures: (1) software volume measures: some density metrics use the number of lines of code, while others apply function points; (2) errors counted measures: some relate to the number of errors, while others use the weighted number of errors.

11 This section describes six different types of metrics (listed in a table on the slide).

12 WEIGHTED MEASURES. Weighted measures, which take the severity of the errors into account, are considered to provide a more accurate evaluation of the error situation. The approach classifies the detected errors into severity classes and assigns a relative weight to each class. The weighted error measure is then computed by summing, over all severity classes, the product of the number of errors found in each class and that class’s relative severity weight.

13 EXAMPLE 1. This example demonstrates the calculation of the number of code errors (NCE) and the weighted number of code errors (WCE). A software development department applies two alternative measures, NCE and WCE, to the code errors detected in its software development projects. Three classes of error severity (low, medium, and high) and their relative weights are defined in a table on the slide.

14 EXAMPLE 1. The code error summary for the department’s project indicated that there were 42 low severity errors, 17 medium severity errors, and 11 high severity errors. Calculation of NCE and WCE gave the results shown in the sketch below.
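
A minimal Python sketch of the Example 1 calculation. The severity weights used (1 for low, 3 for medium, 9 for high) are an assumption: the original weight table appears only on the slide image, but these values are consistent with the ASCE = 192/70 result quoted on slide 18.

```python
# Sketch of the Example 1 calculation. The severity weights below are assumed
# (the weight table is on the slide image, not in the transcript).
error_counts = {"low": 42, "medium": 17, "high": 11}
severity_weights = {"low": 1, "medium": 3, "high": 9}   # assumed relative weights

# NCE: plain (unweighted) number of code errors.
nce = sum(error_counts.values())

# WCE: sum over severity classes of (errors in class) x (relative severity weight).
wce = sum(error_counts[cls] * severity_weights[cls] for cls in error_counts)

print(f"NCE = {nce}")   # 70
print(f"WCE = {wce}")   # 192
```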

15 EXAMPLE 2. This example follows Example 1 and introduces weighted measures so as to demonstrate the implications of their use. A software development department applies two alternative metrics for calculating code error density: CED (code error density) and WCED (weighted code error density). The unit determined the following indicators of unacceptable software quality: CED > 2 and WCED > 4. For the calculations we apply the three error severity classes and their relative weights, and the code error summary, from the project described in Example 1.

16 EXAMPLE 2. The software system size is 40 KLOC. Calculation of the two metrics gave the results shown in the sketch below.
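
A minimal Python sketch of the Example 2 calculation. The density formulas CED = NCE / KLOC and WCED = WCE / KLOC are assumptions (the formulas themselves appear only on the slide image), but they reproduce the conclusion drawn on the next slide.

```python
# Sketch of the Example 2 calculation, assuming CED = NCE / KLOC and
# WCED = WCE / KLOC. NCE and WCE are taken from Example 1.
nce, wce = 70, 192
kloc = 40                       # software system size in thousands of lines of code

ced = nce / kloc                # 1.75
wced = wce / kloc               # 4.8

# Indicators of unacceptable software quality defined by the unit: CED > 2, WCED > 4.
print(f"CED  = {ced:.2f} -> {'unacceptable' if ced > 2 else 'acceptable'}")
print(f"WCED = {wced:.2f} -> {'unacceptable' if wced > 4 else 'acceptable'}")
```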

17 EXAMPLE 2. The conclusions reached after applying the unweighted versus the weighted metric are different. While the CED does not indicate quality below the acceptable level, the WCED does (the unit’s quality is not acceptable), a result that calls for management concern.

18 SOFTWARE PROCESS QUALITY METRICS / ERROR SEVERITY METRICS. The average severity of code errors for Example 1: ASCE = WCE / NCE = 192 / 70 ≈ 2.74.

19 SOFTWARE PROCESS TIMETABLE METRICS. Software process timetable metrics may be based on accounts of success (completion of milestones per schedule) in addition to failure events (non-completion per schedule). To calculate TCDAM, the delays reported for all relevant milestones are summed up; milestones completed on time or before schedule are counted as delays of 0. A minimal calculation sketch follows.
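
A minimal Python sketch of the TCDAM summation described above; the milestone delay values are hypothetical and used only for illustration.

```python
# Sketch of the TCDAM calculation as described on the slide: delays reported
# for all relevant milestones are summed, and milestones completed on time or
# before schedule contribute a delay of 0. The data below are hypothetical.
milestone_delays_in_days = [0, 5, 0, 12, 3, 0]   # one entry per relevant milestone

# Negative values would mean early completion; they are clamped to 0 delay.
tcdam = sum(max(delay, 0) for delay in milestone_delays_in_days)

print(f"TCDAM = {tcdam} days of accumulated delay")   # 20
```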

20 SOFTWARE PROCESS PRODUCTIVITY METRICS. This group of metrics includes “direct” metrics that deal with a project’s human resources productivity as well as “indirect” metrics that focus on the extent of software reuse.

21 PRODUCT METRICS. Product metrics refer to the software system’s operational phase, that is, the years of regular use of the software system by customers. In most cases, the software developer is required to provide customer service during the software’s operational phase. Customer services are of two main types: help desk (HD) services and corrective maintenance services. HD metrics are based on all customer calls, while corrective maintenance metrics are based on failure reports.

22 CORRECTIVE MAINTENANCE QUALITY METRICS. Software maintenance metrics are classified as follows: failures of maintenance services metrics, which deal with cases where the maintenance service was unable to complete the failure correction on time, and software system availability metrics, which deal with periods of time in which the services of the software system are unavailable or only partly available.

23 FAILURES OF MAINTENANCE SERVICES METRICS. A customer call related to a software failure that was supposed to have been solved in response to a previous call is commonly treated as a maintenance service failure. The maintenance repeated repair failure metric (MRepF) is defined as MRepF = RepYF / NYF, where RepYF is the number of repeated software failure calls (service failures) and NYF is the number of software failures detected during a year of maintenance service. A minimal calculation sketch follows.
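
A minimal Python sketch of the MRepF calculation as reconstructed above from the slide’s definitions of RepYF and NYF; the yearly figures are hypothetical, for illustration only.

```python
# Sketch of the MRepF metric, assuming MRepF = RepYF / NYF as reconstructed
# from the definitions on the slide. The figures below are hypothetical.
rep_yf = 12    # repeated software failure calls (service failures) in one year
nyf = 150      # software failures detected during one year of maintenance service

mrepf = rep_yf / nyf
print(f"MRepF = {mrepf:.2f}")   # 0.08, i.e. 8% of failures needed a repeated repair
```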

24 SOFTWARE SYSTEM AVAILABILITY METRICS. User metrics distinguish between full availability, where all software system functions perform correctly, and total unavailability, where all software system functions fail. Key: NYSerH = number of hours the software system is in service during one year; NYFH = number of hours in which at least one function is unavailable (failed) during one year; NYTFH = number of hours of total failure (all system functions failed) during one year.

25 SOFTWARE SYSTEM AVAILABILITY METRICS: EXAMPLES. For an office software system operating 50 hours per week, 52 weeks per year: NYSerH = 2600 (50 × 52) and NYFH = 400 hours (given), so full availability = (2600 – 400) / 2600 ≈ 85%. For a real-time software application that serves users 24 hours a day: NYSerH = 8760 (365 × 24), NYFH = 850 hours (given), and NYTFH = 250 hours (given), so full availability = (8760 – 850) / 8760 ≈ 90% and total unavailability = 250 / 8760 ≈ 3%. A minimal calculation sketch follows.
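
A minimal Python sketch of the two examples, using the availability formulas implied by the numbers above: full availability = (NYSerH – NYFH) / NYSerH and total unavailability = NYTFH / NYSerH.

```python
# Sketch of the availability calculations for the two examples on the slide.

def full_availability(nyserh: float, nyfh: float) -> float:
    """Fraction of yearly service hours in which every function was available."""
    return (nyserh - nyfh) / nyserh

def total_unavailability(nyserh: float, nytfh: float) -> float:
    """Fraction of yearly service hours in which all functions had failed."""
    return nytfh / nyserh

# Office system: 50 hours/week for 52 weeks, 400 failure hours (given).
office_nyserh = 50 * 52                                       # 2600
print(f"{full_availability(office_nyserh, 400):.0%}")         # ~85%

# Real-time system: 24 hours/day, 365 days; 850 failure hours, 250 total-failure hours.
realtime_nyserh = 24 * 365                                    # 8760
print(f"{full_availability(realtime_nyserh, 850):.0%}")       # ~90%
print(f"{total_unavailability(realtime_nyserh, 250):.0%}")    # ~3%
```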

26 IMPLEMENTATION OF SOFTWARE QUALITY METRICS. The application of software quality metrics in an organization requires: definition of software quality metrics that are relevant and adequate for teams, departments, etc., and statistical analysis of the collected metrics data.

27 THE PROCESS OF DEFINING SOFTWARE QUALITY METRICS. The definition of metrics involves a four-stage process: (1) definition of the attributes to be measured: software quality, development team productivity, etc.; (2) definition of the metrics that measure the required attributes; (3) determination of target values, based on standards, the previous year’s performance, etc.; these values serve as indicators of whether the measured unit complies with the demands of a given attribute; (4) determination of the metrics application processes: the reporting method, including frequency of reporting, and the metrics data collection method. A sketch of what such a definition might record follows.
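
An illustrative sketch only; the slides do not prescribe any record format. The hypothetical dataclass below simply mirrors the four definition stages as fields of a metric definition record, reusing the WCED indicator from Example 2.

```python
# Illustrative sketch only: the slides do not prescribe a record format.
# The fields mirror the four stages of the metric definition process.
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    attribute: str            # stage 1: attribute to be measured
    metric_name: str          # stage 2: metric that measures the attribute
    target_value: float       # stage 3: target value / indicator
    reporting_frequency: str  # stage 4: how often results are reported
    collection_method: str    # stage 4: how the raw data are collected

wced_definition = MetricDefinition(
    attribute="software quality (code errors)",
    metric_name="WCED",
    target_value=4.0,                 # unacceptable if WCED > 4 (Example 2)
    reporting_frequency="per project",
    collection_method="code review and testing error reports",
)
```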

28

29 STATISTICAL ANALYSIS OF METRICS DATA. For the metrics data to be a valuable part of the SQA process, statistical analysis of the metrics’ results is required. Analysis of metrics data provides opportunities for comparing a series of project metrics, including comparison with predefined indicators. Analyzing metrics results with the help of graphic presentations (which also show the indicator values) enables us to quickly identify trends in the metrics values. A minimal sketch of such a comparison follows.
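
A minimal Python sketch of comparing a series of metric values against a predefined indicator; the monthly WCED figures are hypothetical, for illustration only.

```python
# Sketch of comparing a metric series against a predefined indicator.
# The monthly WCED values are hypothetical.
monthly_wced = {"Jan": 3.1, "Feb": 3.6, "Mar": 4.2, "Apr": 4.8}
indicator = 4.0   # unacceptable software quality when WCED > 4 (Example 2)

for month, value in monthly_wced.items():
    flag = "ABOVE indicator" if value > indicator else "ok"
    print(f"{month}: WCED = {value:.1f} ({flag})")
# The upward trend and the flagged months would call for management attention.
```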

30 ANALYSIS OF METRICS DATA: PROGRESS INDICATOR. The progress indicator shows the growing number of planned and actual completions over time. Each project is expected to produce multiple progress charts, for different types of tasks, different teams, etc.

31 ANALYSIS OF METRICS DATA: RESOURCES (EFFORT) INDICATOR. The effort indicator shows monthly actual versus planned effort (resources).

32 ANALYSIS OF METRICS DATA: COST INDICATOR. The actual and budgeted quantities are derived from an earned value system and are shown in terms of staff-hours.

33 ANALYSIS OF METRICS DATA: REVIEW RESULTS. Review results indicators provide insight into the status of action items from life-cycle reviews. The term action item (AI) refers to inspection defects and customer comments.

34 ANALYSIS OF METRICS DATA: REQUIREMENTS STABILITY. Requirements stability provides an indication of the completeness, stability, and understanding of the requirements. Requirements stability indicators take the form of trend charts that show the total number of requirements, the total changes to the requirements, and the number of TBDs over time, where a TBD refers to an undefined requirement. A lack of requirements stability can lead to poor product quality, increased cost, and schedule slippage. Based on requirements stability trends, corrective action may be necessary.

35

36 ANALYSIS OF METRICS DATA: TRAINING INDICATORS. Training indicators provide managers with information on the training program and on whether the staff has the necessary skills.

37 TAKING ACTION IN RESPONSE TO METRICS ANALYSIS RESULTS. The actions taken in response to metrics analysis can be classified as direct or indirect. Direct actions are initiated by the project management; examples of direct changes initiated by management include reorganization of software development and maintenance methods and revision of the metrics computed (objective 1). Indirect actions are initiated by the Corrective Action Board (CAB); the CAB’s indirect actions result from analysis of metrics data collected from a variety of projects and development departments (objective 2).

38 SUMMARY. Main objectives of software quality metrics: (1) to assist management control as well as planning of the appropriate managerial changes; (2) to identify situations that require process improvement in the form of preventive or corrective actions introduced throughout the organization.

39 QUIZ. 1) What do you understand from this graph? Explain what the graph shows. 2) Based on the information given in the graph, do you think the product will be ready for release on the scheduled date and within the budget (resources) constraints? Explain your answer.

40 The End.

