1 Error reports as a source for SPI
Tor Stålhane, Jingyue Li, Jan M.N. Kristiansen, IDI / NTNU

2 Goals
Business: How can we reduce the cost of corrective maintenance?
Research: What are the main cost drivers for corrective maintenance?

3 Company A – 1 Company A is a software product company with a single product. The product is deployed on more than 50 operating systems and hardware platforms. The company has 700 employees, of whom 400 are developers and testers.

4 Company A – 2 Via the company’s DTS – Defect Tracking System – the company collected:
– Defect ID, priority, severity, and report creation date
– Defect summary and detailed description
– Who found the defect and when
– Estimated correction time
– When the defect was corrected
– Detailed comments and the work log of the person who corrected the defect
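To make the field list concrete, here is a minimal sketch of how such a defect record might be represented; the field names and types are illustrative assumptions, not company A’s actual DTS schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class DefectReport:
    """Baseline fields roughly matching what company A's DTS collects (illustrative only)."""
    defect_id: str
    priority: int
    severity: int
    created: datetime                    # report creation date
    summary: str
    description: str
    found_by: str
    found_on: datetime
    estimated_correction_hours: Optional[float] = None
    corrected_on: Optional[datetime] = None
    work_log: list[str] = field(default_factory=list)  # comments and work log of the fixer
```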

5 What we did in company A We improved the company’s DTS based on the IBM concept of orthogonal defect classification – ODC. Based on a study of earlier reported defects, the classification scheme was modified to fit the company’s needs.

6 DTS system enhancements at company A
Added or revised attributes and their values:
– Effort to fix: “time-consuming” – more than one person-day of effort; “easy” – less than one person-day of effort
– Qualifier: missing, incorrect, or extraneous
– Fixing type: extends the ODC “type” attribute to reflect the company’s actual defect correction activities
– Root cause: project entities, such as requirements, design, and documentation, that should have been done better to prevent the defect from occurring or to catch it earlier
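The added attributes can be pictured as simple enumerations plus a threshold rule. This is only a sketch of the one person-day split described above, not company A’s actual implementation:

```python
from enum import Enum


class Qualifier(Enum):
    """ODC-style qualifier of what was wrong."""
    MISSING = "missing"
    INCORRECT = "incorrect"
    EXTRANEOUS = "extraneous"


class EffortToFix(Enum):
    EASY = "easy"                      # less than one person-day of effort
    TIME_CONSUMING = "time-consuming"  # more than one person-day of effort


def classify_effort_a(person_days: float) -> EffortToFix:
    """Company A's two-level split on effort to fix (threshold: one person-day)."""
    return EffortToFix.EASY if person_days < 1.0 else EffortToFix.TIME_CONSUMING
```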

7 Company B – 1 Company B is a software house that develops business-critical systems, primarily for the banking and finance sector. Most of its projects have a fixed price and a fixed delivery date. The company has 800 developers and testers.

8 Company B – 2 Via the company’s DTS, the company collected, among other things:
– Defect ID, priority, severity, and report creation date
– Defect summary and detailed description
– Who found the defect and when
– Who tested the correction and how

9 What we did in company B Just as for company A, we improved the company’s DTS based on a study of earlier reported defects. In addition, the changes enabled us to collect data that would later be used for software process improvement.

10 DTS system enhancements at company B
Added or revised attributes and their values:
– Effort to fix: “simple” – less than 20 minutes to reproduce and fix; “medium” – between 20 minutes and 4 hours; “extensive” – more than 4 hours
– Defect type: a new set of attributes adapted to the way the developers and testers wanted to classify defects
– Root cause: project entities, such as requirements, design, development, and documentation, that could have been done better to prevent the defect from occurring or to catch it earlier
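A corresponding sketch for company B’s three-level effort split; the thresholds come from the slide, while the function name and signature are my own:

```python
def classify_effort_b(minutes_to_reproduce_and_fix: float) -> str:
    """Company B's split on effort to reproduce and fix a defect."""
    if minutes_to_reproduce_and_fix < 20:
        return "simple"      # less than 20 minutes
    if minutes_to_reproduce_and_fix <= 4 * 60:
        return "medium"      # between 20 minutes and 4 hours
    return "extensive"       # more than 4 hours
```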

11 Data collection In both companies, we collected defect data after the companies had used the new DTS for six months. Only defect reports that had all their fields filled in were used for later analysis. This gave us:
– Company A: 810 defects
– Company B: 688 defects
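The completeness filter can be expressed in one line. This is a sketch that assumes defect reports are held as objects like the DefectReport example above:

```python
def complete_reports(reports):
    """Keep only defect reports in which every tracked field has a value."""
    return [r for r in reports
            if all(value not in (None, "") for value in vars(r).values())]
```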

12 Data analysis – 1 Our goal was to identify cost drivers for the most expensive defects. Thus, we split the defects into categories depending on reported “time to fix”:
– Company A – two groups: “easy to fix” and “time consuming”
– Company B – three groups: “simple”, “medium”, and “extensive”. “Simple” and “medium” defects were combined into one group, “simple”, to be compatible with company A.
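For comparability, company B’s classes can be collapsed onto two groups as described above; a minimal sketch, with names of my own choosing:

```python
# "simple" and "medium" are merged so company B's data lines up with
# company A's two-way split; "extensive" remains the high-effort group.
MERGE_B = {"simple": "simple", "medium": "simple", "extensive": "extensive"}


def effort_group(b_class: str) -> str:
    return MERGE_B[b_class]
```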

13 Data analysis – 2 We identified the most important root causes for the costly fixes through qualitative analysis. For both companies we had “correction type” and “cause”. We also found important information in the
– developer discussions (company A)
– test descriptions (company B)

14 Root causes for high effort fixes – A
Numbers are high-effort defects per business unit (Core, B2C, B2B).

Reason for the costly debugging                              Core  B2C  B2B
Hard to determine the location of the defect                   20   37    4
Implemented functionality was new or needed a heavy rewrite    13   29    2
Long clarification (discussion) of the defect                  19    5    0
The original fix introduced new defects / multiple fixes       13    9    0
Others (documentation is incorrect or communication is bad)     2    0    0
Reasons not clear                                              31    6    0

15 Root causes for company A Based on the collected data, we identified the most important root causes for costly defects:
– It was hard to determine the defect’s location
– Implemented functionality was new or needed a heavy rewrite
– Long, costly discussions on whether the reported problem really was a defect or just misuse of the system

16 Root causes for high effort fixes – B
Number of defects per root cause attribute; “simple” and “medium” are shown combined, as in the analysis.

Root cause attribute        Simple + medium  Extensive   Sum
Functional defect                         9          2    11
Wrong test data in build                 77          4    81
Bad specification                        89         12   101
Bad test environment                      9          1    10
Development problems                    317         57   374
Sum                                     501         76   577

17 Root causes for company B Bad specifications and development problems account for 91% of the high effort defects. If we consider the sub-categories defined for these two root causes, we find that 70% of all correction costs are due to:
– Errors in business logic
– Unclear requirements
– Problems in the graphical user interface
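A small arithmetic check, assuming my reading of the slide-16 table above is correct: the 91% figure corresponds to the share of the extensive (high-effort) column taken by bad specification and development problems:

```python
# Extensive (high-effort) defects per root cause, as reconstructed from slide 16.
extensive = {"functional defect": 2, "wrong test data in build": 4,
             "bad specification": 12, "bad test environment": 1,
             "development problems": 57}

share = (extensive["bad specification"]
         + extensive["development problems"]) / sum(extensive.values())
print(f"{share:.0%}")  # -> 91%
```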

18 Important maintenance factors – 1 Several published studies have identified the following important maintenance cost factors:
– Maintainers’ experience with the system and application domain
– System size and complexity
– The development activity in which the defect is discovered
– Tool and process support

19 Important maintenance factors – 2 System size and complexity – large, complex systems make it difficult to
– Analyze the system to decide where the defect stems from (A)
– Decide how to fix the defect (A)
– Find the right solution to implement (A, B)

20 Important maintenance factors – 3 Maintainers’ low experience with the system and application domain causes
– A need for heavy rewrites of functionality (A)
– Development problems, e.g. with business logic and the user interface (B)

21 Important maintenance factors – 4 ISO 9126 defines maintainability in terms of:
– Analyzability
– Changeability
– Stability
– Testability
The high maintenance cost factors size and complexity fit well into this model. However, the model focuses on software characteristics and ignores the influence of the developers’ knowledge and experience.

22 Conclusions – 1 Important data sources when analyzing maintainability are:
– Resources needed for fixing
– Defect typology, e.g. ODC
– Qualitative data such as test descriptions and developer discussions
The effort model should be updated regularly, based on the defect profile and the project environment.

23 Conclusions – 2 There is no single “best estimator” for corrective maintenance. Important factors are:
– Software characteristics, e.g. as defined in ISO 9126
– Staff characteristics, e.g. knowledge and experience

