SE382 Software Engineering Lecture 21b


1 SE382 Software Engineering Lecture 21b
Review Techniques (1)

2 Chapter 20 Review Techniques
Slide set to accompany Software Engineering: A Practitioner’s Approach, 8/e by Roger S. Pressman and Bruce R. Maxim. Slides copyright © 1996, 2001, 2005, 2009, 2014 by Roger S. Pressman. For non-profit educational use only. May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach, 8/e. Any other reproduction or use is prohibited without the express written permission of the author. All copyright information MUST appear if these slides are posted on a website for student use.

3 Reviews
"... there is no particular reason why your friend and colleague cannot also be your sternest critic." (Jerry Weinberg)

4 What Are Reviews?
A meeting conducted by technical people for technical people
A technical assessment of a work product created during the software engineering process
A software quality assurance mechanism
A training ground

5 What Reviews Are Not
A project summary or progress assessment
A meeting intended solely to impart information
A mechanism for political or personal reprisal!

6 What do we look for in a review?
Errors and defects
Error: a quality problem found before the software is released to end users
Defect: a quality problem found only after the software has been released to end users
We make this distinction because errors and defects have very different economic, business, psychological, and human impact
However, the temporal distinction made here between errors and defects is not mainstream thinking

7 SE382 Software Engineering Lecture 22a
Review Techniques (2)

8 Defect Amplification Model
A defect amplification model can be used to illustrate the generation and detection of errors during the design and code-generation actions of a software process.

9 Defect Amplification Example: Without Reviews
[Figure: defect amplification without reviews, showing for each development step the defects passed through from the previous step, the defects amplified, the new defects generated, and the step's detection efficiency.] Ref: Software Engineering: A Practitioner’s Approach, 8/e (McGraw-Hill 2014)

10 Defect Amplification Example: With Reviews
[Figure: defect amplification with reviews, showing for each development step the defects passed through from the previous step, the defects amplified, the new defects generated, and the step's detection efficiency.] Ref: Software Engineering: A Practitioner’s Approach, 8/e (McGraw-Hill 2014)

11 Defect Amplification Example: Result Analysis
The software process that does not include reviews yields 94 errors at the beginning of testing and releases 12 latent defects to the field
The software process that does include reviews yields 24 errors at the beginning of testing and releases 3 latent defects to the field
A cost analysis indicates that, once the cost of correcting the latent defects is taken into account, the process without reviews costs approximately 3 times more than the process with reviews
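As a rough illustration of how such a model can be computed, here is a minimal sketch in Python. It uses a simplified propagation rule (every error passed into a step is multiplied by a single amplification factor) and made-up per-step parameters; the function and the numbers are assumptions for illustration only, not the exact mechanics or figures behind the 94-error / 24-error example above.

```python
# Minimal defect amplification sketch. The step parameters below are
# illustrative assumptions, not the figures from Pressman's example.

def amplify(steps, incoming=0.0):
    """Propagate defects through a sequence of development steps.

    Each step is (new_errors, amplification_factor, detection_efficiency):
    incoming errors are amplified, new errors are added, and the step's
    review/test detects a fraction of the total before passing the rest on.
    """
    for new, amplification, detection in steps:
        total = incoming * amplification + new
        incoming = total * (1 - detection)   # errors escaping this step
    return incoming

# Hypothetical parameters for three steps (design, detail design, code).
without_reviews = [(10, 1.0, 0.0), (25, 1.5, 0.0), (25, 1.3, 0.5)]
with_reviews    = [(10, 1.0, 0.7), (25, 1.5, 0.5), (25, 1.3, 0.6)]

print("Errors entering testing without reviews:", amplify(without_reviews))
print("Errors entering testing with reviews:   ", amplify(with_reviews))
```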

12 Review Metrics: Total Review Effort Ereview
Ereview = Ep + Ea + Er
Preparation effort, Ep: the effort (in person-hours) required to review a work product prior to the actual review meeting
Assessment effort, Ea: the effort (in person-hours) expended during the actual review
Rework effort, Er: the effort (in person-hours) dedicated to correcting the errors uncovered during the review

13 Review Metrics: Total Number of Errors Discovered Errtot
Errtot = Errminor + Errmajor
Minor errors found, Errminor: the number of errors found that can be categorized as minor (requiring less than some pre-specified effort to correct)
Major errors found, Errmajor: the number of errors found that can be categorized as major (requiring more than some pre-specified effort to correct)

14 Review Metrics: Defect Density (errors found per unit of work product reviewed)
Defect density = Errtot / WPS
Work product size, WPS: a measure of the size of the work product that has been reviewed (e.g., the number of models, the number of document pages, or the number of lines of code)
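Pulling the definitions from slides 12 through 14 together, here is a small sketch of how these metrics might be tracked for a single review. The ReviewRecord class and its field names are hypothetical; only the formulas Ereview = Ep + Ea + Er, Errtot = Errminor + Errmajor, and defect density = Errtot / WPS come from the slides.

```python
# Hypothetical container for the review metrics defined on slides 12-14.
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    prep_hours: float         # Ep: preparation effort (person-hours)
    assess_hours: float       # Ea: assessment effort during the review
    rework_hours: float       # Er: rework effort correcting uncovered errors
    minor_errors: int         # Errminor
    major_errors: int         # Errmajor
    work_product_size: float  # WPS, e.g. pages, models, or lines of code

    @property
    def total_effort(self) -> float:      # Ereview = Ep + Ea + Er
        return self.prep_hours + self.assess_hours + self.rework_hours

    @property
    def total_errors(self) -> int:        # Errtot = Errminor + Errmajor
        return self.minor_errors + self.major_errors

    @property
    def defect_density(self) -> float:    # Errtot / WPS
        return self.total_errors / self.work_product_size

# Example: a 32-page requirements model review (illustrative numbers).
r = ReviewRecord(prep_hours=8, assess_hours=6, rework_hours=10,
                 minor_errors=5, major_errors=1, work_product_size=32)
print(r.total_effort, r.total_errors, round(r.defect_density, 2))
```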

15 Using Metrics: Example
Defect density for requirements models = 0.6 errors/page
New requirements model = 32 pages
Errors predicted to be found during the review = 0.6 × 32 ≈ 19
Actual errors found during the review = 6
Result analysis: either you have done an extremely good job in developing the requirements model, or your review approach was not thorough enough

16 Using Metrics: Example (contd.)
Effort required to correct a minor model error immediately after the review = 4 person-hours
Effort required to correct a major requirements error immediately after the review = 18 person-hours
Ratio of occurrence of minor to major errors = 6:1
Average effort to find and correct a requirements error during review = (6 × 4 + 1 × 18) / 7 = 6 person-hours

17 Using Metrics: Example (contd.)
Effort required to uncover and fix a requirements-related error found during testing, Etesting = 36 person-hours
Saving in effort per error = Etesting - Ereview = 36 - 6 = 30 person-hours/error
Errors found during the requirements model review = 22
Saving in testing effort for requirements-related errors = 22 × 30 = 660 person-hours
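For reference, the arithmetic of the worked example on slides 15 through 17 collected into one short script; the variable names are illustrative, and the numbers are taken from the slides.

```python
# Worked example from slides 15-17 (numbers taken from the slides).
pages = 32
defect_density = 0.6                       # errors per page, requirements models
predicted_errors = defect_density * pages  # ~19 errors expected in the review

minor_fix = 4       # person-hours to correct a minor error right after review
major_fix = 18      # person-hours to correct a major error right after review
minor_to_major = 6  # six minor errors occur for every major error

# Weighted average effort to find and correct one requirements error in review.
avg_review_effort = (minor_to_major * minor_fix + major_fix) / (minor_to_major + 1)

testing_effort = 36                        # person-hours per error found in testing
saving_per_error = testing_effort - avg_review_effort
errors_found = 22
total_saving = errors_found * saving_per_error

print(predicted_errors)   # 19.2
print(avg_review_effort)  # 6.0
print(saving_per_error)   # 30.0
print(total_saving)       # 660.0
```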

18 Effort Expended with & without Reviews
More effort during earlier stages
Less effort in later stages and quicker deployment

19 Reference Model for Technical Reviews
This model identifies characteristics that contribute to the level of formality with which a review is conducted
More emphasis on any of these characteristics makes the review more formal

20 Formal Technical Reviews (FTR)
FTRs take place in the form of meetings that follow a well-organized plan and are managed by a designated review leader. Typically, five or more individuals attend the meetings; minutes are recorded, a formal report is generated, and, if needed, follow-up is done
Informal Technical Reviews (ITR)
ITRs follow a more casual approach in which one or more reviewers help the software engineer improve his or her work product. Typically, these meetings do not follow any particular plan and no formal minutes are recorded

21 Informal Technical Review: Types
A simple desk check of a software engineering work product with a colleague
A casual meeting (involving more than two people) for the purpose of reviewing a work product
The review-oriented aspects of pair programming
Pair programming facilitates continuous review as a work product (design or code) is created. The benefit is immediate discovery of errors and better work product quality as a consequence

22 Post-Mortem Review (PMR)
A PMR is an evaluation of the results of a project after the software has been delivered to end users
Unlike other reviews, which focus on a specific work product, a PMR examines the entire software project
Attended by the software team and other stakeholders, a PMR can be conducted in a workshop format
The intent is to identify excellences (achievements and positive experiences) and challenges (problems and negative experiences) and to extract lessons learned from both. The objective is to suggest improvements to both process and practice going forward
Ref: Software Engineering: A Practitioner’s Approach, 8/e (McGraw-Hill 2014)

