
1 SE 501 Software Development Processes Dr. Basit Qureshi College of Computer Science and Information Systems Prince Sultan University Lecture for Week 4

2 Contents
– What are Metrics? Process, Project, Product
– Software Measurement: Size-Oriented, Function-Oriented, and Object-Oriented Metrics; Metrics for Software Quality
– More on Metrics: Metrics for Requirements, Design, Source Code, Web Applications, Testing, Maintenance, and Quality
– Summary

3 Bibliography
– Pressman, R. S. Software Engineering: A Practitioner's Approach, 7th ed., McGraw-Hill, 2009.
– Managing Software Development, version 2, Companion Guide, Carnegie Mellon University, pp. 169–175, 2000.
– Sommerville, I. Software Engineering, 9th ed., Addison-Wesley, 2010.
– Humphrey, W. S. Managing the Software Process, Addison-Wesley, 1989.
– Hunter, R. B. and Thayer, R. H. Software Process Improvement, IEEE Press, Los Alamitos, 2001.

4 WHAT ARE METRICS?

5 What are metrics?
Metrics are:
– Measurements
– Collections of data about project activities, resources, and deliverables
Metrics can be used to help estimate projects, measure project progress and performance, and quantify product attributes.

6 What are Metrics?
Software process and project metrics are quantitative measures. They are a management tool: they offer insight into the effectiveness of the software process and of the projects that are conducted using the process as a framework.
Basic quality and productivity data are collected. These data are analyzed, compared against past averages, and assessed. The goal is to determine whether quality and productivity improvements have occurred. The data can also be used to pinpoint problem areas; remedies can then be developed and the software process improved.

7 A Quote on Measurement
"When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science."
– Lord Kelvin (William Thomson, 1824–1907)

8 Reasons to Measure
To characterize, in order to:
– Gain an understanding of processes, products, resources, and environments
– Establish baselines for comparisons with future assessments
To evaluate, in order to:
– Determine status with respect to plans
To predict, in order to:
– Gain understanding of relationships among processes and products
– Build models of these relationships
To improve, in order to:
– Identify roadblocks, root causes, inefficiencies, and other opportunities for improving product quality and process performance

9 Software metrics
Software metrics can be classified into three categories:
– Product metrics (size, complexity, performance)
– Process metrics (used to improve development and maintenance)
– Project metrics (cost, schedule, productivity)

10 Metrics in Process Domain
Process metrics are collected across all projects and over long periods of time. They are used for making strategic decisions; the intent is to provide a set of process indicators that lead to long-term software process improvement.
The only way to know how and where to improve any process is to:
– Measure specific attributes of the process
– Develop a set of meaningful metrics based on these attributes
– Use the metrics to provide indicators that will lead to a strategy for improvement

11 Metrics in Process Domain
We measure the effectiveness of a process by deriving a set of metrics based on outcomes of the process, such as:
– Errors uncovered before release of the software
– Defects delivered to and reported by the end users
– Work products delivered
– Human effort expended
– Calendar time expended
– Conformance to the schedule
– Time and effort to complete each generic activity

12 Metrics in Project Domain
Project metrics enable a software project manager to:
– Assess the status of an ongoing project
– Track potential risks
– Uncover problem areas before their status becomes critical
– Adjust work flow or tasks
– Evaluate the project team's ability to control the quality of software work products
Many of the same metrics are used in both the process and project domains. Project metrics are used for making tactical decisions: they are used to adapt project workflow and technical activities.

13 Metrics in Project Domain
The first application of project metrics occurs during estimation: metrics from past projects are used as a basis for estimating time and effort. As a project proceeds, the amount of time and effort expended is compared to the original estimates.
As technical work commences, other project metrics become important:
– Production rates are measured (represented in terms of models created, review hours, function points, and delivered source lines of code)
– Errors uncovered during each generic framework activity (i.e., communication, planning, modeling, construction, deployment) are measured

14 Metrics in Project Domain
Project metrics are used to:
– Minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks
– Assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality
Benefits:
– As quality improves, defects are minimized
– As defects go down, the amount of rework required during the project is also reduced
– As rework goes down, the overall project cost is reduced

15 MORE ON METRICS

16 Measures, Metrics and Indicators
A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
The IEEE glossary defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute."
An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.

17 A Good Manager Measures
(diagram: process measurement yields process metrics, project measurement yields project metrics, product measurement yields product metrics; what do we use as a basis? size? function?)

18 Why Do We Measure?
– To assess the status of an ongoing project
– To track potential risks
– To uncover problem areas before they go "critical"
– To adjust work flow or tasks
– To evaluate the project team's ability to control the quality of software work products

19 Process Measurement
We measure the efficacy of a software process indirectly: we derive a set of metrics based on the outcomes that can be derived from the process. Outcomes include measures of:
– Errors uncovered before release of the software
– Defects delivered to and reported by end users
– Work products delivered (productivity)
– Human effort expended
– Calendar time expended
– Schedule conformance
– Other measures
We also derive process metrics by measuring the characteristics of specific software engineering tasks.

20 Process Metrics Guidelines
– Use common sense and organizational sensitivity when interpreting metrics data.
– Provide regular feedback to the individuals and teams who collect measures and metrics.
– Don't use metrics to appraise individuals.
– Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
– Never use metrics to threaten individuals or teams.
– Metrics data that indicate a problem area should not be considered "negative"; these data are merely an indicator for process improvement.
– Don't obsess on a single metric to the exclusion of other important metrics.

21 Software Process Improvement
(diagram: the process model and improvement goals feed software process improvement (SPI); process metrics are fed back into SPI, which produces process improvement recommendations)

22 Process Metrics
– Quality-related: focus on the quality of work products and deliverables
– Productivity-related: production of work products relative to the effort expended
– Statistical SQA data: error categorization and analysis
– Defect removal efficiency: propagation of errors from process activity to activity
– Reuse data: the number of components produced and their degree of reusability

23 Project Metrics
– Effort/time per software engineering task
– Errors uncovered per review hour
– Scheduled vs. actual milestone dates
– Changes (number) and their characteristics
– Distribution of effort across software engineering tasks

24 A. Metrics for Requirements Model
The function point (FP) metric, first proposed by Albrecht [1979], can be used effectively as a means for measuring the functionality delivered by a system. Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity.
Information domain values are defined in the following manner:
– Number of external inputs (EIs)
– Number of external outputs (EOs)
– Number of external inquiries (EQs)
– Number of internal logical files (ILFs)
– Number of external interface files (EIFs)
Metrics for specification quality were proposed by Davis [1993].

25 B. Metrics for Design Model
1. Architectural Design Metrics
2. Object-Oriented Design Metrics
3. Class-Oriented Metrics

26 1. Architectural Design Metrics
Architectural design metrics, defined by Card and Glass [1990]:
– Structural complexity = g(fan-out)
– Data complexity = f(input & output variables, fan-out)
– System complexity = h(structural & data complexity)
Related measures:
– HK metric: architectural complexity as a function of fan-in and fan-out
– Morphology metrics: a function of the number of modules and the number of interfaces between modules
– Design Structure Quality Index (DSQI), proposed by the U.S. Air Force Systems Command, 1987
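
A minimal sketch of the Card and Glass measures, assuming the commonly cited instantiations of the three functions: S(i) = fan-out(i)^2, D(i) = v(i) / (fan-out(i) + 1), and C(i) = S(i) + D(i), summed over all modules i. The module data are hypothetical:

def structural_complexity(fan_out: int) -> float:
    # S(i): modules with high fan-out are structurally more complex
    return fan_out ** 2

def data_complexity(num_io_vars: int, fan_out: int) -> float:
    # D(i): input/output variables, damped by fan-out
    return num_io_vars / (fan_out + 1)

def system_complexity(modules) -> float:
    # modules: iterable of (fan_out, num_io_vars) pairs, one per module
    return sum(structural_complexity(f) + data_complexity(v, f)
               for f, v in modules)

print(system_complexity([(3, 8), (1, 4), (0, 2)]))  # (9+2) + (1+2) + (0+2) = 16.0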

27 2. Object-Oriented Metrics
Whitmire [1997] described nine measurable characteristics of OO design, including:
– Number of scenario scripts (use cases)
– Number of support classes (required to implement the system but not immediately related to the problem domain)
– Average number of support classes per key class (analysis class)
– Number of subsystems (an aggregation of classes that supports a function visible to the end user of a system)

28 3. Class-Oriented Metrics
The CK metrics suite, proposed by Chidamber and Kemerer [Chi94]:
– Weighted methods per class (WMC)
– Depth of the inheritance tree (DIT)
– Number of children (NOC)
– Coupling between object classes (CBO)
– Response for a class (RFC)
– Lack of cohesion in methods (LCOM)
The MOOD metrics suite [Har98b]:
– Method inheritance factor
– Coupling factor
– Polymorphism factor
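
As a toy illustration (not a full CK implementation), two of the CK metrics can be computed for live Python classes by introspection. The class hierarchy below is hypothetical, and DIT here follows the convention that a root domain class sits at depth 0:

class Account: ...
class SavingsAccount(Account): ...
class CheckingAccount(Account): ...
class StudentChecking(CheckingAccount): ...

def dit(cls) -> int:
    # Depth of the inheritance tree: length of the method resolution order,
    # excluding the class itself and the built-in root `object`
    return len(cls.__mro__) - 2

def noc(cls) -> int:
    # Number of children: immediate subclasses only
    return len(cls.__subclasses__())

print(dit(StudentChecking), noc(Account))  # 2 2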

29 C. Metrics for Source Code
Halstead's Software Science [1977] is based on the count and occurrence of operators and operands within a component or program:
– n1 = number of distinct operators in the program
– n2 = number of distinct operands in the program
– N1 = total number of operator occurrences
– N2 = total number of operand occurrences
Program length: N = N1 + N2
Estimated length: N̂ = n1 log2 n1 + n2 log2 n2
Volume: V = N log2(n1 + n2)
It should be noted that Halstead's "laws" have generated substantial controversy, and many believe that the underlying theory has flaws. However, experimental verification for selected programming languages has been performed (e.g., [FEL89]).
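
A minimal sketch of these measures, assuming the operator and operand occurrences have already been extracted from the source (tokenization is language-specific and omitted here):

from math import log2

def halstead(operators, operands):
    # operators/operands: lists of token occurrences
    n1, n2 = len(set(operators)), len(set(operands))
    N1, N2 = len(operators), len(operands)
    length = N1 + N2                            # program length N
    est_length = n1 * log2(n1) + n2 * log2(n2)  # estimated length N-hat
    volume = length * log2(n1 + n2)             # volume V
    return length, est_length, volume

# Toy token stream for the statement `x = x + 1`
print(halstead(["=", "+"], ["x", "x", "1"]))  # (5, 4.0, 10.0)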

30 D. Metrics for Web Applications
– Number of static Web pages (the end user has no control over the content displayed on the page)
– Number of dynamic Web pages (end-user actions result in customized content displayed on the page)
– Number of internal page links (pointers that provide a hyperlink to some other Web page within the WebApp)
– Number of persistent data objects
– Number of external systems interfaced
– Number of static content objects
– Number of dynamic content objects
– Number of executable functions

31 E. Metrics for Testing
Testing effort can also be estimated using metrics derived from Halstead measures. Binder [Bin94] suggests a broad array of design metrics that have a direct influence on the "testability" of an OO system:
– Lack of cohesion in methods (LCOM)
– Percent public and protected (PAP)
– Public access to data members (PAD)
– Number of root classes (NOR)
– Fan-in (FIN)
– Number of children (NOC) and depth of the inheritance tree (DIT)

32 F. Maintenance Metrics
IEEE Std. 982.1-1988 [IEE94] suggests a software maturity index (SMI) that provides an indication of the stability of a software product (based on changes that occur for each release of the product). The following information is determined:
– MT = the number of modules in the current release
– Fc = the number of modules in the current release that have been changed
– Fa = the number of modules in the current release that have been added
– Fd = the number of modules from the preceding release that were deleted in the current release
The software maturity index is computed in the following manner:
SMI = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize.
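
A direct sketch of the SMI formula above; the release figures are hypothetical:

def software_maturity_index(mt: int, fa: int, fc: int, fd: int) -> float:
    # mt: modules in the current release; fa/fc/fd: modules added/changed/deleted
    return (mt - (fa + fc + fd)) / mt

# Hypothetical release: 940 modules, of which 10 were added, 40 changed, 5 deleted
print(round(software_maturity_index(940, 10, 40, 5), 3))  # 0.941, fairly stable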

33 G. Metrics for Measuring Quality
– Correctness: the degree to which a program operates according to specification
– Maintainability: the degree to which a program is amenable to change
– Integrity: the degree to which a program is impervious to outside attack
– Usability: the degree to which a program is easy to use

34 G. Metrics for Measuring Quality
Defect removal efficiency:
DRE = E / (E + D)
where:
– E is the number of errors found before delivery of the software to the end user
– D is the number of defects found after delivery
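
A one-function sketch of DRE, with hypothetical defect counts:

def dre(errors_before_delivery: int, defects_after_delivery: int) -> float:
    # Fraction of all problems caught before the software reached end users
    return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

print(dre(95, 5))  # 0.95: 95% of defects were removed before release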

35 SOFTWARE MEASUREMENTS

36 Categories for Software Measurement
Two categories of software measurement:
– Direct measures of the software process (cost, effort, etc.) and of the software product (lines of code produced, execution speed, defects reported over time, etc.)
– Indirect measures of the software product (functionality, quality, complexity, efficiency, reliability, maintainability, etc.)
Project metrics can be consolidated to create process metrics for an organization.

37 KLOC
Size-oriented metrics are derived by normalizing quality and/or productivity measures by the size of the software produced. A thousand lines of code (KLOC) is often chosen as the normalization value. Metrics include:
– Errors per KLOC
– Errors per person-month
– Defects per KLOC
– KLOC per person-month
– Dollars per KLOC
– Pages of documentation per KLOC
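
A small sketch of size-normalized metrics, assuming simple project totals are available (all figures hypothetical):

def per_kloc(count: float, loc: int) -> float:
    # Normalize a raw count by program size in thousands of lines of code
    return count / (loc / 1000)

loc = 33_200  # delivered lines of code
print(round(per_kloc(134, loc), 2))  # errors per KLOC: 4.04
print(round(per_kloc(29, loc), 2))   # defects per KLOC: 0.87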

38 KLOC
Size-oriented metrics are not universally accepted as the best way to measure the software process. Opponents argue that KLOC measurements:
– Are dependent on the programming language
– Penalize well-designed but short programs
– Cannot easily accommodate nonprocedural languages
– Require a level of detail that may be difficult to achieve
There are further problems:
– Counting is ambiguous: a line of assembler does not mean the same thing as a line in a high-level language
– What to count? Blank lines, comments, data definitions, only executable lines?
– Problematic for productivity studies: the amount of LOC is negatively correlated with design efficiency

39 Example: LOC Approach
Average productivity for systems of this type = 620 LOC/pm. With a burdened labor rate of $8,000 per month, the cost per line of code is approximately $13. Based on the LOC estimate and the historical productivity data, the total estimated project cost is $431,000 and the estimated effort is 54 person-months.
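
A worked sketch of this estimate. The size estimate itself is not on the slide; 33,200 LOC is assumed here, the figure used in Pressman's version of this example:

loc_estimate = 33_200  # assumed size estimate (from Pressman's example)
productivity = 620     # LOC per person-month, historical average
labor_rate = 8_000     # burdened cost in $ per person-month

cost_per_loc = labor_rate / productivity         # ~$12.90, rounded to ~$13
effort_pm = loc_estimate / productivity          # ~53.5, rounded to ~54 person-months
total_cost = loc_estimate * round(cost_per_loc)  # ~$431,600, quoted as ~$431,000

print(round(cost_per_loc, 2), round(effort_pm), total_cost)  # 12.9 54 431600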

40 Function Point (FP)
Based on a combination of program characteristics:
– External inputs and outputs
– User interactions
– External interfaces
– Files used by the system
A weight is associated with each of these. The function point count is computed by multiplying each raw count by its weight and summing all values.

41 Function Point (FP)
The function point count is modified by the complexity of the project. FPs can be used to estimate LOC, depending on a language factor:
– LOC = AVC × number of function points
– AVC is a language-dependent factor
FPs can be very subjective and depend on the estimator; automatic function-point counting may be impossible.

42 Function Point (FP)
System elements and their complexity (the standard Albrecht/IFPUG weights, listed as simple / average / complex):
– External inputs (EIs): 3 / 4 / 6
– External outputs (EOs): 4 / 5 / 7
– External inquiries (EQs): 3 / 4 / 6
– Internal logical files (ILFs): 7 / 10 / 15
– External interface files (EIFs): 5 / 7 / 10

43 Function Point (FP)
Count the following:
– Inputs
– Outputs
– Inquiries
– Logical internal files
– External interfaces
Apply the "simple, average, complex" multipliers, then apply the 14 adjustment factors (such as: designed for reuse? in a heavy-traffic environment? etc.); a sketch of the unadjusted count follows below.
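
A minimal sketch of the unadjusted function-point (UFP) count, using the standard weights from the previous slide; the element counts are hypothetical:

WEIGHTS = {  # element: (simple, average, complex)
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def ufp(counts):
    # counts: {element: (n_simple, n_average, n_complex)}
    return sum(n * w
               for elem, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[elem]))

print(ufp({"EI": (3, 5, 2), "EO": (2, 4, 1), "EQ": (0, 3, 0),
           "ILF": (1, 2, 0), "EIF": (0, 1, 0)}))  # 122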

44 Function Point (FP)
Compute the technical complexity factor (TCF):
– Assign a value from 0 ("not present") to 5 ("strong influence throughout") to each of 14 factors such as transaction rates and portability
– Add the 14 numbers to obtain the total degree of influence (DI)
– TCF = 0.65 + 0.01 × DI, so the TCF lies between 0.65 and 1.35
The number of function points (FP) is given by: FP = UFP × TCF
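
The adjustment step above, as a short sketch (the ratings are hypothetical):

def adjusted_fp(ufp: float, ratings: list[int]) -> float:
    assert len(ratings) == 14 and all(0 <= r <= 5 for r in ratings)
    di = sum(ratings)       # total degree of influence, 0..70
    tcf = 0.65 + 0.01 * di  # technical complexity factor, 0.65..1.35
    return ufp * tcf

# 320 unadjusted FP with mid-range ratings (DI = 42):
print(adjusted_fp(320, [3] * 14))  # 320 * 1.07 = 342.4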

45 Function Point (FP)
Converting function points to lines of code: a table of LOC per function point, by language (commonly cited approximate figures: assembly ≈ 320, C ≈ 128, COBOL ≈ 106, C++ ≈ 64 LOC per FP).

46 Function Point (FP)
Like the KLOC measure, function points have both proponents and opponents.
Proponents claim that:
– FP is programming-language independent
– FP is based on data that are more likely to be known in the early stages of a project, making it more attractive as an estimation approach
Opponents claim that:
– FP requires some "sleight of hand" because the computation is based on subjective data
– Counts of the information domain can be difficult to collect after the fact
– FP has no direct physical meaning; it's just a number

47 Function Point (FP)
The relationship between LOC and FP depends upon:
– The programming language that is used to implement the software
– The quality of the design
FP and LOC have been found to be relatively accurate predictors of software development effort and cost; however, a historical baseline of information must first be established.
LOC and FP can be used to estimate object-oriented software projects; however, they do not provide enough granularity for the schedule and effort adjustments required in the iterations of an evolutionary or incremental process.

48 Example: FP Approach
The estimated number of function points is derived as:
FP(estimated) = count-total × [0.65 + 0.01 × Σ(Fi)]
– FP(estimated) = 375
– Organizational average productivity = 6.5 FP/pm
– Burdened labor rate = $8,000 per month, i.e., approximately $1,230 per FP
Based on the FP estimate and the historical productivity data, the total estimated project cost is $461,000 and the estimated effort is 58 person-months.
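
The same arithmetic as a short sketch:

fp_estimate = 375
productivity = 6.5  # FP per person-month, organizational average
labor_rate = 8_000  # burdened cost in $ per person-month

cost_per_fp = labor_rate / productivity  # ~$1,231, quoted as ~$1,230/FP
effort_pm = fp_estimate / productivity   # ~57.7, rounded to ~58 person-months
total_cost = fp_estimate * cost_per_fp   # ~$461,538, quoted as ~$461,000

print(round(cost_per_fp), round(effort_pm), round(total_cost))  # 1231 58 461538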

49 Summary
– What are metrics?
– Software measurement
– Integrating metrics in the software process
– FP estimation example

