
ECE 453 – CS 447 – SE 465: Software Testing & Quality Assurance, Lecture 22. Instructor: Paulo Alencar.


1 ECE 453 – CS 447 – SE 465: Software Testing & Quality Assurance, Lecture 22. Instructor: Paulo Alencar

2 Overview
– Software Quality Metrics
– Black Box Metrics
– White Box Metrics
– Development Estimates
– Maintenance Estimates

3 Software Metrics
Black Box Metrics:
– Function Points
– COCOMO
White Box Metrics:
– LOC
– Halstead's Software Science
– McCabe's Cyclomatic Complexity
– Information Flow Metric
– Syntactic Interconnection

4 Software Metrics
Software project planning consists of two primary tasks:
– analysis
– estimation (programmer-months of effort, development interval, staffing levels, testing and maintenance costs, etc.)
Important: the most common cause of software development failure is poor (overly optimistic) planning.

5 Software Metrics
Risks in estimation can be illustrated as: [figure omitted from transcript]
A historical database must be kept to assist in future estimation.

6 Black Box Metrics
We examine the following metrics and their potential use in software testing and maintenance:
– Function-Oriented Metrics: Feature Points, Function Points
– COCOMO

7 Function-Oriented Metrics
– Mainly used in business applications.
– The focus is on program functionality.
– A measure of the information domain plus a subjective assessment of complexity.
– The most common are function points (FP) and feature points.
Reference: R. S. Pressman, "Software Engineering: A Practitioner's Approach", 3rd Edition, McGraw-Hill, Chapter 2.

8 Function Points
The function point metric is evaluated using the following table:

                              Weighting Factor
Parameter                  Count   Simple  Average  Complex    Weight
# of user inputs           ___  ×    3       4        6     =  ___
# of user outputs          ___  ×    4       5        7     =  ___
# of user inquiries        ___  ×    3       4        6     =  ___
# of files                 ___  ×    7      10       15     =  ___
# of external interfaces   ___  ×    5       7       10     =  ___
                                                Total_weight = ___
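The weighted count above can be sketched in a few lines of code. The counts here are hypothetical, chosen only to illustrate the arithmetic; the weights are the "average" column of the table:

```python
# "Average" weights from the function point table.
average_weights = {
    "user_inputs": 4,
    "user_outputs": 5,
    "user_inquiries": 4,
    "files": 10,
    "external_interfaces": 7,
}

# Hypothetical counts for an example system (not from the slides).
counts = {
    "user_inputs": 6,
    "user_outputs": 8,
    "user_inquiries": 5,
    "files": 3,
    "external_interfaces": 2,
}

# Total_weight = sum over parameters of (count x weight).
count_total = sum(counts[k] * average_weights[k] for k in counts)
print(count_total)  # 6*4 + 8*5 + 5*4 + 3*10 + 2*7 = 128
```

In practice each parameter would be individually classified as simple, average, or complex before multiplying; using one column throughout is a simplification.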

9 Function Points
The following relationship is used to compute function points:
FP = count_total × (0.65 + 0.01 × Σ Fi)
where count_total is the Total_weight from the table above and the Fi are complexity adjustment values.

10 Function Points
where Fi (i = 1 to 14) are complexity adjustment values based on the questions below:
1. Reliable backup/recovery needed?
2. Any data communications needed?
3. Any distributed processing functions?
4. Performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Any on-line data entry?
7. Does on-line data entry need multiple screens/operations?

11 Function Points
8. Are master files updated on-line?
9. Are inputs, outputs, or queries complex?
10. Is internal processing complex?
11. Must code be reusable?
12. Are conversion and installation included in the design?
13. Are multiple installations in different organizations needed in the design?
14. Is the application designed to facilitate change and ease of use by the user?

12 Function Points
Each of the Fi criteria is given a rating of 0 to 5:
– No Influence = 0; Incidental = 1;
– Moderate = 2; Average = 3;
– Significant = 4; Essential = 5

13 Function-Oriented Metrics
Once function points are calculated, they are used in a manner analogous to LOC as a measure of software productivity, quality, and other attributes, e.g.:
– productivity: FP/person-month
– quality: faults/FP
– cost: $$/FP
– documentation: doc_pages/FP

14 Function Points
Example: [worked example omitted from transcript]

15 Function Points
Example: your PBX project.
– Total of FPs (count_total) = 25
– F4 = 4, F10 = 4; all other Fi are set to 0. Sum of all Fi = 8.
– FP = 25 × (0.65 + 0.01 × 8) = 18.25
– Lines of code in C = 18.25 × 128 LOC/FP = 2336 LOC
For this example, developers implemented their projects in about 2500 LOC, which is very close to the predicted value of 2336 LOC.
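The PBX calculation can be checked mechanically. The 128 LOC/FP figure is the slide's assumed expansion factor for C:

```python
count_total = 25          # Total_weight from the FP table
f = [0] * 14              # the 14 complexity adjustment values F1..F14
f[3] = 4                  # F4: performance critical
f[9] = 4                  # F10: complex internal processing

# FP = count_total x (0.65 + 0.01 x sum of Fi)
fp = count_total * (0.65 + 0.01 * sum(f))

# Assumed language expansion factor for C, from the slide.
loc = fp * 128

print(fp, loc)  # approximately 18.25 FP, approximately 2336 LOC
```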

16 Feature Point Metrics
Feature points represent the same thing: the "functionality" delivered by the software. The measurement parameters and weights are:
– Number of user inputs: weight = 4
– Number of user outputs: weight = 5
– Number of user inquiries: weight = 4
– Number of files: weight = 7
– Number of external interfaces: weight = 7
– Number of algorithms: weight = 3
Total_weight or total_count = ?
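The feature point total follows the same pattern as function points, with the extra "algorithms" parameter. The counts below are hypothetical, used only to show the computation:

```python
# Feature point weights from the slide above.
feature_point_weights = {
    "user_inputs": 4,
    "user_outputs": 5,
    "user_inquiries": 4,
    "files": 7,
    "external_interfaces": 7,
    "algorithms": 3,
}

# Hypothetical counts for an example system (not from the slides).
counts = {
    "user_inputs": 6,
    "user_outputs": 8,
    "user_inquiries": 5,
    "files": 3,
    "external_interfaces": 2,
    "algorithms": 10,
}

total_count = sum(counts[k] * feature_point_weights[k] for k in counts)
print(total_count)  # 24 + 40 + 20 + 21 + 14 + 30 = 149
```

The total would then be adjusted with the same 0.65 + 0.01 × ΣFi factor used for function points.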

17 COCOMO
The overall resources for a software project must be estimated:
– development costs (i.e., programmer-months)
– development interval
– staffing levels
– maintenance costs
General approaches include:
– expert judgment (e.g., past experience times a judgmental factor accounting for differences)
– algorithmic (empirical models)

18 Empirical Estimation Models
Several models exist, with varying success and ease/difficulty of use. We consider COCOMO (the COnstructive COst MOdel).
Decompose the software into units small enough that their LOC can be estimated.
Definitions:
– KDSI: kilo (thousands of) delivered source instructions (statements), not including comments, test drivers, etc.
– PM: person-months
There are 3 levels of COCOMO models: Basic, Intermediate, and Detailed (we will not cover the last one here).

19 COCOMO
Model 1: Basic
Apply the following formulae to get rough estimates (the coefficients shown are for organic-mode projects):
– PM = 2.4 (KDSI)^1.05
– TDEV = 2.5 (PM)^0.38 (chronological months)

20 Effort estimates (©Ian Sommerville 1995)
[table omitted from transcript; the Basic COCOMO effort formulas by mode, as used in the examples on the next slide, are:]
– Organic: PM = 2.4 (KDSI)^1.05
– Semi-detached: PM = 3.0 (KDSI)^1.12
– Embedded: PM = 3.6 (KDSI)^1.20

21 COCOMO examples (©Ian Sommerville 1995)
Organic-mode project, 32 KLOC:
– PM = 2.4 (32)^1.05 = 91 person-months
– TDEV = 2.5 (91)^0.38 = 14 months
– N = 91/14 = 6.5 people
Embedded-mode project, 128 KLOC:
– PM = 3.6 (128)^1.2 = 1216 person-months
– TDEV = 2.5 (1216)^0.32 = 24 months
– N = 1216/24 = 51 people
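Both examples can be reproduced directly from the Basic COCOMO formulas; the coefficients below are the ones used on the slide:

```python
def basic_cocomo(kdsi, a, b, c=2.5, d=0.38):
    """Basic COCOMO: effort PM = a * KDSI^b, schedule TDEV = c * PM^d."""
    pm = a * kdsi ** b
    tdev = c * pm ** d
    return pm, tdev

# Organic-mode project, 32 KLOC.
pm, tdev = basic_cocomo(32, a=2.4, b=1.05, c=2.5, d=0.38)
print(round(pm), round(tdev), round(pm / tdev, 1))  # 91 PM, 14 months, ~6.6 people

# Embedded-mode project, 128 KLOC.
pm_e, tdev_e = basic_cocomo(128, a=3.6, b=1.20, c=2.5, d=0.32)
print(round(pm_e), round(tdev_e), round(pm_e / tdev_e))  # 1216 PM, 24 months, ~51 people
```

Note the scale diseconomy: quadrupling size (32 → 128 KLOC) in a harder mode multiplies effort by more than 13×, while the average staffing level N = PM/TDEV grows from about 7 to about 51 people.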

22 COCOMO
Model 2: Intermediate
Step I: obtain the nominal effort estimate as:
– PM_NOM = a_i (KDSI)^b_i
where:
Mode            a_i    b_i
Organic         3.2    1.05
Semi-detached   3.0    1.12
Embedded        2.8    1.20

23 COCOMO
– organic: small software team; familiar, in-house environment; extensive experience; negotiable specifications.
– embedded: firm, tight constraints (e.g., a hardware SRS); generally less-known territory.
– semi-detached: in between.
Step II: determine the effort multipliers:
– from a table of 15 attributes in total, each rated on a 6-point scale.

24 COCOMO
Four attribute groups:
1. product attributes: required reliability, product complexity;
2. computer attributes: constraints on execution time and primary memory, virtual machine environment volatility (h/w & s/w), turnaround time;
3. personnel attributes: analyst and programmer capability, application experience, VM experience, PL experience;
4. project attributes: modern programming practices, s/w tools used, schedule realism.

25 COCOMO
A total of 15 attributes, each rated on a 6-point scale:
very low – low – nominal – high – very high – extra high
Use the COCOMO multiplier table to calculate the effort adjustment factor (EAF) as the product of the 15 effort multipliers:
EAF = ∏ multiplier_i, i = 1 to 15
Step III: estimate the development effort as:
PM_DEV = PM_NOM × EAF

26 COCOMO
Step IV: estimate the related resources as:
T_DEV = c_i (PM_DEV)^d_i
where:
Mode            c_i    d_i
Organic         2.5    0.38
Semi-detached   2.5    0.35
Embedded        2.5    0.32
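Steps I–IV of the Intermediate model can be sketched as one function. The multiplier values in the example are hypothetical ratings, not from the slides; the mode coefficients are the ones tabulated above:

```python
import math

# Nominal-effort coefficients (step I) and schedule coefficients (step IV).
MODES = {
    "organic":       {"a": 3.2, "b": 1.05, "c": 2.5, "d": 0.38},
    "semi-detached": {"a": 3.0, "b": 1.12, "c": 2.5, "d": 0.35},
    "embedded":      {"a": 2.8, "b": 1.20, "c": 2.5, "d": 0.32},
}

def intermediate_cocomo(kdsi, mode, multipliers):
    m = MODES[mode]
    pm_nom = m["a"] * kdsi ** m["b"]      # step I: nominal effort
    eaf = math.prod(multipliers)          # step II: product of the 15 ratings
    pm_dev = pm_nom * eaf                 # step III: adjusted development effort
    tdev = m["c"] * pm_dev ** m["d"]      # step IV: development schedule
    return pm_dev, tdev

# Hypothetical ratings: high required reliability (1.15), weak tool support (1.10),
# all other 13 attributes nominal (1.0).
pm_dev, tdev = intermediate_cocomo(32, "organic", [1.15, 1.10] + [1.0] * 13)
print(round(pm_dev), round(tdev))  # about 154 PM over about 17 months
```

With all 15 multipliers at nominal (1.0), EAF = 1 and the model reduces to the nominal estimate of step I.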

27 Estimating Software Maintenance Costs
A basic question: what is the number of programmers needed to maintain a software system?
A simple guesstimate may be: [formula omitted from transcript]

28 Estimating Software Maintenance Costs
Alternatives (by Boehm):
– define the ACT (annual change traffic) as the fraction of statements changed per year.
Level 1 model: PM_AM = ACT × PM_DEV
where AM = annual maintenance.

29 Estimating Software Maintenance Costs
Level 2 model: PM_AM = ACT × PM_DEV × EAF_M
where EAF_M may differ from the EAF for development because of different personnel experience levels, motivation, etc.
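The two maintenance models differ only in the adjustment factor; with EAF_M = 1 the level 2 model reduces to level 1. The ACT value and effort figure below are hypothetical, chosen for illustration:

```python
def annual_maintenance_effort(act, pm_dev, eaf_m=1.0):
    """Boehm's maintenance model: level 1 when eaf_m == 1.0, level 2 otherwise."""
    return act * pm_dev * eaf_m

# Hypothetical: 15% of statements changed per year on a 91-PM development.
pm_am = annual_maintenance_effort(0.15, 91)          # level 1
pm_am2 = annual_maintenance_effort(0.15, 91, 0.8)    # level 2: experienced maintenance staff
print(pm_am, pm_am2)  # about 13.65 vs about 10.92 person-months per year
```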

30 Estimating Software Maintenance Costs
Factors which influence software productivity (and also maintenance):
1. People factors: the size and expertise of the development (maintenance) organization.
2. Problem factors: the complexity of the problem and the number of changes to design constraints or requirements.

31 Estimating Software Maintenance Costs
3. Process factors: the analysis, design, and test techniques; languages; CASE tools; etc.
4. Product factors: the required reliability and performance of the system.
5. Resource factors: the availability of CASE tools, hardware, and software resources.

