1 Chapter 23: Product Metrics

2 A Framework For Product Metrics
Measure, Metric, And Indicator. A measure provides an indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process. Measurement is the act of determining a measure. A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. When a single data point has been collected (e.g. the number of errors uncovered within a single software component), a measure has been established.

3 Measurement occurs as the result of the collection of one or more data points (e.g. a number of component reviews and unit tests are investigated to collect a measure of the number of errors in each). A software metric relates the individual measures in some way (e.g. the average number of errors found per review, the average number of errors found per unit test). An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.

4 The Challenge Of Product Metrics
Over the past four decades, many researchers have attempted to develop a single metric that provides a comprehensive measure of software complexity. Although dozens of complexity measures have been proposed, each takes a somewhat different view of what complexity is and what attributes of a system lead to complexity. The danger of attempting to find measures that characterize so many different attributes is that the measures inevitably have to satisfy conflicting aims.

5 Measurement Principles
Before introducing a series of product metrics that (1) assist in the evaluation of the analysis and design models, (2) provide an indication of the complexity of procedural designs and source code, and (3) facilitate the design of more effective testing, it is important to understand basic measurement principles. A measurement process can be characterized by five activities:
1) Formulation: the derivation of software measures and metrics appropriate for the representation of the software that is being considered.
2) Collection: the mechanism used to accumulate

6 data required to derive the formulated metrics.
3) Analysis: the computation of metrics and the application of mathematical tools.
4) Interpretation: the evaluation of metrics, resulting in insight into the quality of the representation.
5) Feedback: recommendations derived from the interpretation of product metrics, transmitted to the software team.
The following principles are representative of many that can be proposed for metrics characterization and validation:
1) A metric should have desirable mathematical properties.

7 2) When a metric represents a software characteristic that increases when positive traits occur or decreases when undesirable traits are encountered, the value of the metric should increase or decrease in the same manner.
3) Each metric should be validated empirically in a wide variety of contexts before being published or used to make decisions.
The principles for the collection and analysis activities: (1) whenever possible, data collection and analysis should be automated, (2) valid statistical techniques should be applied to establish relationships between internal product attributes and external quality characteristics, and (3) interpretative guidelines and recommendations should be established for each metric.

8 Goal-Oriented Software Measurement
The Goal/Question/Metric (GQM) paradigm has been developed as a technique for identifying meaningful metrics for any part of the software process. It emphasizes the need to (1) establish an explicit measurement goal that is specific to the process activity or product characteristic that is to be assessed, (2) define a set of questions that must be answered in order to achieve the goal, and (3) identify well-formulated metrics that help to answer these questions.

9 A goal definition template can be used to define each measurement goal. The template takes the form:
Analyze {the name of the activity or attribute to be measured} for the purpose of {the overall objective of the analysis} with respect to {the aspect of the activity or attribute that is considered} from the viewpoint of {the people who have an interest in the measurement} in the context of {the environment in which the measurement takes place}.

10 E.g. Analyze the SafeHome software architecture for the purpose of evaluating architectural components with respect to the ability to make SafeHome more extensible from the viewpoint of the software engineers performing the work in the context of product enhancement over the next three years.
With a measurement goal explicitly defined, a set of questions is developed. E.g. Is the complexity of each component within the bounds that facilitate modification and extension? Answers to these questions help to determine whether the measurement goal has been achieved.

11 The Attributes Of Effective Software Metrics
A set of attributes that should be encompassed by effective software metrics:
1. Simple and computable: it should be easy to learn how to derive the metric, and its computation should not demand inordinate effort or time.
2. Empirically and intuitively persuasive: the metric should satisfy the engineer's intuitive notions about the product attribute under consideration.
3. Consistent and objective: the metric should yield results that are unambiguous.

12 4. Consistent in its use of units and dimensions: the mathematical computation of the metric should use measures that do not lead to bizarre combinations of units.
5. Programming language independent: metrics should be based on the requirements model, the design model, or the structure of the program itself. They should not depend on programming language syntax and semantics.
6. An effective mechanism for high-quality feedback: the metric should provide information that can lead to a higher-quality work product.

13 Metrics For The Requirements Model
Function-Based Metrics. The function point (FP) metric can be used as a means for measuring the functionality delivered by an application and as a normalization value. The FP metric can be used to (1) estimate the cost or effort required to design, code, and test the software, (2) predict the number of errors that will be encountered during testing, and (3) forecast the number of components and/or the number of projected source lines in the implemented system. Function points are calculated from directly measurable items and an assessment of software complexity.

14 The directly measurable items (also called information domain values) are as follows:
1) Number of external inputs: each external input originates from a user or is transmitted from another application and provides distinct application-oriented data or control information. Inputs are often used to update internal logical files.
2) Number of external outputs: each external output is derived data within the application that provides information to the end user, e.g. reports, screens, and error messages. Individual data items within a report or screen are not counted separately.
3) Number of external inquiries: an external inquiry is an online input that results in the generation of some immediate software response in the form of an online output.

15 4) Number of internal logical files: each internal logical file is a logical grouping of data that resides within the application's boundary and is maintained via external inputs.
5) Number of external interface files: each external interface file is a logical grouping of data that resides external to the application but provides information that may be of use to the application.
Once these data have been collected, the table in the following figure is completed and a complexity value is associated with each count. Organizations that use the function point method develop criteria for determining whether a particular entry is simple, average, or complex. This determination is subjective.

16 The formula for the function point is as follows:
FP = count total × [0.65 + 0.01 × ∑Fi]
where count total is the sum of all FP entries, and the Fi (i = 1 to 14) are value adjustment factors based on 14 questions.

Parameter           | Count | Simple | Average | Complex | Product
User Inputs         | N1    | 3      | 4       | 6       | N1 × (3, 4, or 6)
User Outputs        | N2    | 4      | 5       | 7       | N2 × (4, 5, or 7)
Inquiries           | N3    | 3      | 4       | 6       | N3 × (3, 4, or 6)
Files               | N4    | 7      | 10      | 15      | N4 × (7, 10, or 15)
External Interfaces | N5    | 5      | 7       | 10      | N5 × (5, 7, or 10)
Count total         |       |        |         |         | ∑

17 The Fi (i = 1 to 14) are value adjustment factors (VAF) based on the responses to the following questions:
1. Reliable backup and recovery required?
2. Data communications required?
3. Distributed processing?
4. Performance critical?
5. Will the system run in an existing, heavily used environment?
6. Online data entry required?
7. Online data entry over multiple screens?
8. Master files updated online?
9. Are inputs, outputs, files, and inquiries complex?
10. Is internal processing complex?
11. Is the code designed to be reusable?
12. Conversion and installation included in the design?
13. Multiple installations?
14. Designed for ease of use?

18 Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential).

19 Example: assume that ∑Fi = 46.

Parameter           | Count | Simple | Average | Complex | Product
User Inputs         | 3     | 3      | 4       | 6       | 9
User Outputs        | 2     | 4      | 5       | 7       | 8
Inquiries           | 2     | 3      | 4       | 6       | 6
Files               | 1     | 7      | 10      | 15      | 7
External Interfaces | 4     | 5      | 7       | 10      | 28
Count total         |       |        |         |         | 58

FP = count total × [0.65 + 0.01 × ∑Fi] = 58 × [0.65 + 0.46] = 58 × 1.11 ≈ 64
Based on the FP value, the project team can estimate the overall implemented size of the function.
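As a rough sketch, the FP computation from the two slides above might be implemented as follows. The weight table is taken from the slide; the parameter names and the example counts mirror the worked example (count total 58, ∑Fi = 46):

```python
# (simple, average, complex) weights per information-domain value, from the slide
WEIGHTS = {
    "user_inputs":         (3, 4, 6),
    "user_outputs":        (4, 5, 7),
    "inquiries":           (3, 4, 6),
    "files":               (7, 10, 15),
    "external_interfaces": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, fi_sum):
    """counts: {parameter: (count, complexity level)}; fi_sum: sum of the 14 VAF answers."""
    count_total = sum(n * WEIGHTS[p][LEVEL[lvl]] for p, (n, lvl) in counts.items())
    return count_total * (0.65 + 0.01 * fi_sum)

# the example from the slide: count total = 58, sum(Fi) = 46  ->  FP ~ 64
counts = {
    "user_inputs":         (3, "simple"),   # 3 x 3 = 9
    "user_outputs":        (2, "simple"),   # 2 x 4 = 8
    "inquiries":           (2, "simple"),   # 2 x 3 = 6
    "files":               (1, "simple"),   # 1 x 7 = 7
    "external_interfaces": (4, "average"),  # 4 x 7 = 28
}
print(round(function_points(counts, 46), 2))  # 64.38
```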

20 Metrics For Specification Quality
A list of characteristics that can be used to assess the quality of the requirements model and the corresponding requirements specification: specificity, completeness, correctness, understandability, verifiability, internal and external consistency, etc. Although many of these characteristics appear qualitative in nature, each can be represented using one or more metrics. Assume that there are nr requirements in a specification, such that
nr = nf + nnf

21 where nf is the number of functional requirements and nnf is the number of non-functional (e.g. performance) requirements. To determine the specificity of requirements, a metric based on the consistency of the reviewers' interpretation of each requirement is
Q1 = nui / nr
where nui is the number of requirements for which all reviewers had identical interpretations. The closer the value of Q1 is to 1, the lower the ambiguity of the specification. The completeness of functional requirements can be determined by computing the ratio
Q2 = nu / [ni × ns]

22 where nu is the number of unique functional requirements, ni is the number of inputs defined or implied by the specification, and ns is the number of states specified. The Q2 ratio measures the percentage of necessary functions that have been specified for a system. For correctness, consider the degree to which requirements have been validated:
Q3 = nc / [nc + nnv]
where nc is the number of requirements that have been validated as correct and nnv is the number of requirements that have not yet been validated.
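The three specification-quality ratios are straightforward to compute. A minimal sketch, with entirely hypothetical counts for illustration:

```python
def spec_quality(n_ui, n_r, n_u, n_i, n_s, n_c, n_nv):
    """Specificity Q1, completeness Q2, correctness Q3 (symbols as on the slides)."""
    q1 = n_ui / n_r          # fraction of requirements all reviewers read identically
    q2 = n_u / (n_i * n_s)   # unique functional requirements vs. inputs x states
    q3 = n_c / (n_c + n_nv)  # fraction of requirements validated as correct
    return q1, q2, q3

# hypothetical: 36 of 40 requirements interpreted identically; 30 unique functional
# requirements, 8 inputs, 4 states; 25 validated correct, 5 not yet validated
q1, q2, q3 = spec_quality(36, 40, 30, 8, 4, 25, 5)
print(q1, q2)  # 0.9 0.9375
```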

23 Metrics For The Design Model
Architectural Design Metrics. These metrics focus on characteristics of the program architecture. They are "black box" in that they do not require knowledge of the inner workings of any particular software component. There are three software design complexity measures: structural complexity, data complexity, and system complexity. For hierarchical architectures, the structural complexity of a module i is defined as
S(i) = fout(i)²
where fout(i) is the fan-out of module i (the number of modules directly invoked by module i).

24 Data complexity provides an indication of the complexity of the internal interface for a module i and is defined as
D(i) = v(i) / [fout(i) + 1]
where v(i) is the number of input and output variables that are passed to and from module i. System complexity is defined as the sum of structural and data complexity:
C(i) = S(i) + D(i)
As each of these complexity values increases, the overall architectural complexity of the system also increases.

25 There are some metrics that can be used to compare different program architectures:
Size = n + a, where n is the number of nodes and a is the number of arcs.
Depth = the longest path from the root node to a leaf node.
Width = the maximum number of nodes at any one level of the architecture.
The arc-to-node ratio, r = a / n, measures the connectivity density of the architecture and may provide a simple indication of the coupling of the architecture. (Figure 23.4)
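These architectural measures can be computed from a module-invocation tree. A sketch, assuming a hypothetical architecture represented as an adjacency dictionary (module names are made up):

```python
# hypothetical hierarchy: module -> list of modules it directly invokes
arch = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}

def size(tree):
    """Size = n + a (nodes plus arcs)."""
    return len(tree) + sum(len(children) for children in tree.values())

def depth(tree, node="root"):
    """Longest path (in arcs) from the root node to a leaf node."""
    children = tree[node]
    return 0 if not children else 1 + max(depth(tree, c) for c in children)

def width(tree, root="root"):
    """Maximum number of nodes at any one level of the architecture."""
    level, widest = [root], 1
    while level:
        widest = max(widest, len(level))
        level = [c for n in level for c in tree[n]]
    return widest

def arc_to_node_ratio(tree):
    """r = a / n: connectivity density of the architecture."""
    return sum(len(children) for children in tree.values()) / len(tree)

print(size(arch), depth(arch), width(arch), arc_to_node_ratio(arch))  # 9 2 2 0.8
```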

26 Information obtained from the data and architectural design can be used to derive a design structure quality index (DSQI) that ranges from 0 to 1. The following values must be ascertained to compute the DSQI:
S1 = total number of modules defined in the program architecture
S2 = number of modules whose correct function depends on the source of data input or that produce data to be used elsewhere
S3 = number of modules whose correct function depends on prior processing
S4 = number of database items
S5 = total number of unique database items

27 S6 = number of database segments
S7 = number of modules with a single entry and exit
Once the values S1 to S7 are determined for a computer program, the following intermediate values can be computed:
Program structure: D1, where D1 is defined as follows: if the architectural design was developed using a distinct method, then D1 = 1, otherwise D1 = 0.
Module independence: D2 = 1 − (S2/S1)
Modules not dependent on prior processing: D3 = 1 − (S3/S1)
Database size: D4 = 1 − (S5/S4)
Database compartmentalization: D5 = 1 − (S6/S4)
Module entrance/exit characteristics: D6 = 1 − (S7/S1)

28 With these intermediate values determined, the DSQI is computed in the following manner:
DSQI = ∑ wi Di, for i = 1 to 6
where wi is the relative weighting of the importance of each intermediate value and ∑wi = 1 (if all the Di are weighted equally, then wi = 0.167).
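Putting the S1–S7 counts and the six intermediate values together, the DSQI might be sketched as below. The D5 term (database compartmentalization, 1 − S6/S4) follows the standard DSQI definition; the example counts are hypothetical:

```python
def dsqi(s1, s2, s3, s4, s5, s6, s7, distinct_method=True, weights=None):
    """Design Structure Quality Index: weighted sum of intermediate values D1..D6."""
    d = [
        1.0 if distinct_method else 0.0,  # D1: program structure
        1 - s2 / s1,                      # D2: module independence
        1 - s3 / s1,                      # D3: modules not dependent on prior processing
        1 - s5 / s4,                      # D4: database size
        1 - s6 / s4,                      # D5: database compartmentalization
        1 - s7 / s1,                      # D6: module entrance/exit characteristics
    ]
    weights = weights or [1 / 6] * 6      # equal weighting (wi ~ 0.167) by default
    return sum(w * di for w, di in zip(weights, d))

# hypothetical program: 60 modules, 12 data-source-dependent, 6 dependent on prior
# processing, 100 database items (70 unique), 10 segments, 48 single-entry/exit modules
print(round(dsqi(60, 12, 6, 100, 70, 10, 48), 3))
```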

29 Metrics For Object-Oriented Design
Nine distinct and measurable characteristics of an OO design:
Size: size is defined in terms of four views: population, volume, length, and functionality. Population is measured by taking a static count of OO entities such as classes or operations. Volume measures are identical to population measures but are collected dynamically. Length is a measure of a chain of interconnected design elements. Functionality metrics provide an indirect indication of the value delivered to the customer by an OO application.
Complexity: complexity can be viewed by examining how the classes of an OO design are interrelated to one another.
Coupling: the physical connections between elements of the OO design represent coupling within an OO system.

30 Sufficiency: a design component is sufficient if it fully reflects all the properties of the application domain object.
Completeness: the only difference between completeness and sufficiency is the feature set against which we compare the abstraction or design component.
Cohesion: the cohesiveness of a class is determined by examining the degree to which the set of properties it possesses is part of the problem or design domain.
Primitiveness: the degree to which an operation is atomic.
Similarity: the degree to which two or more classes are similar in terms of structure, function, behaviour, or purpose.

31 Volatility: volatility measures the likelihood that a change will occur.

32 Class-Oriented Metrics – The CK Metrics Suite
Chidamber and Kemerer have proposed one of the most widely referenced sets of OO software metrics.
1) Weighted Methods per Class (WMC): assume that n methods of complexity c1, c2, c3, …, cn are defined for a class C. The specific complexity metric chosen (e.g. cyclomatic complexity) should be normalized so that the nominal complexity for a method takes on a value of 1.0. Then
WMC = ∑ ci, for i = 1 to n
The number of methods and their complexity are reasonable indicators of the amount of effort required to implement and test a class. In addition, the larger the number of methods, the greater their potential impact on the inheritance tree, since all subclasses inherit the methods of their parent. So WMC should be kept as low as is reasonable.
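WMC is just the sum of the normalized per-method complexities. A minimal sketch with hypothetical cyclomatic-complexity values:

```python
def wmc(complexities):
    """Weighted Methods per Class: sum of the normalized method complexities.
    With all methods at nominal complexity 1.0, WMC reduces to the method count."""
    return sum(complexities)

# hypothetical class: five methods, normalized cyclomatic complexities
method_complexities = [1.0, 1.0, 3.0, 2.0, 1.0]
print(wmc(method_complexities))  # 8.0
```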

33 2) Depth of Inheritance Tree (DIT): this metric is "the maximum length from the node to the root of the tree." Referring to fig. 23.5, the value of DIT for the class hierarchy shown is 4. A deep class hierarchy leads to greater design complexity.
3) Number of Children (NOC): the subclasses that are immediately subordinate to a class in the class hierarchy are termed its children. Referring to fig. 23.5, class C2 has three children: C21, C22, and C23. As NOC grows, reuse increases, but the amount of testing required also increases.
4) Coupling Between Object Classes (CBO): CBO is the number of collaborations listed for a class on its CRC index card. As CBO increases, the reusability of the class decreases.

34 5) Response for a Class (RFC): the response set of a class is "a set of methods that can potentially be executed in response to a message received by an object of that class." RFC is the number of methods in the response set. As RFC increases, the effort required for testing also increases.
6) Lack of Cohesion in Methods (LCOM): LCOM is the number of methods that access one or more of the same attributes. If no methods access the same attributes, then LCOM = 0. If LCOM is high, the complexity of the class design increases.

35 Class-Oriented Metrics – The MOOD Metrics Suite
A sampling of MOOD metrics follows.
1) Method Inheritance Factor (MIF): the degree to which the class architecture of an OO system makes use of inheritance for both methods and attributes is defined as
MIF = ∑ Mi(Ci) / ∑ Ma(Ci)
where the summations occur over i = 1 to TC. TC is defined as the total number of classes in the architecture, Ci is a class within the architecture, and
Ma(Ci) = Md(Ci) + Mi(Ci)

36 where
Ma(Ci) = the number of methods that can be invoked in association with Ci
Md(Ci) = the number of methods declared in the class Ci
Mi(Ci) = the number of methods inherited (and not overridden) in Ci
2) Coupling Factor (CF): the MOOD metrics suite defines coupling in the following way:
CF = [∑i ∑j is_client(Ci, Cj)] / (TC² − TC)
where the summations occur over i, j = 1 to TC. The function

37 is_client = 1, if and only if a relationship exists between the client class Cc and the server class Cs, and Cc ≠ Cs
is_client = 0, otherwise
As the value of CF increases, the complexity of the OO software also increases.
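The two MOOD metrics above can be sketched directly from their definitions. The three-class hierarchy below is hypothetical (class names invented for illustration):

```python
def mif(md, mi):
    """Method Inheritance Factor. md[c]: methods declared in class c;
    mi[c]: methods inherited (and not overridden) in c; Ma(c) = md[c] + mi[c]."""
    return sum(mi.values()) / sum(md[c] + mi[c] for c in md)

def cf(client_pairs, tc):
    """Coupling Factor: distinct client/server class pairs (client != server)
    divided by the number of possible pairs, TC^2 - TC."""
    return len(client_pairs) / (tc ** 2 - tc)

# hypothetical hierarchy: MotionSensor and DoorSensor inherit Sensor's 4 methods
md = {"Sensor": 4, "MotionSensor": 2, "DoorSensor": 3}
mi = {"Sensor": 0, "MotionSensor": 4, "DoorSensor": 4}
print(round(mif(md, mi), 3))            # 8/17 ~ 0.471
print(cf({("Panel", "Sensor")}, tc=3))  # one relationship among 3 classes: 1/6
```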

38 OO Metrics Proposed by Lorenz and Kidd
One example of the metrics proposed by Lorenz and Kidd is Class Size (CS). The overall size of a class can be determined using the following measures:
 The total number of operations (both inherited and private instance operations) that are encapsulated within the class.
 The number of attributes (both inherited and private instance attributes) that are encapsulated by the class.
Large values for CS indicate that a class may have too much responsibility. This reduces the reusability of the class and complicates implementation and testing.

39 Component-Level Design Metrics
Component-level design metrics include measures of the "three Cs": module cohesion, coupling, and complexity. They may be applied once a procedural design has been developed, and they are "glass box" in the sense that they require knowledge of the inner workings of the module under consideration.
Cohesion Metrics: these metrics are defined in terms of five concepts and measures:
 Data slice: a data slice is a backward walk through a module that looks for data values that affect the module location at which the walk began.

40  Data tokens: the variables defined for a module can be defined as the data tokens for the module.
 Glue tokens: this set of data tokens lies on one or more data slices.
 Superglue tokens: these data tokens are common to every data slice in a module.
 Stickiness: the relative stickiness of a glue token is directly proportional to the number of data slices that it binds.
Coupling Metrics: a metric for module coupling encompasses data and control flow coupling, global coupling, and environmental coupling. For data and control flow coupling,

41 di = number of input data parameters
ci = number of input control parameters
do = number of output data parameters
co = number of output control parameters
For global coupling,
gd = number of global variables used as data
gc = number of global variables used as control
For environmental coupling,
w = number of modules called (fan-out)
r = number of modules calling the module under consideration (fan-in)
Using these measures, a module coupling indicator mc is defined in the following way:

42 mc = k / M
where k is a proportionality constant and
M = di + (a × ci) + do + (b × co) + gd + (c × gc) + w + r
Values for k, a, b, and c must be derived empirically. In order to have the coupling metric move upward as the degree of coupling increases, a revised coupling metric may be defined as
C = 1 − mc
where the degree of coupling increases as the value of M increases.
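A sketch of the module coupling indicator follows. Note the slide states that k, a, b, and c must be derived empirically, so the default values of 1.0 and 2.0 below are placeholders, not prescribed constants; the parameter counts in the example are likewise hypothetical:

```python
def module_coupling_indicator(di, ci, do, co, gd, gc, w, r,
                              k=1.0, a=2.0, b=2.0, c=2.0):
    """mc = k / M, with M = di + a*ci + do + b*co + gd + c*gc + w + r.
    k, a, b, c are placeholder constants; they must be derived empirically."""
    m = di + a * ci + do + b * co + gd + c * gc + w + r
    return k / m

def revised_coupling(mc):
    """C = 1 - mc: moves upward as the degree of coupling increases."""
    return 1 - mc

# hypothetical module: 5 data-in, 1 control-in, 4 data-out, 1 control-out,
# no globals, calls 2 modules, called by 3  ->  M = 18
mc = module_coupling_indicator(di=5, ci=1, do=4, co=1, gd=0, gc=0, w=2, r=3)
print(round(revised_coupling(mc), 3))  # 1 - 1/18 ~ 0.944
```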

43 Complexity Metrics : A variety of software metrics can be computed to determine the complexity of program control flow. The most widely used complexity metric for computer software is cyclomatic complexity.

44 Operation-Oriented Metrics
Three simple metrics, proposed by Lorenz and Kidd, are appropriate:
1. Average operation size (OSavg): size can be determined by counting the number of lines of code or the number of messages sent by the operation. As the number of messages sent by a single operation increases, it is likely that responsibilities have not been well allocated within a class.
2. Operation complexity (OC): the complexity of an operation can be computed using any of the complexity metrics proposed for conventional software.
3. Average number of parameters per operation (NPavg): the larger the number of operation parameters, the more complex the collaboration between objects.

45 User Interface Design Metrics
Layout appropriateness (LA) is a design metric for human/computer interfaces. A typical GUI uses layout entities (icons, text, menus) to assist the user in completing tasks. The absolute and relative position of each layout entity, the frequency with which it is used, and the cost of transitioning from one layout entity to the next all contribute to the appropriateness of the interface. Web page metrics indicate that simple characteristics of the elements of the layout can also have a significant impact on the perceived quality of the GUI design.

46 Design Metrics For WebApps
Interface Metrics
Suggested Metric         | Description
Layout Complexity        | Number of distinct regions defined for an interface
Mouse Pick Effort        | Average number of mouse picks per function
Selection Complexity     | Average number of links that can be selected per page
Content Acquisition Time | Average number of words of text per page

47 Aesthetic Metrics
Suggested Metric | Description
Word Count       | Total number of words that appear on a page
Link Count       | Total links on a page
Color Count      | Total colors employed
Page Size        | Total bytes for the page as well as elements, graphics, and style sheets
Font Count       | Total fonts employed

48 Content Metrics
Suggested Metric     | Description
Page Wait            | Average time required for a page to download at different connection speeds
Graphics Complexity  | Average number of graphics media per page
Audio Complexity     | Average number of audio media per page
Video Complexity     | Average number of video media per page
Animation Complexity | Average number of animations per page

49 Navigation Metrics
Suggested Metric        | Description
Page-Linking Complexity | Number of links per page
Connectivity            | Total number of internal links, not including dynamically generated links
Connectivity Density    | Connectivity divided by page count

50 Metrics For Source Code
A set of primitive measures may be derived after code is generated, or estimated once the design is complete. The measures are:
1. n1 = number of distinct operators that appear in a program
2. n2 = number of distinct operands that appear in a program
3. N1 = total number of operator occurrences
4. N2 = total number of operand occurrences
These measures are used to develop expressions for the overall program length, the potential minimum volume for an algorithm, the actual volume, the program level, and other features such as development effort and development time.

51 The length N can be estimated as
N = n1 log2 n1 + n2 log2 n2
The program volume may be defined as
V = N log2 (n1 + n2)
A volume ratio L can be defined as the ratio of the volume of the most compact form of a program to the volume of the actual program:
L = (2 / n1) × (n2 / N2)
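These three Halstead expressions can be computed directly from the four primitive counts. A sketch with hypothetical counts (10 distinct operators, 20 distinct operands, 60 operand occurrences):

```python
import math

def halstead_length(n1, n2):
    """Estimated program length: N = n1 log2 n1 + n2 log2 n2."""
    return n1 * math.log2(n1) + n2 * math.log2(n2)

def halstead_volume(N, n1, n2):
    """Program volume: V = N log2(n1 + n2)."""
    return N * math.log2(n1 + n2)

def volume_ratio(n1, n2, N2):
    """L = (2 / n1) * (n2 / N2): most compact volume over actual volume."""
    return (2 / n1) * (n2 / N2)

print(round(halstead_length(10, 20), 2))  # 119.66
print(round(volume_ratio(10, 20, 60), 4)) # 0.0667
```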

52 Metrics For Testing
Metrics proposed for testing focus on the process of testing.
Halstead Metrics Applied to Testing. Using the definitions for program volume V and program level PL, Halstead effort e can be computed as
PL = 1 / ((n1 / 2) × (N2 / n2))
e = V / PL
The percentage of overall testing effort to be allocated to a module k can be estimated using the following relationship:
percentage of testing effort(k) = e(k) / ∑e(i)
where e(k) is computed for module k and ∑e(i) is the sum of Halstead effort across all modules of the system.

53 Metrics For Object-Oriented Testing
Lack of cohesion in methods (LCOM): the higher the value of LCOM, the more states must be tested to ensure that methods do not generate side effects.
Percent public and protected (PAP): this metric indicates the percentage of class attributes that are public or protected. High values for PAP increase the likelihood of side effects among classes, because public and protected attributes lead to a high potential for coupling.
Public access to data members (PAD): this metric indicates the number of classes that can access another class's attributes, a violation of encapsulation. High values for PAD lead to the potential for side effects among classes.

54 Number of root classes (NOR): a count of the distinct class hierarchies that are described in the design model. Test suites for each root class and the corresponding class hierarchy must be developed.
Fan-in (FIN): fan-in in the inheritance hierarchy is an indication of multiple inheritance. FIN > 1 indicates that a class inherits its attributes and operations from more than one root class.
Number of children (NOC) and depth of the inheritance tree (DIT): superclass methods will have to be retested for each subclass.

55 Metrics For Maintenance
The Software Maturity Index (SMI) indicates the stability of a software product. The following information is determined:
MT = number of modules in the current release
Fc = number of modules in the current release that have been changed
Fa = number of modules in the current release that have been added
Fd = number of modules from the preceding release that were deleted in the current release
The software maturity index is computed as
SMI = [MT − (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize.
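The SMI formula is a one-liner; the release statistics below are hypothetical:

```python
def software_maturity_index(mt, fc, fa, fd):
    """SMI = (MT - (Fa + Fc + Fd)) / MT; approaches 1.0 as the product stabilizes."""
    return (mt - (fa + fc + fd)) / mt

# hypothetical release: 940 modules; 90 changed, 40 added, 12 deleted since last release
print(round(software_maturity_index(940, 90, 40, 12), 3))  # 0.849
```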

