1 Software Engineering: Software Metrics
James Gain (jgain@cs.uct.ac.za)
http://people.cs.uct.ac.za/~jgain/courses/SoftEng/

2 Objectives
- Introduce the necessity for software metrics
- Differentiate between process, project and product metrics
- Compare and contrast Lines-of-Code (LOC) and Function Point (FP) metrics
- Consider how quality is measured in Software Engineering
- Describe statistical process control for managing variation between projects

3 Measurement & Metrics
Against: collecting metrics is too hard... it's too time consuming... it's too political... they can be used against individuals... it won't prove anything.
For: in order to characterize, evaluate, predict and improve the process and product, a metric baseline is essential.
"Anything that you need to quantify can be measured in some way that is superior to not measuring it at all" - Tom Gilb

4 Terminology
- Measure: a quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process. A single data point.
- Metric: the degree to which a system, component, or process possesses a given attribute. Relates several measures (e.g. average number of errors found per person-hour).
- Indicators: a combination of metrics that provides insight into the software process, project or product.
- Direct metrics: immediately measurable attributes (e.g. lines of code, execution speed, defects reported).
- Indirect metrics: aspects that are not immediately quantifiable (e.g. functionality, quality, reliability).
- Faults:
  - Errors: faults found by the practitioners during software development.
  - Defects: faults found by the customers after release.

5 A Good Manager Measures
[Diagram: measurement is applied to the process (yielding process metrics) and to the product (yielding project and product metrics); the underlying question is what to use as a basis — size or function.]
"Not everything that can be counted counts, and not everything that counts can be counted." - Einstein

6 Process Metrics
- Focus on quality achieved as a consequence of a repeatable or managed process. Strategic and long term.
- Statistical Software Process Improvement (SSPI) relies on error categorization and analysis:
  - All errors and defects are categorized by origin
  - The cost to correct each error and defect is recorded
  - The number of errors and defects in each category is computed
  - Data is analyzed to find the categories that cost the organization the most
  - Plans are developed to modify the process
- Defect Removal Efficiency (DRE) relates errors (E), found before release, to defects (D), found after release. The ideal is a DRE of 1:
  DRE = E / (E + D)
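
A minimal sketch of the DRE computation under the SSPI categorization. The category names and counts are hypothetical, purely for illustration:

```python
def dre(errors: int, defects: int) -> float:
    """DRE = E / (E + D): the fraction of all faults removed before release."""
    total = errors + defects
    return errors / total if total else 1.0

# Hypothetical per-origin counts: (errors found during development,
# defects reported by customers after release).
categories = {"specification": (42, 8), "design": (25, 3), "coding": (68, 4)}

for origin, (e, d) in categories.items():
    print(f"{origin:>13}: DRE = {dre(e, d):.2f}")
```

A DRE noticeably below 1 in one category flags that category's review and testing activities as candidates for process change.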

7 Project Metrics
- Used by a project manager and software team to adapt project work flow and technical activities. Tactical and short term.
- Purpose:
  - Minimize the development schedule by making the necessary adjustments to avoid delays and mitigate problems
  - Assess product quality on an ongoing basis
- Metrics:
  - Effort or time per SE task
  - Errors uncovered per review hour
  - Scheduled vs. actual milestone dates
  - Number of changes and their characteristics
  - Distribution of effort on SE tasks

8 Product Metrics
- Focus on the quality of deliverables
- Product metrics are combined across several projects to produce process metrics
- Metrics for the product:
  - Measures of the analysis model
  - Complexity of the design model:
    1. Internal algorithmic complexity
    2. Architectural complexity
    3. Data flow complexity
  - Code metrics

9 Metrics Guidelines
- Use common sense and organizational sensitivity when interpreting metrics data
- Provide regular feedback to the individuals and teams who have worked to collect measures and metrics
- Don't use metrics to appraise individuals
- Work with practitioners and teams to set clear goals, and metrics that will be used to achieve them
- Never use metrics to threaten individuals or teams
- Metrics data that indicate a problem area should not be considered "negative": such data are merely an indicator for process improvement
- Don't obsess on a single metric to the exclusion of other important metrics

10 Normalization for Metrics
- How does an organization combine metrics that come from different individuals or projects?
- Metrics depend on the size and complexity of the project
- Normalization: compensate for complexity aspects particular to a product
- Normalization approaches:
  - Size-oriented (lines-of-code approach)
  - Function-oriented (function-point approach)

11 Typical Normalized Metrics
- Size-oriented: errors per KLOC (thousand lines of code), defects per KLOC, R per LOC, pages of documentation per KLOC, errors per person-month, LOC per person-month, R per page of documentation
- Function-oriented: errors per FP, defects per FP, R per FP, pages of documentation per FP, FP per person-month

Project   LOC      FP    Effort (P/M)   R(000)   Pp. doc   Errors   Defects   People
alpha     12,100   189        24          168       365      134       29        3
beta      27,200   388        62          440     1,224      321       86        5
gamma     20,200   631        43          314     1,050      256       64        6
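
As a quick sketch of how these normalizations are computed, the snippet below derives a few size- and function-oriented metrics from the "alpha" row of the table above:

```python
# Values taken from project alpha in the table above.
loc, fp, effort_pm, errors, defects = 12_100, 189, 24, 134, 29

kloc = loc / 1000
print(f"errors per KLOC:      {errors / kloc:.1f}")    # ~11.1
print(f"defects per KLOC:     {defects / kloc:.1f}")   # ~2.4
print(f"LOC per person-month: {loc / effort_pm:.0f}")  # ~504
print(f"errors per FP:        {errors / fp:.2f}")      # ~0.71
print(f"FP per person-month:  {fp / effort_pm:.1f}")   # ~7.9
```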

12 Why Opt for FP Measures?
- Independent of programming language: some programming languages are more compact than others (e.g. C++ vs. Assembler)
- Use readily countable characteristics of the "information domain" of the problem
- Do not "penalize" inventive implementations that require fewer LOC than others
- Make it easier to accommodate reuse and object-oriented approaches
- The original FP approach is good for typical Information Systems applications (interaction complexity)
- Variants (Extended FP and 3D FP) are more suitable for real-time and scientific software (algorithm and state transition complexity)

13 Computing Function Points
1. Analyze the information domain of the application and develop counts: establish counts for the input domain and system interfaces.
2. Weight each count by assessing complexity: assign a level of complexity (simple, average, complex) or weight to each count.
3. Assess the influence of global factors that affect the application: grade the significance of external factors F_i, such as reuse, concurrency, OS, ...
4. Compute function points:
   FP = SUM(count x weight) x C
   where the complexity multiplier C = 0.65 + 0.01 x N and the degree of influence N = SUM(F_i).
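
A minimal sketch of this procedure, using the standard simple/average/complex weights (the same weighting table appears in the worked SafeHome example on slide 18):

```python
# Standard information-domain weights: (simple, average, complex).
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}
LEVELS = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, levels, f_values):
    """FP = SUM(count x weight) x (0.65 + 0.01 x N), with N = SUM(F_i)."""
    count_total = sum(
        n * WEIGHTS[param][LEVELS[levels[param]]] for param, n in counts.items()
    )
    n = sum(f_values)  # degree of influence N: 0..70 across the 14 factors
    return count_total * (0.65 + 0.01 * n)
```

For example, the exercise on slide 16 (all counts average, all fourteen F_i rated 0) becomes function_points({"inputs": 32, "outputs": 60, "inquiries": 24, "files": 8, "interfaces": 2}, dict.fromkeys(WEIGHTS, "average"), [0] * 14).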

14 Analyzing the Information Domain
[Figure: form for tallying the information domain counts and weights; a worked version for SafeHome appears on slide 18.]

15 Taking Complexity into Account
- Complexity adjustment values (F_i) are rated on a scale of 0 (not important) to 5 (very important):
  1. Does the system require reliable backup and recovery?
  2. Are data communications required?
  3. Are there distributed processing functions?
  4. Is performance critical?
  5. Will the system run in an existing, heavily utilized operational environment?
  6. Does the system require on-line data entry?
  7. Does on-line data entry require input over multiple screens or operations?
  8. Are the master files updated on-line?
  9. Are the inputs, outputs, files, or inquiries complex?
  10. Is the internal processing complex?
  11. Is the code designed to be reusable?
  12. Are conversion and installation included in the design?
  13. Is the system designed for multiple installations in different organizations?
  14. Is the application designed to facilitate change and ease of use by the user?

16 Exercise: Function Points
- Compute the function point value for a project with the following information domain characteristics:
  - Number of user inputs: 32
  - Number of user outputs: 60
  - Number of user enquiries: 24
  - Number of files: 8
  - Number of external interfaces: 2
  Assume that weights are average and external complexity adjustment values are not important.
- Answer: count-total = 32x4 + 60x5 + 24x4 + 8x10 + 2x7 = 128 + 300 + 96 + 80 + 14 = 618. With all fourteen adjustment values rated 0, N = 0, so C = 0.65 + 0.01x0 = 0.65 and FP = 618 x 0.65 = 401.7, roughly 402.

17 Example: SafeHome Functionality
[Data flow diagram: the user sends Password, Panic Button, (De)activate, Zone Inquiry and Sensor Inquiry to the SafeHome System; the system returns Messages, Sensor Status, Zone Setting and (De)activate to the user, raises an Alarm Alert to the Monitor and Response System, exchanges Test Sensor signals with the Sensors, and stores Password, Sensors, etc. in the System Config Data store.]

18 Example: SafeHome FP Calc

measurement parameter       count     weighting factor            count-total
                                      (simple / avg. / complex)
number of user inputs         3   x    3 /  4 /  6            =        9
number of user outputs        2   x    4 /  5 /  7            =        8
number of user inquiries      2   x    3 /  4 /  6            =        6
number of files               1   x    7 / 10 / 15            =        7
number of ext. interfaces     4   x    5 /  7 / 10            =       22
count-total                                                           52
complexity multiplier                                              x 1.11
function points                                                       58

19 Exercise: Function Points
- Compute the function point total for your project. Hint: the complexity adjustment values should be low.
- Some appropriate complexity factors are (each scored 0-5):
  1. Is performance critical?
  2. Does the system require on-line data entry?
  3. Does on-line data entry require input over multiple screens or operations?
  4. Are the inputs, outputs, files, or inquiries complex?
  5. Is the internal processing complex?
  6. Is the code designed to be reusable?
  7. Is the application designed to facilitate change and ease of use?

20 OO Metrics: Distinguishing Characteristics
- The following characteristics require that special OO metrics be developed:
  - Encapsulation: concentrate on classes rather than functions
  - Information hiding: an information-hiding metric provides an indication of quality
  - Inheritance: a pivotal indication of complexity
  - Abstraction: metrics need to measure a class at different levels of abstraction and from different viewpoints
- Conclusion: the class is the fundamental unit of measurement

21 OO Project Metrics
- Number of Scenario Scripts (use cases):
  - The number of use cases is directly proportional to the number of classes needed to meet requirements
  - A strong indicator of program size
- Number of Key Classes (class diagram):
  - A key class focuses directly on the problem domain
  - NOT likely to be implemented via reuse
  - Typically 20-40% of all classes are key; the rest support infrastructure (e.g. GUI, communications, databases)
- Number of Subsystems (package diagram):
  - Provides insight into resource allocation, scheduling for parallel development and overall integration effort

22 OO Analysis and Design Metrics
- Related to analysis and design principles
- Complexity:
  - Weighted Methods per Class (WMC): assume that n methods with cyclomatic complexities c_1, ..., c_n are defined for a class C; then WMC = c_1 + c_2 + ... + c_n (see the sketch below)
  - Depth of the Inheritance Tree (DIT): the maximum length from a leaf to the root of the tree. A large DIT leads to greater design complexity but promotes reuse
  - Number of Children (NOC): the total number of immediate children of each class. A large NOC may dilute abstraction and increase testing effort
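
A small sketch of WMC under the formula above, assuming hypothetical per-method cyclomatic complexities (in practice a static-analysis tool would supply the real c_i values):

```python
# Hypothetical cyclomatic complexities c_i for the n methods of a class C.
method_complexities = {"arm": 3, "disarm": 2, "poll_sensors": 5, "report": 1}

# Weighted Methods per Class: WMC = c_1 + c_2 + ... + c_n.
wmc = sum(method_complexities.values())
print(f"WMC = {wmc}")  # -> 11
```

If every method is assigned complexity 1, WMC reduces to a plain method count; the weighted form distinguishes a class with a few intricate methods from one with many trivial ones.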

23 Further OOA&D Metrics
- Coupling:
  - Coupling Between Object classes (CBO): the total number of collaborations listed for each class in its CRC card. Keep CBO low, because high values complicate modification and testing
  - Response For a Class (RFC): the set of methods potentially executed in response to a message received by a class. A high RFC implies test and design complexity
- Cohesion:
  - Lack of Cohesion in Methods (LCOM): the number of methods in a class that access one or more of the same attributes. A high LCOM means methods are tightly coupled to one another through attributes (a rough counting sketch follows)
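
LCOM has several competing formal definitions; the sketch below simply follows the informal one given above, counting methods that touch at least one attribute also touched by another method, given a hypothetical method-to-attribute map:

```python
# Hypothetical map from each method of a class to the attributes it accesses.
methods = {
    "arm":    {"alarm_state", "sensors"},
    "disarm": {"alarm_state"},
    "poll":   {"sensors"},
    "render": {"display_buffer"},   # shares no attributes with the others
}

def lcom(attr_map):
    """Count methods sharing at least one attribute with another method."""
    count = 0
    for name, attrs in attr_map.items():
        others = set().union(*(a for m, a in attr_map.items() if m != name))
        count += bool(attrs & others)
    return count

print("LCOM =", lcom(methods))  # arm, disarm and poll overlap -> 3
```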

24 OO Testability Metrics
- Encapsulation:
  - Percent Public and Protected (PAP): the percentage of attributes that are public or protected. Such attributes can be inherited and accessed externally; a high PAP means more potential side effects
  - Public Access to Data members (PAD): the number of classes that access another class's attributes. Violates encapsulation
- Inheritance:
  - Number of Root Classes (NRC): the count of distinct class hierarchies. Each must be tested separately
  - Fan In (FIN): the number of superclasses associated with a class. FIN > 1 indicates multiple inheritance, which should be avoided
  - Number of Children (NOC) and Depth of Inheritance Tree (DIT): superclasses need to be retested for each subclass

25 Quality Metrics
- Quality measures conformance to explicit requirements, adherence to specified standards, and satisfaction of implicit requirements
- Software quality can be difficult to measure and is often highly subjective
1. Correctness:
  - The degree to which a program operates according to specification
  - Metric = defects per FP
2. Maintainability:
  - The degree to which a program is amenable to change
  - Metric = Mean Time to Change: the average time taken to analyze, design, implement and distribute a change

26 Quality Metrics: Further Measures
3. Integrity:
  - The degree to which a program is impervious to outside attack
  - Metric: integrity = SUM_i [ 1 - t_i x (1 - s_i) ], summed over all types of security attack i, where t_i = threat (the probability that an attack of type i will occur within a given time) and s_i = security (the probability that an attack of type i will be repelled)
4. Usability:
  - The degree to which a program is easy to use
  - Metric = (1) the skill required to learn the system, (2) the time required to become moderately proficient, (3) the net increase in productivity, (4) an assessment of the user's attitude to the system
  - Covered in the HCI course
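
A worked sketch of the integrity formula, with hypothetical threat and security probabilities for two attack types:

```python
# (t_i, s_i) per attack type: t = probability the attack occurs within a
# given time window, s = probability it is repelled. Values are illustrative.
attacks = [
    (0.25, 0.95),   # e.g. password guessing
    (0.10, 0.99),   # e.g. spoofed sensor traffic
]

integrity = sum(1 - t * (1 - s) for t, s in attacks)
print(f"integrity = {integrity:.4f} out of {len(attacks)}")  # 1.9865 out of 2
```

Each term approaches 1 as an attack becomes unlikely (t near 0) or reliably repelled (s near 1), so the sum approaches the number of attack types for a highly secure system.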

27 Quality Metrics: McCall’s Approach PRODUCT TRANSITION PRODUCT TRANSITION PRODUCT REVISION PRODUCT REVISION PRODUCT OPERATION PRODUCT OPERATION Correctness Reliability Usability Integrity Efficiency Maintainability Flexibility Testability Portability Reusability Interoperability McCall’s Triangle of Quality

28 Quality Metrics: Deriving McCall's Quality Metrics
- Assess a set of quality factors on a scale of 0 (low) to 10 (high)
- Each of McCall's quality metrics is a weighted sum of different quality factors
- Weighting is determined by product requirements
- Example:
  - Correctness = Completeness + Consistency + Traceability
  - Completeness is the degree to which full implementation of required function has been achieved
  - Consistency is the use of uniform design and documentation techniques
  - Traceability is the ability to trace program components back to analysis
- This technique depends on good objective evaluators, because quality factor scores can be subjective
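
A minimal sketch of the weighted-sum derivation, with illustrative evaluator scores and weights (in practice both would come from the product requirements):

```python
# Quality factor scores on the 0 (low) to 10 (high) scale, from evaluators.
scores  = {"completeness": 8, "consistency": 6, "traceability": 7}
# Requirement-driven weights; chosen to sum to 1 so the result stays on 0-10.
weights = {"completeness": 0.5, "consistency": 0.25, "traceability": 0.25}

correctness = sum(weights[f] * scores[f] for f in scores)
print(f"correctness = {correctness:.2f}")  # -> 7.25
```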

29 Managing Variation
- How can we determine whether metrics collected over a series of projects improve (or degrade) as a consequence of changes to the process, rather than noise?
- Statistical Process Control:
  - Analyzes the dispersion (variability) and location (moving average) of the metric
  - Determines whether metrics are: (a) stable (the process exhibits only natural or controlled changes) or (b) unstable (the process exhibits out-of-control changes, and metrics cannot be used to predict changes)

30 Control Chart
- Compare sequences of metric values against the mean and standard deviation; e.g. a metric is unstable if eight consecutive values lie on one side of the mean.
[Control chart: individual metric values plotted in sequence against three horizontal lines at the mean, mean - std. dev. and mean + std. dev.]
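
A sketch of the quoted run rule, flagging a metric sequence as unstable when eight consecutive values fall on one side of the mean (the sample data are invented):

```python
from statistics import mean

def unstable(values, run=8):
    """True if `run` consecutive values lie strictly on one side of the mean."""
    m = mean(values)
    streak, side = 0, 0
    for v in values:
        s = (v > m) - (v < m)          # +1 above, -1 below, 0 on the mean
        streak = streak + 1 if s and s == side else (1 if s else 0)
        side = s
        if streak >= run:
            return True
    return False

errors_per_review_hour = [1.2, 1.3, 1.2, 1.4, 1.2, 1.5, 1.3, 1.4, 0.4, 0.5]
print("unstable:", unstable(errors_per_review_hour))  # -> True
```

A full control chart would also test values against the standard-deviation lines shown above; this run rule alone already catches a sustained shift in the process.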

