
By: TARUN MEHROTRA 12MCMB11

• More time is spent maintaining existing software than developing new code; resources devoted to maintenance are roughly three times those devoted to development (Resources in M = 3 × Resources in D).
• Metrics should therefore capture: 1) the difficulty a programmer experiences in understanding a program, and 2) the speed and accuracy with which modifications can be made.

• In 1972 M. Halstead published his theory of Software Science. According to this theory, the amount of effort required to generate a program can be derived from four quantities:
1) the number of distinct operators (n1),
2) the number of distinct operands (n2),
3) the total occurrences of operators (N1), and
4) the total occurrences of operands (N2).
From these four quantities Halstead calculates the number of mental comparisons required to generate a program.
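As an illustration, here is a minimal sketch of how the four base quantities might be tallied, assuming a toy expression language with a naive whitespace tokenizer and a simplified operator set (real Halstead counting rules for Fortran are considerably more involved):

from collections import Counter
from math import log2

# Hypothetical, simplified operator set for a toy expression language.
OPERATORS = {"+", "-", "*", "/", "=", "(", ")", "**"}

def halstead_base_quantities(tokens):
    """Tally Halstead's four base quantities from a token stream."""
    ops = Counter(t for t in tokens if t in OPERATORS)        # operator occurrences
    opnds = Counter(t for t in tokens if t not in OPERATORS)  # operand occurrences
    n1, n2 = len(ops), len(opnds)                    # distinct operators, operands
    N1, N2 = sum(ops.values()), sum(opnds.values())  # total occurrences of each
    return n1, n2, N1, N2

# Toy statement: X = A * B + C
n1, n2, N1, N2 = halstead_base_quantities("X = A * B + C".split())
vocabulary = n1 + n2                      # n
length = N1 + N2                          # N
volume = length * log2(vocabulary)        # V = N * log2(n)
print(n1, n2, N1, N2, round(volume, 2))   # 3 4 3 4 19.65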

• More recently, T. McCabe developed a definition of complexity based on the decision structure of a program.
• Simply stated, McCabe's metric counts the number of basic control-path segments which, when combined, will generate every possible path through a program.
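For illustration, a minimal sketch of computing v(G) from a control-flow graph, assuming the graph is supplied as an adjacency list (the graph below is a hypothetical two-predicate example, not one of the experiments' programs):

def cyclomatic_complexity(cfg, components=1):
    # v(G) = e - n + 2p, where e = edges, n = nodes,
    # p = connected components of the control-flow graph.
    n = len(cfg)
    e = sum(len(successors) for successors in cfg.values())
    return e - n + 2 * components

# Hypothetical CFG: two binary predicates, paths rejoin at the exit.
cfg = {
    "entry": ["then", "else"],   # predicate 1
    "then":  ["test2"],
    "else":  ["test2"],
    "test2": ["body", "exit"],   # predicate 2
    "body":  ["exit"],
    "exit":  [],
}
print(cyclomatic_complexity(cfg))  # 7 - 6 + 2 = 3, i.e., two predicates + 1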

• No exact mathematical relationship exists between the two metrics, since no causal relationship exists between the number of operators and operands and the number of control paths.
• Yet as the number of control paths increases, an increase in the number of operators and operands would be anticipated, so a significant correlation would not be surprising.

There are two different kinds of complexity to be assessed:
• Computational complexity refers to the quantitative aspects of solutions to computational problems, e.g., comparing the efficiency of alternative algorithmic solutions.
• Psychological complexity refers to characteristics of software which make it difficult to understand and work with.
• No simple relationship is expected between the two.

• This report investigates the extent to which the Halstead and McCabe metrics assess the psychological complexity of understanding and modifying software.
• The two experiments reported were designed to investigate factors which influence:
-> understanding an existing program (Experiment 1), and
-> accurately implementing modifications to it (Experiment 2).

1) Participants
• Each experiment involved 36 programmers.
• All had a working knowledge of Fortran.
2) Procedure
Introductory exercise:
• Materials were prepared for each participant with written instructions on the experimental task.
• All were presented with the same short Fortran program (Euclid's algorithm).

3) Independent Variables
• Program class: in Experiment 1, nine programs were chosen, varying from 36 to 57 statements; in Experiment 2, three of the nine programs from Experiment 1 were used.
• Complexity of control flow: control-flow structures at three levels of complexity were defined for each program.

• Variable name mnemonicity: in Experiment 1, three levels of mnemonicity for variable names were manipulated independently of program structure.
• Comments: in Experiment 2, three levels of commenting were used:
1) Global: comments appearing at the front of the program.
2) In-line: comments placed throughout the program.
3) None.

• Modifications: in Experiment 2, three types of modification were selected for each program.
• Experimental design: Experiment 1 crossed nine programs with three types of control flow and three levels of variable mnemonicity, giving 9 × 3 × 3 = 81 program versions. Experiment 2 crossed three programs with three types of control flow, three levels of commenting, and modifications at three levels of difficulty, giving 3 × 3 × 3 × 3 = 81 conditions; a sketch enumerating both designs follows.
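A quick sketch of how the 81 cells of each factorial design can be enumerated (the level labels below are placeholders, not the labels used in the study):

from itertools import product

# Experiment 1: 9 programs x 3 control-flow levels x 3 mnemonicity levels.
exp1 = list(product(range(1, 10),
                    ["cf-low", "cf-medium", "cf-high"],
                    ["mnemonic-high", "mnemonic-medium", "mnemonic-low"]))
print(len(exp1))   # 81 program versions

# Experiment 2: 3 programs x 3 control-flow levels x 3 comment levels
#               x 3 modification difficulties.
exp2 = list(product(range(1, 4),
                    ["cf-low", "cf-medium", "cf-high"],
                    ["global", "in-line", "none"],
                    ["mod-easy", "mod-medium", "mod-hard"]))
print(len(exp2))   # 81 experimental conditions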

• Complexity Measures
-> Halstead's effort metric (E): E = V / L, where the volume V = (N1 + N2) · log2(n1 + n2) and the estimated program level L = (2 · n2) / (n1 · N2), so that E = (n1 · N2 · (N1 + N2) · log2(n1 + n2)) / (2 · n2).

-> McCabe's cyclomatic complexity v(G): v(G) = e − n + 2p, where e is the number of edges, n the number of nodes, and p the number of connected components of the program's control-flow graph; for a single routine this equals the number of decision predicates plus one.
-> Length: total number of Fortran statements, excluding comments.
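For concreteness, a worked example with hypothetical counts (illustrative numbers only, not data from the experiments): with n1 = 10, n2 = 15, N1 = 40, N2 = 35, the volume is V = 75 · log2(25) ≈ 75 × 4.64 ≈ 348.3, and the effort is E = (10 × 35 × 348.3) / (2 × 15) ≈ 4063. For McCabe's metric, a single routine containing two IF predicates and one DO loop would have v(G) = 3 + 1 = 4.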

4) Dependent Variables
-> In Experiment 1: the percent of statements correctly recalled.
-> In Experiment 2: 1) the accuracy of the implemented modification, and 2) the time taken by the participant to perform the task.
The performance measures were therefore:
1) the percent of changes correctly implemented in the program, and
2) the number of minutes required to complete the changes.

1. Experimental Manipulations
2. Distributional Information on Complexity Measures
3. Correlations with Performance
4. Moderator Effects

Experimental Manipulations
Experiment 1:
• A mean of 51 percent of the statements was correctly recalled across the experimental tasks.
• Performance on naturally structured programs was superior to that on unstructured programs.
• Differences in the mnemonicity of variable names had no effect on performance.

Experiment 2:
• An average accuracy score of 62 percent was achieved over all implemented modifications.
• The average time to complete a modification was 17.9 minutes.
• Accuracy and time were uncorrelated.

Distributional Information on Complexity Measures:

• Analysis: Substantial intercorrelations were observed among the complexity metrics in both experiments.
Experiment 1: Halstead's E and McCabe's v(G) were strongly related, while their correlations with length were moderate.
Experiment 2: all three measures were strongly correlated on both the unmodified and the modified programs.
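A sketch of how such intercorrelations can be computed once the three metrics have been collected per program (the values below are hypothetical placeholders, not the study's data):

import numpy as np

# Hypothetical per-program metric values (placeholders only).
effort = np.array([2100.0, 3400.0, 4800.0, 5200.0, 6100.0])  # Halstead E
vg     = np.array([3.0, 5.0, 6.0, 8.0, 9.0])                 # McCabe v(G)
length = np.array([36.0, 41.0, 45.0, 50.0, 57.0])            # statement counts

# Pearson correlation matrix; rows are variables, columns are programs.
print(np.corrcoef(np.vstack([effort, vg, length])).round(2))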

Correlations with Performance (Experiment 1)

Analysis:
• These correlations were all negative, indicating that fewer lines were correctly recalled as the level of complexity represented by these three measures increased.
• Little difference was observed between the correlations in the aggregated and unaggregated data.

• There were three data points (circled in Fig. 1) which were produced by averaging across three participants who consistently outscored the others.
• The high scores on these three data points resulted from a failure of random assignment.
• With the three data points of this exceptional group removed, the correlations for all three complexity metrics improved (third row of Table III).

• Correlations of Complexity Metrics with Accuracy and Time in Experiment 2

Analysis:
• The correlations computed from the aggregated data were slightly larger than those computed from the unaggregated data.
• The complexity metrics were generally more strongly correlated with time to completion than with the accuracy of the implementation, especially on the modified programs.
• The largest number of significant correlations was observed for metrics computed from the modified programs.

• Moderator Effects

Analysis:
• In Experiment 1, Halstead's E and McCabe's v(G) correlated significantly with performance only on unstructured programs.
• A similar pattern of correlations emerged in Experiment 2 between Halstead's E and time to complete the modification.

Broader Difference

Analysis:
• Differences in correlations between the in-line-comment and no-comment conditions either achieved or bordered on significance in all cases.

Analysis:
• Finally, relationships between complexity metrics and performance measures were moderated by the participants' years of professional programming experience.
• As is evident in Table VII, the complexity metrics were more strongly related to performance among less experienced programmers in all cases.

THANK YOU