
1 Software Release Readiness Metric: ShipIT [0,1]
Piyush Ranjan Satapathy
Department of Computer Science & Engineering, University of California, Riverside

2 Outline
Why a Release Metric?
Stages & Factors Considered
Metrics Considered
Formulation of ShipIT
Justification & Elaboration of the Formula
Validation of the Formula
Conclusion
References
Q & A

3 Why a Release Metric?
Cost: penalty for late delivery?
Confidence: accurate estimate? Complexity?
Quality: danger of breaking something else? How much testing is needed?
Schedule: holidays? Vacation? Customer availability? Hard deadline?
Relationship: unhappy customer? Reference site?
Workaround: manual? Send staff on site?
Quantify: how much loss? Per second?
Accuracy: performance test environment?
Alternatives: optional download? Automatic download?
Support: ease of download? Increased support load?
Competition: market leader? Slipping?
Usability: lose customers if slow?

4 Phases or Factors Considered
Requirement, Analysis & Design (RAD) Phase
Coding or Implementation Phase
Testing Phase
Quality Assurance Phase
Manuals & Documentation
Early Deployment
Early Support

5 Metrics Considered
1. Requirement, Analysis & Design Phase
 - Planned vs. implemented features
 - List of unimplemented features
 - List of new features still arriving
2. Coding Phase
 2.1 Creating source
 - System modules (implemented vs. total planned)
 - Application modules (planned vs. implemented)
 - GUI modules (planned vs. implemented)
 2.2 Creating object (an alternative view)
 - KSLOC (thousands of source lines of code)
 - Number of function points (captures complexity)
 - Known anomalies in the code
 2.3 Building process
 - Compilation time
 - Number of warnings during compilation
 - Incremental build time per target platform
 - Incremental build time per compiler

6 Metrics Considered (continued)
3. Testing Phase
 3.1 Testing (finding bugs)
 - Unit testing (planned vs. completed L1 test cases)
 - Integration testing (planned vs. completed L2 test cases)
 - System testing (planned vs. completed L3 test cases)
 3.2 Debugging
 - Number of open issues
 - Line coverage
4. Quality Assurance (regression testing)
 - Total number of test hours planned
 - Faults debugged to date
 - Acceptable number of remaining faults
5. Manuals & Documentation
 - Requirements documentation (% completed)
 - Design documentation (% completed)
 - Implementation and usability documentation (% completed)
 - Test plan documentation (% completed)
 - User guide documentation (% completed)

7 Metrics Considered (continued)
6. Supervision (Early Deployment)
 6.1 Installation process
 - Distribution of software (planned vs. completed)
 - Installation of software (planned vs. completed)
 - Acceptance testing (planned vs. executed test cases)
 6.2 Training process
 - Developing training materials (% completed)
 - Validating the training program (% completed)
 - Implementing the training program (% completed)
7. Support (Early Customer Feedback)
 - Handling beta-customer bugs
 - Reapplying the software development cycle
 - Major metrics: Maintainability Index desired vs. Maintainability Index reached
Most of these metrics reduce to planned-versus-completed ratios; a minimal sketch of that normalization follows.
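As a sketch only (the helper name and the sample numbers are invented for illustration, not taken from the slides), a planned-vs.-completed count could be normalized into a [0,1] contribution like this:

```python
def completion_ratio(completed, planned):
    """Normalize a planned-vs-completed count into [0, 1]."""
    if planned <= 0:
        return 1.0  # nothing was planned, so this item cannot hold up the release
    return min(completed / planned, 1.0)  # cap at 1.0 if more was done than planned

# Example: 180 of 200 planned L2 (integration) test cases executed
print(completion_ratio(180, 200))  # 0.9
```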

8 Formulation of 'ShipIT'
ShipIT = [(W_RAD x RAD) + (W_CODE x CODE) + (W_TEST x TEST) + (W_QA x QA) + (W_MD x MD) + (W_SV x SV) + (W_SP x SP)] / 100
where RAD is the factor of contribution toward completion of software development from the Requirements, Analysis & Design stage, and CODE, TEST, QA, MD, SV and SP are defined analogously for the remaining phases.
W_RAD, W_CODE, W_TEST, W_QA, W_MD, W_SV and W_SP all lie in [0, 100] and sum to 100, so ShipIT itself lies in [0, 1].
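A minimal sketch of this top-level combination, assuming each phase factor has already been computed in [0,1] and the weights sum to 100 as stated above (the function itself is hypothetical, not from the slides):

```python
def ship_it(factors, weights):
    """Weighted average of the seven phase factors.

    factors: dict mapping phase name -> contribution in [0, 1]
    weights: dict mapping phase name -> weight in [0, 100]; weights must sum to 100
    """
    assert abs(sum(weights.values()) - 100) < 1e-9, "weights must sum to 100"
    return sum(weights[name] * factors[name] for name in weights) / 100.0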

9 Justification & Elaboration (1)
Assumptions:
1. A strict waterfall model (no coding or testing begins until requirements analysis is done).
2. The target is a major release only.
3. Release happens after deploying at the customer's site and reaching the desired Maintainability Index.
4. New requirements are accepted only until the end of the detailed design phase; anything later is deferred to the next release.
Elaboration:
RAD = [(W_R x R) + (W_A x A) + (W_D x D)] / 100, where W_R + W_A + W_D = 100 and R, A, D are in [0, 1]
CODE = [(W_Source x Source) + (W_Object x Object) + (W_Build x Build)] / 100
TEST = [(W_Bugfinding x Bugfinding) + (W_Debugging x Debugging)] / 100
QA = (pseudo test hours completed) / (total test hours planned)
MD = [(W_RD x RD) + (W_DD x DD) + (W_ID x ID) + (W_TD x TD) + (W_UD x UD)] / 100
SV = [(W_IP x IP) + (W_TP x TP)] / 100
SP = (Maintainability Index reached) / (Maintainability Index desired)
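A sketch of how these sub-factors could be evaluated in practice; the RAD weights 30/20/50 echo the validation slide below, while the completion levels and hour counts are made-up placeholders:

```python
def weighted_factor(parts, weights):
    """Combine sub-factors in [0, 1] with weights that sum to 100."""
    return sum(weights[k] * parts[k] for k in parts) / 100.0

# RAD from Requirements, Analysis and Design completion levels
# (the 1.0 / 0.8 / 0.6 completion values are invented placeholders)
rad = weighted_factor({"R": 1.0, "A": 0.8, "D": 0.6},
                      {"R": 30, "A": 20, "D": 50})

# QA and SP are plain ratios rather than weighted sums
qa = min(320 / 400, 1.0)  # pseudo test hours completed / total planned (placeholder numbers)
sp = min(78 / 85, 1.0)    # Maintainability Index reached / desired (placeholder numbers)

print(round(rad, 2), qa, round(sp, 2))  # 0.76 0.8 0.92
```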

10 Justification & Elaboration (2)
Methods and models used:
1. COCOMO prediction model: Effort = a (KSLOC)^b, Time = c (Effort)^d
2. Halstead's metrics model: E = (n1 x N2 x N x log2 n) / (2 x n2), with n = n1 + n2 and N = N1 + N2; T = E / 18 seconds
3. Albrecht's function point model: FP = UFC x TCF, LOC = (source statements per FP) x FP
4. Zero-failure method
5. Stopping-rules method
6. Maintainability Index method
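To illustrate the first two prediction models, a small sketch of the basic COCOMO and Halstead effort calculations; the COCOMO coefficients used here are the published organic-mode constants, which is an assumption about which calibration applies, and the operator/operand counts are placeholders:

```python
import math

def cocomo_basic(ksloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic-mode constants: effort in person-months, schedule in months."""
    effort = a * ksloc ** b
    time = c * effort ** d
    return effort, time

def halstead(n1, n2, N1, N2):
    """Halstead effort E = (n1 * N2 * N * log2 n) / (2 * n2), time T = E / 18 seconds."""
    n = n1 + n2          # program vocabulary
    N = N1 + N2          # program length
    effort = (n1 * N2 * N * math.log2(n)) / (2 * n2)
    return effort, effort / 18.0

print(cocomo_basic(32))            # e.g. a 32 KSLOC project
print(halstead(20, 45, 300, 250))  # placeholder operator/operand counts
```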

11 Validation of the Formula
ShipIT = [(22 x RAD) + (19 x CODE) + (30 x TEST) + (8 x QA) + (7 x MD) + (9 x SV) + (5 x SP)] / 100 (weights from research data, see Ref. 1)
RAD = [(30 x R) + (20 x A) + (50 x D)] / 100
CODE = [(40 x Source) + (20 x Object) + (40 x Build)] / 100
TEST = [(65 x Bugfinding) + (35 x Debugging)] / 100
QA = QA
MD = [(15 x RD) + (15 x DD) + (15 x ID) + (25 x TD) + (30 x UD)] / 100
SV = [(50 x IP) + (50 x TP)] / 100
SP = SP
Source = [(57 x Sm) + (28 x Am) + (15 x Gm)] / 100 (from Ref. 2)
Build = [(40 x CT) + (30 x HT) + (15 x BPT) + (15 x BCT)] / 100 (from Ref. 2)
Bugfinding = [(35 x L1) + (35 x L2) + (30 x L3)] / 100
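To make the validation concrete, a worked example that plugs the weights quoted above into the formula; the factor values themselves are invented for illustration only:

```python
weights = {"RAD": 22, "CODE": 19, "TEST": 30, "QA": 8,
           "MD": 7, "SV": 9, "SP": 5}

# Hypothetical snapshot of a project late in its cycle (values are invented)
factors = {"RAD": 1.00, "CODE": 0.95, "TEST": 0.80, "QA": 0.70,
           "MD": 0.60, "SV": 0.50, "SP": 0.40}

ship_it = sum(weights[p] * factors[p] for p in weights) / 100.0
print(ship_it)  # about 0.80, i.e. the product is roughly 80% ready to ship
```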

12 Conclusions
- A software release readiness metric is important for balancing time to market against features and quality.
- No single tool or method should be relied on to make the final determination of whether a software product should be released.
- Identifying the measurable factors in the software development life cycle is a skill that comes from experience.
- Considering the most practical metrics, and under the stated assumptions, the formula defined for ShipIT holds.

13 References (selected from the 24 research papers surveyed)
Ref. 1: Robert B. Grady (Hewlett-Packard), "Successfully Applying Software Metrics", IEEE Computer, Vol. 27, No. 9, September 1994, pp. 18-25.
Ref. 2: Gregory A. Hansen (GAPI), "Simulating Software Development Processes", IEEE Computer, Vol. 29, No. 1, January 1996, pp. 73-77.

14 Q & A

