Software Engineering Economics (CS656)

1 Software Engineering Economics (CS656)
Software Cost Estimation w/ COCOMO II Jongmoon Baik

2 Software Cost Estimation

3 “You can't control what you can't measure.”
- Tom DeMarco -

4 Why Estimate Software?
30% of projects never complete
Cost overruns are not uncommon: the average project exceeds its cost estimate by 90% and its schedule by 120%
15% of large projects never deliver anything
Only 16.2% of projects are successful
* Standish Group CHAOS reports, 1998, 1999, 2000

5 When to Estimate?
Estimation during the bid: short duration, fastest possible, least understanding
Estimation at project start: creating the full plan, allocating resources, detailed estimation
Estimation during the project: how do you handle change?

6 Why are we bad at estimating?
Complexity of the systems
Infrequency: how often do we do the “same thing,” compared with manufacturing or construction?
Underestimation bias: computers are “easy”; software is “easy”
We deal with goals, not estimates: “must be done by June”
Complexity is what makes estimating hard

7 Why are we bad at estimating? (2)
Complexity of the systems:
~1,000 FP in a pacemaker (50 KLOC)
~18,800 FP in the Shuttle test scaffolding (1,000,000 LOC)
~75,400 FP in a Nynex switch (4,000,000 LOC)
“Human brain capacity is more or less fixed, but software complexity grows at least as fast as the square of the size of the project” - Tony Bowden

8 Determining “Development Effort”
Development effort measures: person-months, LOC per hour, function points per hour, requirements per hour
Most common is person-months (or person-hours)
We will look at ways to estimate development effort

9 Why Is It Important?
Software cost is big and growing
Many useful software products are not getting developed
Goal: get us better software, not just more software
Boehm et al., “Understanding and Controlling Software Costs”, IEEE TSE, vol. 14, no. 10

10 Software Estimation Techniques
Most important objectives: to explain the development life cycle and to accurately predict the cost of developing a software product.
Model-based techniques: quite a few models have been developed in the last couple of decades. Many of them are proprietary and hence cannot be compared and contrasted in terms of their model structure and functional form.
Expert-based techniques: useful in the absence of quantified, empirical data. They capture the knowledge and experience of practitioners seasoned within a domain, providing estimates based on a synthesis of the known outcomes of past projects. Obvious drawbacks: an estimate is only as good as the expert's opinion, and there is no way to test that opinion until it is too late to correct the damage if it proves wrong.
Learning-oriented techniques: case studies represent an inductive process, whereby estimators and planners try to learn useful general lessons and estimation heuristics by extrapolating from specific examples. Neural networks are the most common model-building technique used as an alternative to least-squares regression.
Dynamic-based techniques: explicitly acknowledge that software project effort or cost factors change over the duration of system development: dynamic rather than static over time. The system dynamics approach to modeling originated with Jay Forrester in 1961.
Regression-based techniques: the most popular way of building models, often used in conjunction with model-based techniques. OLS (ordinary least squares) is a statistical approach of general linear regression modeling (simple and easy to use). Robust regression improves on standard OLS by alleviating the common problem of outliers in observed software engineering data.
Composite techniques: each of the existing techniques has pros and cons for cost estimation; composite techniques incorporate a combination of two or more techniques to formulate the most appropriate functional form for estimation.

11 Software Cost Estimation Steps
1. Establish objectives: rough sizing, make-or-buy, detailed planning
2. Allocate enough time, dollars, and talent
3. Pin down software requirements: document definitions and assumptions
4. Work out as much detail as the objectives permit
5. Use several independent techniques and sources: top-down vs. bottom-up, algorithm vs. expert judgment
6. Compare and iterate estimates: pin down and resolve inconsistencies; be conservative
7. Follow up

12 WHO SANG KOKOMO? The Beach Boys (1988), “Kokomo”
“Aruba, Jamaica, ooh I wanna take ya / To Bermuda, Bahama, come on, pretty mama / Key Largo, Montego, baby why don't we go, Jamaica / Off the Florida Keys there's a place called Kokomo / That's where you want to go to get away from it all / Bodies in the sand / Tropical drink melting in your hand / We'll be falling in love to the rhythm of a steel drum band / Down in Kokomo …”
The “Kokomo” lyrics originally appeared on the “Cocktail” movie soundtrack.

13 Who are COCOMO?
KBS2 - “an exploration party to challenge the globe” (Sep. 4, 2005): a tribe in Kenya

14 What is COCOMO? “COCOMO (COnstructive COst MOdel) is a model designed by Barry Boehm to give an estimate of the number of programmer-months it will take to develop a software product.”

15 COCOMO II Overview - I
Inputs: software product size estimate; software product, process, computer, and personnel attributes; software reuse, maintenance, and increment parameters; software organization's project data
Outputs: software development and maintenance cost and schedule estimates; cost and schedule distribution by phase, activity, and increment; COCOMO recalibrated to the organization's data
COCOMO II is tuned to modern software life cycles. The original COCOMO model has been very successful, but it doesn't apply to newer software development practices as well as it does to traditional practices. COCOMO II targets the software projects of the 1990s and 2000s.

16 COCOMO II Overview - II
Open interfaces and internals
Published in Software Cost Estimation with COCOMO II (Boehm et al., 2000); the original COCOMO in Software Engineering Economics (Boehm, 1981)
Numerous implementations, calibrations, and extensions: incremental development, Ada, new environment technology
Arguably the most frequently used software cost model worldwide

17 List of COCOMO II Packages
USC COCOMO II (USC)
Costar - Softstar Systems
ESTIMATE PROFESSIONAL - SPC
CostXpert - Marotz
Partial list of COCOMO packages (STSC, 1993): CB COCOMO, GECOMO Plus, COCOMOID, GHL COCOMO, COCOMO1, REVIC, CoCoPro, SECOMO, COSTAR, SWAN, COSTMODL

18 COCOMO II User Objectives
Making investment or other financial decisions involving a software development
Setting project budgets and schedules as a basis for planning and control
Deciding on or negotiating tradeoffs among software cost, schedule, functionality, performance, or quality factors
Making software cost and schedule risk management decisions
Deciding which parts of a software system to develop, reuse, lease, or purchase
Making legacy software inventory decisions: what parts to modify, phase out, outsource, etc.
Setting mixed investment strategies to improve your organization's software capability, via reuse, tools, process maturity, outsourcing, etc.
Deciding how to implement a process improvement strategy
Notes: A business case or return-on-investment analysis involving software development needs either an estimate of the software development cost or a life-cycle software expenditure profile. How many people should you assign to the early, middle, and late stages of a software project? How much of the effort, cost, and schedule should be expended to reach the major project milestones? To what extent can one say “quality is free”? Is it actually cheaper to build life-critical software than to build, say, inventory control software? You will have unavoidable uncertainties about many of the factors influencing your project's cost and schedule, for example the amount of software your project will develop and reuse. Cost models can help you perform sensitivity and risk analyses covering your sources of uncertainty. A good cost model can help you understand, for example, when it is cheaper to build a component than to rework an existing one. The model can also help you develop and periodically update a multi-year plan indicating how many of the highest-priority capabilities can be developed in each legacy system upgrade, or a multi-year technology investment plan.
For example, COCOMO II has capabilities both to estimate the additional investment required to produce reusable software and the resulting savings accruing from its reuse. For process improvement frameworks such as the SEI Capability Maturity Model [Paulk et al., 1995]: for Level 2, how should you combine software cost models such as COCOMO II with other methods of sizing, estimating, planning, and tracking software projects' cost, schedule, and effort? For Level 3, how do you assess the benefits of training in terms of improvements in such cost drivers as application, platform, language, and tool experience? For Level 4, how can frameworks such as COCOMO II help you set up an effective quantitative software management program? For Level 5, how can you evolve this framework to accommodate such new practices as product lines, rapid application development, and commercial off-the-shelf (COTS) software integration?

19 COCOMO II Objectives
Provide accurate cost and schedule estimates for both current and likely future software projects. COCOMO 81 was built on the 1970s “waterfall” process framework (sequentially progressing through requirements, design, code, and test). Many organizations are evolving toward mixes of concurrent, iterative, incremental, and cyclic processes; COCOMO II provides a framework for tailoring a version of the model to your desired process, including the waterfall.
Enable organizations to easily recalibrate, tailor, or extend COCOMO II to better fit their unique situations. Recalibration lets your organization fit COCOMO II to its collected project data, with its own interpretations of project endpoints, product definitions, labor definitions, and organizational practices. Tailoring and extensibility let you orchestrate COCOMO II variants to fit your organization's product, process, and personnel practice variants. COCOMO 81 provided recalibration, but tailoring only within the assumptions of the waterfall model.
Provide careful, easy-to-understand definitions of the model's inputs, outputs, and assumptions. Without such definitions, it is easy to create “million dollar mistakes” when different people use different interpretations of the model's estimates. COCOMO 81 was generally satisfactory here; COCOMO II provides some additional definition assistance for the cost driver rating scales.
Provide a constructive model, in which the job of cost and schedule estimation helps people better understand the nature of the project being estimated. This was a major objective for COCOMO 81, and continues to be for COCOMO II. The primary improvements in this direction have been to add further cost drivers to help understand the effects of project decisions in the reuse and economy-of-scale areas.
Provide a normative model, which allocates the resources necessary for effective software development or maintenance; highlights unrealistic cost and schedule objectives; and identifies management controllables for improving current or future projects. Software cost models calibrated to data generally do well on being normative, because very few poorly-managed projects collect data for calibration. As discussed under the previous objective, COCOMO II identifies further management controllables in the reuse and economy-of-scale areas.
Provide an evolving model, which adds new capabilities to address new needs (e.g., COTS integration, rapid application development) and maintains relevance to evolving software practices. The COCOMO 81 effort did not plan for model evolution; COCOMO II is a continuing project, both for refining the main project cost-schedule estimation model and for providing extensions (discussed primarily in Chapter 5).

20 COCOMO II Evolution Strategies - I
Proceed incrementally, addressing the estimation issues of most importance and tractability with respect to modeling, data collection, and calibration. The primary initial focus has been on developing a solid model for overall project cost and schedule estimation, plus further areas with a reasonable combination of experience and data, such as software maintenance. Areas with less experience and data (applications composition, COTS integration, rapid application development, detailed effort and schedule breakdowns by stage and activity) are treated more cautiously and discussed in Chapter 5.
Test the models and their concepts on first-hand experience. COCOMO II is used in the annual series of USC Digital Library projects, which involve architecting potential products and the selection, development, and transition of 5-6 products annually. The model needed some modification to adapt to the two-semester, fixed-team-size projects, but the experience has yielded numerous insights, particularly for the COCOMO II extensions for COTS integration and rapid application development.
Establish a COCOMO II Affiliates' program, enabling the team to draw on the prioritized needs, expertise, and calibration data of leading software organizations. This has provided access to the large-project experience unavailable in a university setting: a good mix of commercial software developers, Government software contractors, and Government software acquisition organizations from whom to acquire a balanced set of project data points for calibrating COCOMO II, as well as access to many of the leading software cost estimation and metrics practitioners. A list of the COCOMO II Affiliates is provided in the acknowledgements after the Preface. The Affiliates' prioritized needs and available expertise and data have led to several additional strategies:

21 COCOMO II Evolution Strategies - II
Provide an externally and internally open model, enabling users to fully understand and deal with its inputs, outputs, internal models, and assumptions. The only hidden aspect of COCOMO II is its database of Affiliates' project effort and schedule data, protected by safeguards and procedures that have prevented any leakage of sensitive data.
Avoid unnecessary incompatibilities with COCOMO 81, which most Affiliates still find largely useful and relevant. The few exceptions, where subsequent trends and insights made the original approach obsolete, include eliminating the Turnaround Time cost driver, rebaselining the Software Tools rating scale, eliminating the schedule-stretchout penalty, replacing the linear reuse model with a nonlinear one, and replacing development modes with scale factors.
Experiment with a number of model extensions, prioritized by Affiliates' needs and available data. These include extensions of the main COCOMO II project cost and schedule model, such as cost drivers for development for reusability and personnel continuity, and scale factors for process maturity and team cohesion. They also include the experimental extensions discussed in Chapter 5, covering estimators for COTS integration and rapid development effects, and for delivered software quality (defect density).
Balance expert-determined and data-determined modeling, most effectively via Bayesian analysis techniques. The Bayesian calibration approach has been the most significant methodological improvement; Section 1.6 provides an overview, and Chapter 4 the full approach and results.
Develop a sequence of increasingly accurate models, based on the increasingly detailed and accurate input data available to estimators as they progress through succeeding stages of the software life cycle: the Applications Composition, Early Design, and Post-Architecture sequence of models discussed in the next two sections.
Key the COCOMO II models to projections of future software life cycle practices.

22 S/W Estimation Accuracy vs. Phase
[Figure: relative size and cost range (4x, 2x, 1.5x, 1.25x, x, 0.5x, 0.25x) vs. phases and milestones: feasibility, concept of operation, plans and requirements, product design, detail design, development, test, accepted software; data from completed programs and USAF/ESD proposals]
The figure illustrates the effects of project uncertainties on the accuracy of software size and cost estimates. In the very early stages, one may not know the specific nature of the product to be developed to better than a factor of 4. As the life cycle proceeds and product decisions are made, the nature of the product, and consequently its size and cost drivers, become better known. COCOMO II accordingly uses coarse-grained cost driver information in the early project stages and increasingly fine-grained cost drivers in later stages. This is the “Cone of Software Cost Estimation.”

23 MBASE/Rational Anchor Point Milestones
[Figure: alignment of MBASE/Rational phases and milestones (Inception, Elaboration, Construction, Transition; LCO, LCA, IOC; Application Composition during Inception) with waterfall phases and milestones (Requirements/SRR, Product Design/PDR, Development, System Development/SAT)]

24 Application Composition Model
Challenge: modeling rapid application composition with graphical user interface (GUI) builders, client-server support, object libraries, etc.
Responses: an Application-Point sizing and costing model; reporting estimate ranges rather than point estimates

25 Application Point Estimation Procedure
Complexity weights:
Element Type | Simple | Medium | Difficult
Screen | 1 | 2 | 3
Report | 2 | 5 | 8
3GL Component | - | - | 10
Step 1: Assess element counts: estimate the number of screens, reports, and 3GL components that will comprise this application. Assume the standard definitions of these elements in your ICASE environment.
Step 2: Classify each element instance into simple, medium, or difficult complexity, depending on the values of its characteristic dimensions.
Step 3: Weight the number in each cell using the table above. The weights reflect the relative effort required to implement an instance of that complexity level.
Step 4: Determine Application Points: add all the weighted element instances to get one number, the Application-Point count.
Step 5: Estimate the percentage of reuse you expect to achieve in this project, and compute the New Application Points to be developed: NAP = (Application Points) x (100 - %reuse) / 100.
Step 6: Determine a productivity rate, PROD = NAP/person-month, from the productivity table.
Step 7: Compute the estimated person-months: PM = NAP/PROD.
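Steps 1-7 above can be sketched in a few lines of Python. The weights follow the slide's table; the element counts, reuse percentage, and productivity rate in the example are hypothetical.

```python
# Application-Point sizing and costing sketch (Steps 1-7 above).
WEIGHTS = {  # element type -> (simple, medium, difficult) weights
    "screen": (1, 2, 3),
    "report": (2, 5, 8),
    "3gl_component": (0, 0, 10),  # 3GL components are always rated difficult
}

def application_points(counts):
    """counts: {element_type: (n_simple, n_medium, n_difficult)} -- Steps 1-4."""
    return sum(n * w
               for etype, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[etype]))

def estimate_pm(counts, reuse_pct, prod):
    ap = application_points(counts)          # Step 4: Application-Point count
    nap = ap * (100 - reuse_pct) / 100       # Step 5: New Application Points
    return nap / prod                        # Step 7: PM = NAP / PROD

# Hypothetical project: 10 simple screens, 4 medium reports, 2 3GL components
counts = {"screen": (10, 0, 0), "report": (0, 4, 0), "3gl_component": (0, 0, 2)}
print(application_points(counts))                   # 10*1 + 4*5 + 2*10 = 50
print(round(estimate_pm(counts, reuse_pct=20, prod=13), 2))  # NAP = 40 -> ~3.08 PM
```

The productivity rate of 13 NAP/PM stands in for a Step 6 table lookup based on developer experience and ICASE maturity.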

26 Sizing Methods
Source Lines of Code (SLOC): SEI definition checklist
Unadjusted Function Points (UFP): IFPUG

27 Source Lines of Code
Best source: historical data from previous projects
Expert-judged lines of code
Expressed in thousands of source lines of code (KSLOC)
Definition is difficult across different languages; COCOMO II uses logical source statements (SEI Source Lines of Code checklist)
Excludes COTS, GFS, other products, language support libraries and operating systems, and other commercial libraries

28 SEI Source Lines of Code Checklist

29 Unadjusted Function Points - I
Based on the amount of functionality in a software project and a set of individual project factors
Useful since they are based on information that is available early in the project life cycle
Measure a software project by quantifying the information-processing functionality associated with major external data or control input, output, or file types

30 Unadjusted Function Points - II
Step 1: Determine function counts by type. The unadjusted function point counts should be made by a lead technical person based on information in the software requirements and design documents. Count the number of each of the five user function types: Internal Logical File (ILF), External Interface File (EIF), External Input (EI), External Output (EO), and External Inquiry (EQ).
Step 2: Determine complexity-level function counts. Classify each function count into Low, Average, or High complexity, depending on the number of data element types contained and the number of file types referenced.
Step 3: Apply complexity weights. Weight the number in each cell; the weights reflect the relative value of the function to the user.
Step 4: Compute Unadjusted Function Points. Add all the weighted function counts to get one number, the Unadjusted Function Point count.
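Steps 1-4 can be sketched as follows. The slide's weight table did not survive, so the weights below are the standard IFPUG/COCOMO II values, stated here as an assumption; the counts in the example are hypothetical.

```python
# UFP computation sketch (Steps 1-4 above), using standard IFPUG weights.
UFP_WEIGHTS = {  # function type -> (Low, Average, High) complexity weights
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
}

def unadjusted_function_points(counts):
    """counts: {function_type: (n_low, n_avg, n_high)} from Steps 1-2."""
    return sum(n * w                                   # Step 3: apply weights
               for ftype, ns in counts.items()
               for n, w in zip(ns, UFP_WEIGHTS[ftype]))  # Step 4: sum them

# Hypothetical system: 2 average ILFs, 5 low EIs, 3 low EOs
counts = {"ILF": (0, 2, 0), "EIF": (0, 0, 0),
          "EI": (5, 0, 0), "EO": (3, 0, 0), "EQ": (0, 0, 0)}
print(unadjusted_function_points(counts))  # 2*10 + 5*3 + 3*4 = 47
```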

31 Relating UFPs to SLOC
USC COCOMO II uses a conversion table (“backfiring”) to convert UFPs into equivalent SLOC
Supports 41 implementation languages, plus USR1-5 to accommodate a user's additional implementation languages
Additional ratios and updates:

32 Exercise - I
Suppose you are developing a stand-alone application composed of 2 modules for a client:
Module 1, written in C (FP multiplier for C: 128)
Module 2, written in C++ (FP multiplier for C++: 53)
Determine the UFPs and the equivalent SLOC.
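The backfiring step of the exercise can be sketched as below, using the two multipliers given above. The per-module UFP counts are hypothetical, since the exercise's data table did not survive in this transcript.

```python
# Backfiring sketch: convert each module's UFPs to equivalent SLOC.
FP_TO_SLOC = {"C": 128, "C++": 53}  # multipliers from the exercise

def equivalent_sloc(modules):
    """modules: list of (language, ufp) pairs."""
    return sum(FP_TO_SLOC[lang] * ufp for lang, ufp in modules)

modules = [("C", 30), ("C++", 45)]   # hypothetical UFP counts per module
print(equivalent_sloc(modules))      # 30*128 + 45*53 = 6225
```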

33 Information on Two Modules
[Table: FP default weight values and function counts for Module 1 and Module 2]

34 Early Design & Post-Architecture Models
Effort (both models): PM = A x Size^E x product(EM_i), where E = B + 0.01 x sum(SF_j)
Schedule: TDEV = C x PM^(D + 0.2 x (E - B))
Calibrated constants: B = 0.91, C = 3.67, D = 0.28
Early Design model: 6 EMs; Post-Architecture model: 16 EMs (*excluding the SCED driver)
EMs: effort multipliers reflecting characteristics of the particular software under development
A: multiplicative calibration variable
E: captures relative economies/diseconomies of scale
SF: scale factors
Early Design model: a high-level model used in the exploration of architecture alternatives or incremental development strategies, consistent with the general level of information available and the general level of estimation accuracy needed at that stage.
Post-Architecture model: uses the same approach for product sizing (including reuse) and for scale drivers.
Both models have the same functional form for estimating the effort and calendar time to develop a software project.
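The effort and schedule equations above can be sketched in Python. B, C, and D come from the slide; A = 2.94 and the all-nominal scale-factor values in the example are the published COCOMO II.2000 calibration, stated here as assumptions.

```python
# COCOMO II effort/schedule sketch: PM = A * Size^E * prod(EM),
# E = B + 0.01 * sum(SF); TDEV = C * PM^(D + 0.2*(E - B)).
def cocomo2_estimate(ksloc, scale_factors, effort_multipliers,
                     A=2.94, B=0.91, C=3.67, D=0.28):
    E = B + 0.01 * sum(scale_factors)      # economies/diseconomies of scale
    pm = A * ksloc ** E
    for em in effort_multipliers:          # apply each effort multiplier
        pm *= em
    tdev = C * pm ** (D + 0.2 * (E - B))   # schedule, excluding SCED
    return pm, tdev

# Hypothetical 100-KSLOC project, all-nominal scale factors
# (COCOMO II.2000 nominal values) and all-nominal effort multipliers:
pm, tdev = cocomo2_estimate(100,
                            scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                            effort_multipliers=[1.0] * 16)
print(round(pm, 1), round(tdev, 1))  # person-months, calendar months
```

Note that with the nominal scale factors E exceeds 1.0, so doubling the size more than doubles the estimated effort (diseconomies of scale).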

35 Scale Factors & Cost Drivers
Project level: 5 scale factors, used for both ED & PA models
Early Design: 7 cost drivers
Post-Architecture: 17 cost drivers (product, platform, personnel, project)

36 Project Scale Factors - I
Relative economies or diseconomies of scale:
E < 1.0: economies of scale; productivity increases as project size increases, achieved via project-specific tools (e.g., simulation, testbeds)
E = 1.0: balance; the linear model, often used for cost estimation of small projects
E > 1.0: diseconomies of scale; main factors are the growth of interpersonal communication overhead and the growth of large-system overhead

37 Project Scale Factors - II
Precedentedness (PREC): if a product is similar to several previously developed projects, then precedentedness is high. Features: organizational understanding of product objectives, experience in working with related software systems, concurrent development of associated new hardware and operational procedures, need for innovative data processing architectures/algorithms.
Development Flexibility (FLEX): PREC and FLEX are largely intrinsic to a project and uncontrollable. Features: need for software conformance with pre-established requirements, need for software conformance with external interface specifications, combination of the inflexibilities above with a premium on early completion.
Architecture/Risk Resolution (RESL): a factor combining two scale factors from Ada COCOMO, “Design Thoroughness by PDR” and “Risk Elimination by PDR”. Relates the rating level to the MBASE/RUP Lifecycle Architecture (LCA) milestone as well as the waterfall PDR milestone. A subjective weighted average of factors such as tool support, architect availability, and risk identification/resolution.
Team Cohesion (TEAM): accounts for the sources of project turbulence and entropy due to difficulties in synchronizing the project's stakeholders: users, developers, maintainers, interfacers, and others. These difficulties arise from differences in stakeholder objectives and cultures. A subjective weighted average of characteristics such as consistency of stakeholder objectives and cultures; ability and willingness of stakeholders to accommodate other stakeholders' objectives; experience of stakeholders in operating as a team; and stakeholder teambuilding to achieve a shared vision and commitments.
Process Maturity (PMAT): organized around the SEI CMM.

38 PMAT as EPML
PMAT can be rated as an Equivalent Process Maturity Level (EPML), derived from the organization's compliance with the CMM Key Process Areas.

39 PA Model - Product EMs
Required Software Reliability (RELY): the measure of the extent to which the software must perform its intended function over a period of time. If the effect of a software failure is only slight inconvenience, RELY is Very Low; if a failure would risk human life, RELY is Very High. This cost driver can be influenced by the requirement to develop software for reusability (see RUSE).
Database Size (DATA): attempts to capture the effect large data requirements have on product development. The rating is determined by calculating D/P, the ratio of bytes in the database to SLOC in the program. Database size matters because of the effort required to generate the test data used to exercise the program; DATA captures the effort needed to assemble the data required to complete testing of the program through IOC.
Developed for Reusability (RUSE): accounts for the additional effort needed to construct components intended for reuse on current or future projects. This effort goes into creating a more generic software design, more elaborate documentation, and more extensive testing to ensure components are ready for use in other applications.
Documentation Match to Life-Cycle Needs (DOCU): evaluated in terms of the suitability of the project's documentation to its life-cycle needs. Attempting to save costs via Very Low or Low documentation levels will generally incur extra costs during the maintenance portion of the life cycle. Poor or missing documentation increases the Software Understanding (SU) increment.

40 PA Model - CPLX
Product Complexity (CPLX): complexity is divided into five areas: control operations, computational operations, device-dependent operations, data management operations, and user interface management operations. Using the CPLX table, select the area or combination of areas that characterize the product or the component of the product being rated. The complexity rating is the subjective weighted average of the selected area ratings.

41 PA Model - Platform EMs
Execution Time Constraint (TIME): a measure of the execution-time constraint imposed on a software system, expressed as the percentage of available execution time expected to be used by the system or subsystem consuming the execution-time resource. The rating ranges from Nominal (less than 50% of the execution-time resource used) to Extra High (95% consumed).
Main Storage Constraint (STOR): the degree of main storage constraint imposed on a software system or subsystem. Given the remarkable increases in available processor execution time and main storage, one can question whether these constraint variables are still relevant; however, many applications continue to expand to consume whatever resources are available, particularly large and growing COTS products, keeping these cost drivers relevant. The rating ranges from Nominal (less than 50%) to Extra High (95%).
Platform Volatility (PVOL): “platform” here means the complex of hardware and software (OS, DBMS, etc.) the software product calls on to perform its tasks. If the software to be developed is an operating system, the platform is the computer hardware; if a database management system, the hardware and the operating system; if a network text browser, the network, computer hardware, operating system, and distributed information repositories. The platform includes any compilers or assemblers supporting the development of the software system. The rating ranges from Low (a major change every 12 months) to Very High (a major change every two weeks).

42 PA Model - Personnel EMs
Analyst Capability (ACAP): analysts are personnel who work on requirements, high-level design, and detailed design. The major attributes to consider are analysis and design ability, efficiency and thoroughness, and the ability to communicate and cooperate.
Programmer Capability (PCAP): evaluation should be based on the capability of the programmers as a team rather than as individuals. Major factors are ability, efficiency and thoroughness, and the ability to communicate and cooperate.
Personnel Continuity (PCON): rated in terms of the project's annual personnel turnover.
Applications Experience (APEX): depends on the level of applications experience of the project team developing the software system or subsystem, defined in terms of the team's equivalent level of experience with this type of application.
Language and Tool Experience (LTEX): a measure of the project team's level of programming language and software tool experience. Software development includes the use of tools for requirements and design representation and analysis, configuration management, document extraction, library management, program style and formatting, consistency checking, planning, and control.
Platform Experience (PLEX): the Post-Architecture model broadens the productivity influence of platform experience (formerly labeled PEXP) by recognizing the importance of understanding the use of more powerful platforms, including graphical user interface, database, networking, and distributed middleware capabilities.

43 PA Model - Project EMs
Use of Software Tools (TOOL): software tools have improved significantly since the 1970s projects used to calibrate the 1981 version of COCOMO. The tool rating ranges from simple edit and code (Very Low) to integrated life-cycle management tools (Very High). A Nominal TOOL rating in COCOMO 81 is equivalent to a Very Low TOOL rating in COCOMO II. An emerging extension of COCOMO II is elaborating the TOOL rating scale and breaking out the effects of tool capability, maturity, and integration.
Multisite Development (SITE): given the increasing frequency of multisite development, and indications that its effects are significant, the SITE cost driver was added in COCOMO II. Determining its rating involves the assessment and judgment-based averaging of two factors: site collocation (from fully collocated to international distribution) and communication support (from surface mail and some phone access to full interactive multimedia).
Required Development Schedule (SCED): measures the schedule constraint imposed on the project team, defined in terms of the percentage of schedule stretch-out or acceleration with respect to a nominal schedule for a project requiring a given amount of effort. Accelerated schedules tend to produce more effort in the earlier phases to eliminate risks and refine the architecture, and more effort in the later phases to accomplish more testing and documentation in parallel.

44 ED EMs vs. PA EMs
RCPX: Product Reliability and Complexity
RUSE: Developed for Reusability
PDIF: Platform Difficulty
PERS: Personnel Capability
PREX: Personnel Experience
FCIL: Facilities
SCED: Required Development Schedule

45 ED Model EMs - RCPX

46 ED Model EMs - PDIF

47 ED Model EMs - PERS

48 ED Model EMs - PREX

49 ED Model EMs - FCIL

50 Calibration & Prediction Accuracy
COCOMO calibration
- MRE: Magnitude of Relative Error
- PRED(.30): percentage of estimates within 30% of actuals
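These accuracy measures are standard in estimation-model calibration: MRE is the magnitude of relative error of one estimate against the actual outcome, and PRED(.30) is the fraction of projects whose MRE is at most 0.30. A minimal sketch, with hypothetical effort figures (the sample values are not from the slides):

```python
def mre(actual, estimated):
    """Magnitude of Relative Error for one project."""
    return abs(actual - estimated) / actual

def pred(actuals, estimates, level=0.30):
    """PRED(level): fraction of estimates with MRE <= level."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

actuals   = [100, 250, 80, 400]   # person-months (hypothetical)
estimates = [115, 240, 120, 390]
print(pred(actuals, estimates))   # 3 of 4 within 30% -> 0.75
```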

51 COCOMO II Family

52 COCOMO Model Comparison

53 USC-COCOMO II.2000 Demo.

54 Reuse & Product Line Mgmt.
Challenges
- Estimate the costs of both reusing software and developing software for future reuse
- Estimate extra effects on schedule (if any)
Responses
- New nonlinear reuse model for effective size
- Cost of developing reusable software estimated by the RUSE effort multiplier
- Gathering schedule data

55 Non-Linear Reuse Effect

56 Primary Cost Factors for Reuse (NASA)
Cost of Understanding
- 47% of the effort in SW maintenance involves understanding the SW to be modified [Parikh-Zvegintzov 1983]
Relative Cost of Checking Module Interfaces
- Relation between the number of modified modules and the number of module interface checks [Gerlich-Denskat 1994]:
  N = k × (m − k) + k × (k − 1) / 2
  where N: number of module interface checks required; m: number of modules for reuse; k: number of modules modified
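The Gerlich-Denskat relation can be sketched directly; assuming the quadratic form N = k(m − k) + k(k − 1)/2, the number of interface checks grows quadratically in the number of modified modules k, which is one source of the nonlinear reuse effect:

```python
def interface_checks(m: int, k: int) -> int:
    """N = k*(m - k) + k*(k - 1)/2  [Gerlich-Denskat 1994].

    m: modules available for reuse, k: modules modified.
    """
    return k * (m - k) + k * (k - 1) // 2

print(interface_checks(10, 3))   # -> 24
```

Modifying even a few modules of a large reused component can therefore trigger many interface checks.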

57 COCOMO II Reuse Model
Add Assessment & Assimilation increment (AA)
- Similar to conversion planning increment
Add Software Understanding increment (SU)
- To cover nonlinear software understanding effects
- Coupled with software unfamiliarity level (UNFM)
- Applied only if reused software is modified
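A minimal sketch of this size adjustment, assuming the COCOMO II.2000 equivalent-size form: AAF combines the design/code/integration modification percentages, and the AA and SU × UNFM increments add the assessment and understanding costs (the SU term vanishes when nothing is modified, since AAF is then zero):

```python
def aaf(dm, cm, im):
    """Adaptation Adjustment Factor; dm, cm, im in percent."""
    return 0.4 * dm + 0.3 * cm + 0.3 * im

def equivalent_sloc(asloc, dm, cm, im, aa, su, unfm):
    """Equivalent new size of adapted code (COCOMO II.2000 form).

    aa, su in percent; unfm in [0, 1]. SU applies only to modified code.
    """
    f = aaf(dm, cm, im)
    if f <= 50:
        aam = (aa + f * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + f + su * unfm) / 100
    return asloc * aam

# Hypothetical inputs: 10 KSLOC adapted, 10% design / 20% code / 20%
# integration modified, AA = 2%, SU = 30%, somewhat unfamiliar team.
print(equivalent_sloc(10_000, 10, 20, 20, 2, 30, 0.4))
```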

58 Software Understanding

59 Assessment and Assimilation (AA)

60 Unfamiliarity (UNFM)

61 Guidelines for Quantifying Adapted Software

62 Requirement Evolution & Volatility (REVL)
Adjust the effective size of the product
- Causal factors: user interface evolution, technology upgrades, or COTS volatility
- REVL: percentage of code discarded due to requirements evolution
- E.g., SLOC = 100K and REVL = 20% → effective size = 100K × (1 + 20/100) = 120 KSLOC
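The REVL adjustment on the slide reduces to a one-line size inflation by the discarded-code percentage:

```python
def effective_size(sloc, revl_pct):
    """Inflate size by the REVL percentage of code expected to be discarded."""
    return sloc * (1 + revl_pct / 100)

print(effective_size(100_000, 20))  # -> 120000.0
```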

63 Example: Manufacturing Control System
Reused Code: 100 KSLOC
Full Cost: 2.94 × (100)^1.10 × (1.18) × ($8K/PM) = $4400K
International Factory Reuse: halfway between VH and XH
Recommended Reliability rating: 1 level lower
Recommended Documentation rating: High
Develop for Reuse: $4400K × (1.195) × (1.18) × (1.11) = $6824K
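Replaying the slide's full-cost arithmetic in code, assuming the figures it shows: A = 2.94, 100 KSLOC, exponent 1.10, a single combined effort multiplier of 1.18, and a labor rate of $8K per person-month:

```python
A, size_ksloc, E, em, rate_k = 2.94, 100, 1.10, 1.18, 8

person_months = A * size_ksloc ** E * em
cost_k = person_months * rate_k
print(round(cost_k))  # about $4,399K, which the slide rounds to $4400K
```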

64 Subsequent Development w/ Reuse
Black-box plug-and-play: 30 KSLOC
Reuse with modifications: 30 KSLOC
New factory-specific SW: 40 KSLOC
Assessment and assimilation (AA): 2%
Software understanding factor (SU): 30%
Unfamiliarity factor (UNFM): 0.1
% design modified (DM): 10%
% code modified (CM): 20%
% integration modified (IM): 20%
AAF = [.4(10) + .3(20) + .3(20)] / 100 = .16
ESLOC = 40 + (30)(.02) + (30)(.02 + (.3)(.1) + .16) = 40 + 0.6 + 6.3 = 46.9
COST = 2.94 × (46.9)^1.10 × (1.18)(1.195)(1.18)(1.1) × ($8K) = $2966K
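Re-running the slide's equivalent-size and cost arithmetic as a check. Note that the 40 KSLOC of new code and the AA/SU/UNFM values used below are inferred from the ESLOC line itself, since several figures were dropped from the transcribed slide:

```python
aa = 0.02                 # assessment & assimilation, as a fraction
su_unfm = 0.3 * 0.1       # software understanding x unfamiliarity
aaf = 0.16                # [.4(10) + .3(20) + .3(20)] / 100

# new + black-box (AA only) + modified (AA + SU*UNFM + AAF), in KSLOC
esloc = 40 + 30 * aa + 30 * (aa + su_unfm + aaf)
print(round(esloc, 1))    # -> 46.9

cost_k = 2.94 * esloc ** 1.10 * 1.18 * 1.195 * 1.18 * 1.1 * 8
print(round(cost_k))      # close to the slide's $2966K
```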

65 Reuse vs. Redevelopment

66 Q & A

