
1 Barry Boehm, USC-CSSE Fall 2011
Future Challenges for Systems and Software Cost Estimation and Measurement. Barry Boehm, USC-CSSE, Fall 2011. Many people have provided us with valuable insights on the challenge of integrating systems and software engineering, especially at the OSD/USC workshops in October 2007 and March. We would particularly like to thank Bruce Amato (OSD), Elliot Axelband (Rand/USC), William Bail (Mitre), J.D. Baker (BAE Systems), Kristen Baldwin (OSD), Kirstie Bellman (Aerospace), Winsor Brown (USC), Jim Cain (BAE Systems), David Castellano (OSD), Clyde Chittister (CMU-SEI), Les DeLong (Aerospace), Chuck Dreissnack (SAIC/MDA), Tom Frazier (IDA), George Friedman (USC), Brian Gallagher (CMU-SEI), Stuart Glickman (Lockheed Martin), Gary Hafen (Lockheed Martin), Dan Ingold (USC), Judy Kerner (Aerospace), Kelly Kim (Boeing), Sue Koolmanojwong (USC), Per Kroll (IBM), DeWitt Latimer (USAF/USC), Rosalind Lewis (Aerospace), Azad Madni (ISTI), Mark Maier (Aerospace), Darrell Maxwell (USN), Ali Nikolai (SAIC), Lee Osterweil (UMass), Karen Owens (Aerospace), Adrian Pitman (Australia DMO), Art Pyster (Stevens), Shawn Rahmani (Boeing), Bob Rassa (Raytheon), Don Reifer (RCI/USC), John Rieff (Raytheon), Stan Rifkin (Master Systems), Wilson Rosa (USAF), Walker Royce (IBM), Kelly Schlegel (Boeing), Tom Schroeder (BAE Systems), David Seaver (Price Systems), Rick Selby (Northrop Grumman), Stan Settles (USC), Neil Siegel (Northrop Grumman), Frank Sisti (Aerospace), Peter Suk (Boeing), Denton Tarbet (Galorath), Rich Turner (Stevens), Gan Wang (BAE Systems), and Marilee Wheaton (Aerospace) for their valuable contributions to the study. 4/27/2017 USC-CSSE

2 Summary
Current and future trends create challenges for systems and software data collection and analysis:
- Metrics and "productivity": "equivalent" size; requirements/design/product/value metrics; productivity growth and decline phenomena
- Cost drivers: effects of complexity, volatility, architecture
- Alternative processes: rapid/agile; systems of systems; evolutionary development
- Model integration: systems and software; cost, schedule, and quality; costs and benefits
Updated systems and software data definitions and estimation methods are needed for good management; being addressed in the Nov 4-5 workshops.

3 Metrics and “Productivity”
- "Equivalent" size
- Requirements/design/product/value metrics
- Productivity growth phenomena
- Incremental development productivity decline

4 Size Issues and Definitions
An accurate size estimate is the most important input to parametric cost models, and consistent size definitions and measurements are desired across different models and programming languages. The AFCAA Guide sizing chapter addresses these needs:
- Common size measures defined and interpreted for all the models
- Guidelines for estimating software size
- Guidelines to convert size inputs between models so projects can be represented in a consistent manner
- Using Source Lines of Code (SLOC) as the common measure: logical source statements consisting of data declarations and executables
- Rules for considering statement type, how produced, origin, build, etc.
- Providing automated code counting tools adhering to the definition
- Providing conversion guidelines for physical statements
- Addressing other size units such as requirements, use cases, etc.
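The intent of logical-statement counting can be illustrated with a toy counter. This is only a sketch of the rules for C-like code (semicolon-terminated statements and block openers count; blank, comment-only, and closing-brace lines do not), not the automated code-counting tools the guide refers to:

```python
# Toy logical-SLOC counter: counts logical source statements rather than
# physical lines, in the spirit of the common-definition rules above.
# Not the official code counter; C-like toy rules only.

def logical_sloc(source: str) -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.split("//")[0].strip()   # drop // comments
        if not stripped:
            continue                              # blank / comment-only line
        if stripped.endswith(";") or stripped.endswith("{"):
            count += 1                            # declaration, executable,
    return count                                  # or control-structure opener

sample = """
int x = 0;           // one declaration
for (i = 0; i < n; i++) {
    x += i;
}
"""
print(logical_sloc(sample))  # 3 logical statements (4 physical lines)
```

The for-loop header counts as one logical statement even though it spans three semicolons, which is exactly where logical and physical counts diverge.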

5 Equivalent SLOC – A User Perspective *
"Equivalent": a way of accounting for the relative work done to generate software, relative to the code-counted size of the delivered software. "Source" lines of code: the number of logical statements prepared by the developer and used to generate the executing code.
- Usual third-generation language (C, Java): count logical 3GL statements
- For model-driven, very-high-level-language, or macro-based development: count the statements that generate the customary 3GL code
- For maintenance above the 3GL level: count the generator statements
- For maintenance at the 3GL level: count the generated 3GL statements
Two primary effects: volatility and reuse.
- Volatility: % of ESLOC reworked or deleted due to requirements volatility
- Reuse: either with modification (modified) or without modification (adopted)
*Stutzke, R., Estimating Software-Intensive Systems, Addison Wesley, 2005.
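A minimal sketch of the equivalent-size bookkeeping, using the COCOMO II-style adaptation adjustment (0.4·DM + 0.3·CM + 0.3·IM) for modified code and a REVL-style multiplier for volatility; it deliberately ignores the model's assessment/assimilation, software-understanding, and unfamiliarity increments:

```python
# Simplified "equivalent SLOC" sketch in the spirit of COCOMO II reuse
# and requirements-volatility (REVL) handling. Illustrative only.

def adaptation_fraction(dm: float, cm: float, im: float) -> float:
    """Fraction of new-build effort for adapted code, from the % of
    design (dm), code (cm), and integration (im) modified."""
    return (0.4 * dm + 0.3 * cm + 0.3 * im) / 100.0

def equivalent_sloc(new: int, adapted: int,
                    dm: float, cm: float, im: float,
                    revl: float = 0.0) -> float:
    """new/adapted in SLOC; dm/cm/im/revl in percent."""
    esloc = new + adapted * adaptation_fraction(dm, cm, im)
    return esloc * (1.0 + revl / 100.0)   # volatility grows effective size

# 50 KSLOC new + 100 KSLOC modified (20% design, 30% code, 40%
# integration changed), with 10% requirements volatility:
print(equivalent_sloc(50_000, 100_000, 20, 30, 40, revl=10))  # 86900.0
```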

6 Cockburn, Writing Effective Use Cases, 2001
"Number of Requirements":
- Early estimation availability at the kite level
- Data collection and model calibration at the clam level
(Cockburn, Writing Effective Use Cases, 2001)

7 IBM-UK Expansion Factor Experience
- Business Objectives: 5 (Cloud)
- Business Events/Subsystems: 35 (Kite)
- Use Cases/Components: 250 (Sea level)
- Main Steps/Main Operations (Fish)
- Alt. Steps/Detailed Operations: 15,000 (Clam)
- SLOC*: 1,000K – 1,500K (Lava)
*(70 – 100 SLOC/Detailed Operation)
(Hopkins & Jenkins, Eating the IT Elephant, 2008)
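The expansion chain can be checked numerically. The Fish-level count is not given in the source, so the sketch below skips that level; the quoted 70-100 SLOC per detailed operation reproduces the slide's SLOC range:

```python
# Numeric check of the IBM-UK expansion-factor chain (Hopkins & Jenkins,
# 2008). The Fish level is omitted because its count is not quoted.
levels = [
    ("Business Objectives (Cloud)", 5),
    ("Business Events/Subsystems (Kite)", 35),
    ("Use Cases/Components (Sea level)", 250),
    ("Alt. Steps/Detailed Operations (Clam)", 15_000),
]
for (name_a, n_a), (name_b, n_b) in zip(levels, levels[1:]):
    print(f"{name_a} -> {name_b}: x{n_b / n_a:.0f}")

low, high = 15_000 * 70, 15_000 * 100     # SLOC per detailed operation
print(f"Estimated SLOC: {low:,} - {high:,}")  # 1,050,000 - 1,500,000
```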

8 SLOC/Requirement Data (Selby, 2009)

9 Estimation Challenges: A Dual Cone of Uncertainty
Need early systems engineering and evolutionary development:
- Uncertainties in scope, COTS, reuse, services
- Uncertainties in competition, technology, organizations, mission priorities
Shorter increments are better. Uncertainties in competition and technology evolution, and changes in organizations and mission priorities, can wreak havoc with the best of system development programs. In addition, the longer the development cycle, the more likely it is that several of these uncertainties or changes will occur and make the originally defined system obsolete. Therefore, planning to develop a system in short increments helps ensure that early, high-priority capabilities can be developed and fielded, and that changes can be more easily accommodated in future increments.

10 Incremental Development Productivity Decline (IDPD)
Example: Site Defense BMD software (5 builds, 7 years, $100M; operational and support software):
- Build 1 productivity: over 200 LOC/person-month
- Build 5 productivity: under 100 LOC/PM, including Build 1-4 breakage, integration, and rework
- 318% change in requirements across all builds
- IDPD factor = 20% productivity decrease per build
Similar trends in later unprecedented systems; not unique to DoD (a key source of Windows Vista delays).
Maintenance of full non-COTS SLOC, not ESLOC:
- Build 1: 200 KSLOC new; 200K = 240K ESLOC
- Build 2: 400 KSLOC of Build 1 software to maintain and integrate

11 IDPD Cost Drivers: Conservative 4-Increment Example
Some savings: more experienced personnel (5-20%, depending on personnel turnover rates)
Some increases: code base growth, diseconomies of scale, requirements volatility, user requests
- Breakage, maintenance of full code base (20-40%)
- Diseconomies of scale in development, integration (10-25%)
- Requirements volatility; user requests (10-25%)
Best case: 20% more effort (IDPD = 6%); worst case: 85% (IDPD = 23%)

12 Effects of IDPD on Number of Increments
Model relating productivity decline to the number of builds needed to reach an 8M-SLOC Full Operational Capability:
- Assumes Build 1 production of 2M SLOC at 100 SLOC/PM: 20,000 PM / 24 mo. = 833 developers
- Constant staff size for all builds
- Analysis varies the productivity decline per build
It is extremely important to determine the incremental development productivity decline (IDPD) factor per build.
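The slide's model can be sketched as follows, assuming constant staff and a fixed 2M-SLOC build size; the loop structure and sample IDPD values are illustrative, not the original analysis:

```python
# Sketch: constant staff (833 developers); Build 1 delivers 2M SLOC in
# 24 months at 100 SLOC/PM; each later build's productivity drops by
# the IDPD factor. How many months to reach 8M SLOC (FOC)?

def months_to_foc(idpd, target=8_000_000, staff=833,
                  build_size=2_000_000, initial_productivity=100.0):
    total_sloc, months, productivity = 0, 0.0, initial_productivity
    while total_sloc < target:
        person_months = build_size / productivity  # PM for this build
        months += person_months / staff            # constant staff size
        total_sloc += build_size
        productivity *= (1.0 - idpd)               # decline for next build
    return months

for idpd in (0.0, 0.1, 0.2):
    print(f"IDPD {idpd:.0%}: {months_to_foc(idpd):.0f} months to 8M SLOC")
```

With no decline, four 24-month builds reach FOC in about 96 months; higher IDPD stretches every later build.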

13 Incremental Development Data Challenges
Breakage effects on previous increments:
- Modified, added, deleted SLOC: need a code counter with a diff tool
Accounting for breakage effort:
- Charged to the current increment or to the I&T budget (IDPD)
- IDPD effects may differ by type of software
- "Breakage ESLOC" added to the next increment
Hard to track phase and activity distributions; hard to spread initial requirements and architecture effort.
Size and effort reporting:
- Often reported cumulatively; subtracting the previous increment's size may miss deleted code
Time-certain development: which features were completed? (Fully? Partly? Deferred?)
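The "code counter with diff tool" need can be sketched with Python's difflib, classifying an increment's lines as added or deleted so that breakage is not hidden by cumulative size counts (toy line lists, not a real code counter):

```python
# Classify an increment's lines as added vs. deleted relative to the
# previous increment, so modified/deleted code (breakage) is visible.
import difflib

def sloc_delta(previous, current):
    added = deleted = 0
    matcher = difflib.SequenceMatcher(a=previous, b=current)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("delete", "replace"):
            deleted += i2 - i1          # lines removed from previous build
        if op in ("insert", "replace"):
            added += j2 - j1            # lines new in current build
    return added, deleted

build1 = ["init();", "read();", "process();", "write();"]
build2 = ["init();", "read_v2();", "process();", "log();", "write();"]
print(sloc_delta(build1, build2))  # (2, 1): read() replaced, log() added
```

Subtracting cumulative totals would report a net of +1 line here, missing the deleted line entirely.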

14 “Equivalent SLOC” Paradoxes
- Not a measure of software size
- Not a measure of software effort
- Not a measure of delivered software capability
- A quantity derived from software component sizes and reuse factors that helps estimate effort
Once a product or increment is developed, its ESLOC loses its identity: its size expands into full SLOC. Reuse factors can be applied to this to determine an ESLOC quantity for the next increment, but that quantity has no relation to the product's size.

15 COCOMO II Database Productivity Increases
Two productivity-increasing trends exist: 1970-1994 and 1995-2009.
- Earlier productivity trends are largely explained by the cost drivers and scale factors
- Post-2000 productivity trends are not explained by the cost drivers and scale factors
(Chart: SLOC per PM over five-year periods)

16 Constant A Decreases Over Post-2000 Period
- Calibrate the constant A while holding the exponent B fixed at 0.91
- Constant A is the inverse of adjusted productivity, i.e., productivity adjusted by the scale factors (SFs) and effort multipliers (EMs)
- Constant A decreases over the periods: a 50% decrease over the post-2000 period
- Productivity is not fully characterized by SFs and EMs; what factors can explain the phenomenon?
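A sketch of the calibration idea: with B held at 0.91, each completed project implies a value of A, and a downward drift in implied A over time is productivity gain the drivers do not capture. The project records below are hypothetical:

```python
# With effort PM = A * KSLOC^B * (product of EMs), each project implies
# A_i = PM_i / (KSLOC_i^B * EM_i). Declining A over calibration periods
# means the SFs/EMs are not explaining all the productivity growth.
# Project records here are invented for illustration.
import math

B = 0.91

def implied_A(pm, ksloc, em_product):
    return pm / (ksloc ** B * em_product)

def geo_mean_A(projects):
    values = [implied_A(*p) for p in projects]
    return math.exp(sum(math.log(v) for v in values) / len(values))

projects_1990s = [(500, 100, 1.2), (120, 30, 1.0)]  # (PM, KSLOC, EM product)
projects_2000s = [(300, 100, 1.2), (70, 30, 1.0)]

print(geo_mean_A(projects_1990s), geo_mean_A(projects_2000s))
```

The geometric mean is the usual aggregate for multiplicative calibration constants.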

17 Candidate Explanation Hypotheses
Productivity has doubled over the last 40 years, but the scale factors and effort multipliers do not fully characterize this increase. Hypotheses/questions for explanation:
- Is the standard for rating personnel factors being raised (e.g., relative to the "national average")?
- Was generated code counted as new code (e.g., model-driven development)?
- Was reused code counted as new code?
- Are the ranges of some cost drivers not large enough? (Improvement in tools (TOOL) contributes only a 20% reduction in effort.)
- Are more lightweight projects being reported (documentation relative to life-cycle needs)?

18 Summary
Current and future trends create challenges for systems and software data collection and analysis:
- Metrics and "productivity": "equivalent" size; requirements/design/product/value metrics; productivity growth and decline phenomena
- Cost drivers: effects of complexity, volatility, architecture
- Alternative processes: rapid/agile; systems of systems; evolutionary development
- Model integration: systems and software; cost, schedule, and quality; costs and benefits
Updated systems and software data definitions and estimation methods are needed for good management; being addressed in the Nov 4-5 workshops.

19 Cost Driver Rating Scales and Effects
- Application complexity: Difficulty and Constraints scales
- Architecture, criticality, and volatility effects: architecture effects as a function of product size; added effects of criticality and volatility

20 Candidate AFCAA Difficulty Scale
Difficulty would be described in terms of required software reliability, database size, product complexity, integration complexity, information assurance, real-time requirements, different levels of developmental risks, etc.

21 Candidate AFCAA Constraints Scale
Dimensions of constraints include electrical power, computing capacity, storage capacity, repair capability, platform volatility, physical environment accessibility, etc.

22 Added Cost of Weak Architecting
Calibration of the COCOMO II Architecture and Risk Resolution factor to 161 project data points.

23 Effect of Size on Software Effort Sweet Spots

24 Effect of Volatility and Criticality on Sweet Spots

25 Summary
Current and future trends create challenges for systems and software data collection and analysis:
- Metrics and "productivity": "equivalent" size; requirements/design/product/value metrics; productivity growth and decline phenomena
- Cost drivers: effects of complexity, volatility, architecture
- Alternative processes: rapid/agile; systems of systems; evolutionary development
- Model integration: systems and software; cost, schedule, and quality; costs and benefits
Updated systems and software data definitions and estimation methods are needed for good management; being addressed in the Nov 4-5 workshops.

26 Estimation for Alternative Processes
- Agile methods: Planning Poker/Wideband Delphi; "yesterday's weather" adjustment (Agile COCOMO II)
- Evolutionary development: schedule/cost/quality as independent variable; incremental development productivity decline
- Systems of systems
- Hybrid methods

27 Planning Poker/Wideband Delphi
- Stakeholders formulate the story to be developed
- Developers choose and show cards indicating their estimated ideal person-weeks to develop the story; card values: 1, 2, 3, 5, 8, 13, 20, 30, 50, 100
- If card values are about the same, use the median as the estimated effort
- If card values vary significantly, discuss why some estimates are high and some low, then re-vote
- Generally, values will converge, and the median can be used as the estimated effort
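The voting rule described above can be sketched as follows; the "close enough" spread threshold is an assumption for illustration, not part of the technique:

```python
# Planning-poker aggregation sketch: take the median when estimates are
# close, otherwise signal that the team should discuss and re-vote.
import statistics

CARDS = [1, 2, 3, 5, 8, 13, 20, 30, 50, 100]

def poker_round(votes, spread_ratio=3.0):
    """Return (estimate, converged). 'Close' here means max/min within
    spread_ratio - an assumed threshold, not a standard one."""
    assert all(v in CARDS for v in votes)
    if max(votes) <= spread_ratio * min(votes):
        return statistics.median(votes), True
    return None, False        # discuss the outliers, then re-vote

print(poker_round([5, 8, 8, 13]))  # (8.0, True): close enough, use median
print(poker_round([2, 8, 50]))     # (None, False): wide spread, discuss
```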

28 Agile COCOMO II: Adjusting Agile "Yesterday's Weather" Estimates
Agile COCOMO II is a web-based software cost estimation tool that lets you adjust your estimates by analogy, by identifying the factors that will be changing and by how much. Its Step 1 form collects:
- Analogy parameter and baseline value: dollars; person-months; dollars/function point; dollars/line of code; function points/person-month; lines of code/person-month; ideal person-weeks/iteration
- Current project size: function points or SLOC
- Current labor rate: dollars/person-month, or dollars/ideal person-week
- Current iteration number
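The "yesterday's weather" adjustment idea can be sketched as scaling a baseline productivity by the ratio of the cost drivers that are changing; the driver names and multiplier values below are illustrative, not the tool's actual tables:

```python
# Analogy-based adjustment sketch: start from last iteration's observed
# productivity, then rescale it for each COCOMO-style driver that will
# change. Driver values here are invented for illustration.

def adjusted_estimate(new_size_sloc, baseline_sloc_per_pm, driver_changes):
    """driver_changes maps driver name -> (old multiplier, new multiplier).
    Effort multipliers scale effort, so they *divide* productivity."""
    productivity = baseline_sloc_per_pm
    for old_em, new_em in driver_changes.values():
        productivity *= old_em / new_em
    return new_size_sloc / productivity      # estimated person-months

# Last iteration: 350 SLOC/PM. Next one has a less experienced team
# (APEX 1.0 -> 1.1) but better tools (TOOL 1.0 -> 0.9):
pm = adjusted_estimate(7000, 350, {"APEX": (1.0, 1.1), "TOOL": (1.0, 0.9)})
print(round(pm, 1))  # 19.8
```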

29 Incremental Development Forms
Evolutionary Sequential
- Examples: small: agile; large: evolutionary development
- Pros: adaptability to change; rapid fielding
- Cons: easiest-first; late, costly breakage
- Cost estimation: small: planning-poker-type; large: parametric with IDPD

Prespecified Sequential
- Examples: platform base plus PPPIs
- Pros: prespecifiable full-capability requirements
- Cons: emergent requirements or rapid change
- Cost estimation: COINCOMO with no increment overlap

Overlapped Evolutionary
- Examples: product lines with ultrafast change
- Pros: modular product line
- Cons: cross-increment breakage
- Cost estimation: parametric with IDPD and requirements volatility

Rebaselining Evolutionary
- Examples: mainstream product lines; systems of systems
- Pros: high assurance with rapid change
- Cons: highly coupled systems with very rapid change
- Cost estimation: COINCOMO and IDPD for development; COSYSMO for rebaselining

Time-phasing terms: Scoping; Architecting; Developing; Producing; Operating (SADPO)
- Prespecified Sequential: SA; DPO1; DPO2; DPO3; ...
- Evolutionary Sequential: SADPO1; SADPO2; SADPO3; ...
- Evolutionary Overlapped: SADPO1; SADPO2; SADPO3; ...
- Evolutionary Concurrent: SA; D1; PO1... SA2; D2; PO2... SA3; D3; PO3...

30 Evolutionary Development Implications
- Total Package Procurement doesn't work: can't determine requirements and cost up front
- Need significant, sustained systems engineering effort: best-effort up-front architecting for evolution; can't dismiss systems engineers after the Preliminary Design Review
- Feature set size becomes the dependent variable: add or drop borderline-priority features to meet schedule or cost; implies the prioritizing and architecting steps of the SAIV process model; safer than trying to maintain a risk reserve

31 Future DoD Challenges: Systems of Systems
(Diagram: SoS-level valuation, exploration, architecting, development, and operation spans for Systems A, B, C, ..., x and candidate suppliers/strategic partners 1..n; LCO-type proposal and feasibility information at source selection; SoS-level and system-level FCR/DCR/OCR milestones; rebaseline/adjustment cycles.)

32 Conceptual SoS SE Effort Profile
- SoS SE activities focus on three somewhat independent activity areas, performed by relatively independent teams
- A given SoS SE team may be responsible for one, two, or all activity areas
- Some SoS programs may have more than one organization performing SoS SE activities

33 SoS SE Cost Model
SoSs supported by the cost model:
- Strategically-oriented stakeholders interested in tradeoffs and costs
- Long-range architectural vision for the SoS, developed and integrated by an SoS SE team
- System component independence
Size drivers and cost factors are based on product characteristics, processes that impact SoS SE team effort, and SoS SE personnel experience and capabilities, across three activity areas: planning, requirements management, and architecting; source selection and supplier oversight; SoS integration and testing.
(Diagram: size drivers and cost factors, with calibration, determine SoS SE effort.)

34 Comparison of SE and SoSE Cost Model Parameters
Size drivers
- COSYSMO: # of system requirements; # of system interfaces; # of operational scenarios; # of algorithms
- COSOSIMO: # of SoS requirements; # of SoS interface protocols; # of constituent systems; # of constituent-system organizations

"Product" characteristics (size/complexity/volatility)
- COSYSMO: requirements understanding; architecture understanding; level of service requirements; # of recursive levels in design; migration complexity; technology risk; #/diversity of platforms/installations; level of documentation
- COSOSIMO: component system maturity and stability; component system readiness

Process characteristics
- COSYSMO: process capability; multi-site coordination; tool support
- COSOSIMO: maturity of processes; cost/schedule compatibility; SoS risk resolution

People characteristics
- COSYSMO: stakeholder team cohesion; personnel/team capability; personnel experience/continuity
- COSOSIMO: SoS team capability

35 Summary
Current and future trends create challenges for systems and software data collection and analysis:
- Metrics and "productivity": "equivalent" size; requirements/design/product/value metrics; productivity growth and decline phenomena
- Cost drivers: effects of complexity, volatility, architecture
- Alternative processes: rapid/agile; systems of systems; evolutionary development
- Model integration: systems and software; cost, schedule, and quality; costs and benefits
Updated systems and software data definitions and estimation methods are needed for good management; being addressed in the Nov 4-5 workshops.

36 Reasoning about the Value of Dependability – iDAVE
iDAVE: Information Dependability Attribute Value Estimator.
- Use the iDAVE model to estimate and track software dependability ROI
- Helps determine how much dependability is enough
- Helps analyze and select the most cost-effective combination of software dependability techniques
- Use estimates as a basis for tracking performance
- Integrates cost estimation (COCOMO II), quality estimation (COQUALMO), and value estimation relationships
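The "how much dependability is enough" question can be sketched as an ROI comparison across candidate dependability investments; all numbers below are invented for illustration, not iDAVE's actual relationships:

```python
# ROI sketch for dependability investments: compare each option's cost
# against the losses it is expected to avoid. Illustrative numbers only.

def dependability_roi(added_cost, loss_per_failure, failures_avoided):
    benefit = loss_per_failure * failures_avoided
    return (benefit - added_cost) / added_cost

options = {
    "extra test cycle":      dependability_roi(100_000, 40_000, 5),
    "formal code review":    dependability_roi(50_000, 40_000, 2),
    "N-version programming": dependability_roi(400_000, 40_000, 8),
}
best = max(options, key=options.get)
print(best, round(options[best], 2))  # extra test cycle 1.0
```

A negative ROI (as for the most expensive option here) signals dependability investment beyond "enough."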

37 iDAVE Model Framework

38 Examples of Utility Functions: Response Time
(Figure: value-vs-time utility curves - real-time control and event support; mission planning and competitive time-to-market, with a critical region; event prediction (weather) and software size; data archiving; priced quality of service.)

39 Tradeoffs Among Cost, Schedule, and Reliability: COCOMO II
- Cost/Schedule/RELY: "pick any two" points for a 100-KSLOC set of features (RELY rated by MTBF in hours)
- Can "pick all three" with a 77-KSLOC set of features

40 The SAIV* Process Model
1. Shared vision and expectations management
2. Feature prioritization
3. Schedule range estimation and core-capability determination: top-priority features achievable within the fixed schedule with 90% confidence
4. Architecting for ease of adding or dropping borderline-priority features, and for accommodating post-IOC directions of growth
5. Incremental development: core capability as increment 1
6. Change and progress monitoring and control: add or drop borderline-priority features to meet the schedule
*Schedule As Independent Variable; the feature set is the dependent variable. Also works with cost, or schedule/cost/quality, as the independent variable.
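Step 6 can be sketched as a greedy feature-fitting rule under a fixed effort capacity; the feature names, priorities, and effort figures are illustrative:

```python
# SAIV sketch: schedule (capacity) is fixed, so the feature set is the
# dependent variable - keep features in priority order until the
# capacity is used up, deferring borderline features.

def fit_to_schedule(features, capacity_pm):
    """features: (name, priority, effort_pm); higher priority kept first.
    Returns (kept, deferred)."""
    kept, deferred, used = [], [], 0.0
    for name, prio, effort in sorted(features, key=lambda f: -f[1]):
        if used + effort <= capacity_pm:
            kept.append(name)
            used += effort
        else:
            deferred.append(name)   # slips to a later increment
    return kept, deferred

features = [("login", 10, 3.0), ("search", 9, 5.0),
            ("reports", 5, 4.0), ("themes", 2, 2.0)]
print(fit_to_schedule(features, capacity_pm=10.0))
# (['login', 'search', 'themes'], ['reports'])
```

Note the greedy rule may skip a large mid-priority feature while keeping a smaller low-priority one, which matches the "borderline features drop first" intent.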

41 How Much Testing Is Enough?
Risk exposure due to low dependability vs. risk due to market share erosion, by sector (Early Startup, Commercial, High Finance); the sweet spot minimizes total risk exposure.

COCOMO II added % test time:  12    22    34    54
COQUALMO P(L):           1.0  .475  .24   .125  .06
Early Startup S(L):      .33  .19   .11   .06   .03
Commercial S(L):         .56  .32   .18   .10
High Finance S(L):       3.0  1.68  .96   .54   .30
Market Risk REm:         .008 .027  .09
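The sweet-spot idea can be sketched as minimizing total risk exposure: the cost of added test time plus the expected loss from shipping with residual defects. The loss curves below are invented stand-ins for the COQUALMO calibration, but they reproduce the qualitative result that high market-erosion risk pulls the sweet spot toward less testing:

```python
# Toy sweet-spot model: total risk = market/schedule loss from added
# test time + expected defect loss. The halving-per-5%-test-time defect
# curve is an invented stand-in for the slide's COQUALMO numbers.

def total_risk(test_frac, delay_loss_rate, defect_loss):
    p_defect = 0.5 ** (test_frac / 0.05)   # toy: halves per +5% test time
    return delay_loss_rate * test_frac + defect_loss * p_defect

def sweet_spot(delay_loss_rate, defect_loss):
    candidates = [i / 100 for i in range(101)]
    return min(candidates,
               key=lambda t: total_risk(t, delay_loss_rate, defect_loss))

# An early startup (high market-erosion loss) should stop testing
# sooner than a high-finance shop (high defect loss):
print(sweet_spot(delay_loss_rate=10.0, defect_loss=1.0))
print(sweet_spot(delay_loss_rate=1.0, defect_loss=10.0))
```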

42 Related Additional Measurement Challenges
- Tracking progress of rebaselining and V&V teams: no global plans; individual changes or software drops
- Earlier test preparation: surrogates, scenarios, testbeds
- Tracking content of time-certain increments: deferred or partial capabilities; effects across the system
- Trend analysis of emerging risks: INCOSE Leading Indicators; SERC Effectiveness Measures
- Contributions to systems effectiveness: Measures of Effectiveness models and parameters
- Systems-of-systems progress, risk, and change tracking: consistent measurement flow-up, flow-down, flow-across

43 Some Data Definition Topics for Discussion in the SW Metrics Unification Workshop, Nov 4-5
Ways to treat data elements:
- COTS and other OTS (open source; services; GOTS; reuse; legacy code)
- Other size units (function points, object points, use case points, etc.)
- Generated code: counting generator directives
- Requirements volatility
- Rolling up CSCIs into systems and systems of systems
Cost model inputs and outputs (e.g., submitting estimate files):
- Scope issues
- Cost drivers, scale factors
- Reuse parameters: Software Understanding, Programmer Unfamiliarity
- Phases included: hardware-software integration; systems-of-systems integration, transition, maintenance
- WBS elements and labor categories included; parallel software WBS
How to involve various stakeholders: government, industry, and commercial cost estimation organizations.

44 Summary
Current and future trends create challenges for systems and software data collection and analysis:
- Metrics and "productivity": "equivalent" size; requirements/design/product/value metrics; productivity growth and decline phenomena
- Cost drivers: effects of complexity, volatility, architecture
- Alternative processes: rapid/agile; systems of systems; evolutionary development
- Model integration: systems and software; cost, schedule, and quality; costs and benefits
Updated systems and software data definitions and estimation methods are needed for good management; being addressed in the Nov 4-5 workshops.

45 References
- Boehm, B., "Some Future Trends and Implications for Systems and Software Engineering Processes," Systems Engineering 9(1), pp. 1-19, 2006.
- Boehm, B., and Lane, J., "Using the ICM to Integrate System Acquisition, Systems Engineering, and Software Engineering," CrossTalk, October 2007, pp. 4-9.
- Boehm, B., Brown, A.W., Clark, B., Madachy, R., Reifer, D., et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
- Dahmann, J., "Systems of Systems Challenges for Systems Engineering," Systems and Software Technology Conference, June 2007.
- Department of Defense (DoD) Instruction, Operation of the Defense Acquisition System, December 2008.
- Galorath, D., and Evans, M., Software Sizing, Estimation, and Risk Management, Auerbach, 2006.
- Lane, J., and Boehm, B., "Modern Tools to Support DoD Software-Intensive System of Systems Cost Estimation," DACS State of the Art Report; also Tech Report USC-CSSE.
- Lane, J., and Valerdi, R., "Synthesizing System-of-Systems Concepts for Use in Cost Modeling," Systems Engineering, Vol. 10, No. 4, December 2007.
- Madachy, R., "Cost Model Comparison," Proceedings, 21st COCOMO/SCM Forum, November 2006.
- Northrop, L., et al., Ultra-Large-Scale Systems: The Software Challenge of the Future, Software Engineering Institute, 2006.
- Reifer, D., "Let the Numbers Do the Talking," CrossTalk, March 2002, pp. 4-8.
- Stutzke, R., Estimating Software-Intensive Systems, Addison Wesley, 2005.
- Valerdi, R., Systems Engineering Cost Estimation with COSYSMO, Wiley, 2010 (to appear).
- USC-CSSE Tech Reports.

46 Backup Charts

47 COSYSMO Operational Concept
Size drivers: # requirements, # interfaces, # scenarios, # algorithms, plus a volatility factor.
Effort multipliers: application factors (8 factors), team factors (6 factors), and a schedule driver.
Size drivers and effort multipliers, with calibration, feed the COSYSMO effort estimate; WBS guided by ISO/IEC 15288.

48 4. Rate Cost Drivers - Application

49 COSYSMO Change Impact Analysis – I – Added SysE Effort for Going to 3 Versions
Size: number, complexity, volatility, and reuse of system requirements, interfaces, algorithms, scenarios (elements)
Versions:
- Add 3-6% per increment for number of elements
- Add 2-4% per increment for volatility
Exercise preparation:
- Add 3-6% per increment for number of elements
- Add 3-6% per increment for volatility
Most significant cost drivers (effort multipliers):
- Migration complexity: 1.10 – 1.20 (versions)
- Multisite coordination: 1.10 – 1.20 (versions, exercise prep.)
- Tool support: 0.75 – 0.87 (due to exercise prep.)
- Architecture complexity: 1.05 – 1.10 (multiple baselines)
- Requirements understanding: 1.05 – 1.10 for increments 1 and 2; 1.0 for increment 3; for increment 4

50 COSYSMO Change Impact Analysis – II – Added SysE Effort for Going to 3 Versions
Cost Element     Incr. 1      Incr. 2      Incr. 3      Incr. 4
Size             1.11 – 1.22  1.22 – 1.44  1.33 – 1.66  1.44 – 1.88
Effort Product                1.00 – 1.52  0.96 – 1.38  0.86 – 1.31
Effort Range     1.11 – 1.85  1.22 – 2.19  1.27 – 2.29  1.23 – 2.46
Arithmetic Mean  1.48         1.70         1.78         1.84
Geometric Mean   1.43         1.63         1.71         1.74
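The effort ranges in this table are consistent with multiplying each increment's size-growth range by its effort-multiplier product range; this check reproduces the Increment 2-4 rows to within the slide's rounding:

```python
# Check: effort range = size range * effort-multiplier product range
# (values taken from this table; small differences are rounding).
rows = {
    2: ((1.22, 1.44), (1.00, 1.52), (1.22, 2.19)),  # size, EM product, effort
    3: ((1.33, 1.66), (0.96, 1.38), (1.27, 2.29)),
    4: ((1.44, 1.88), (0.86, 1.31), (1.23, 2.46)),
}
for incr, (size, em, effort) in rows.items():
    low, high = size[0] * em[0], size[1] * em[1]
    print(f"Incr. {incr}: computed ({low:.2f}, {high:.2f}), table {effort}")
```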

51 COSYSMO Requirements Counting Challenge
- Estimates are made in early stages: relatively few high-level design-to requirements
- Calibration is performed on completed projects: relatively many low-level test-to requirements
- Need to know the expansion factors between levels
- Best model: Cockburn definition levels (cloud, kite, sea level, fish, clam)
- Expansion factors vary by application area and size: one large company: "Magic Number 7"; small e-services projects: more like 3:1, with fewer lower levels
- Survey form available to capture your experience

52 Achieving Agility and High Assurance - I: Using Timeboxed or Time-Certain Development
Precise costing is unnecessary; the feature set is the dependent variable.
(Diagram: ICM Stage II increment view - foreseeable change (plan) feeds short, stabilized development of Increment N from the Increment N baseline toward Increment N transition/O&M; rapid change drives short development increments; high assurance requires stable development increments.)
The ICM is organized to simultaneously address the conflicting challenges of rapid change and high assurance of dependability. It also addresses the need for rapid fielding of incremental capabilities with a minimum of rework. For high assurance, the development of each increment should be short, stable, and provided with a validated baseline architecture, set of requirements, and development plans. The architecture should accommodate any foreseeable changes in the requirements; the next chart shows how unforeseeable changes are handled.

53 Evolutionary Concurrent: Incremental Commitment Model
(Diagram: ICM Stage II, more detailed increment view - unforeseeable change (adapt) and rapid change drive agile rebaselining for future increments and future increment baselines, with deferrals; foreseeable change (plan) feeds short, stabilized development of Increment N from the Increment N baseline to Increment N transition/operations and maintenance; current and future V&V resources perform continuous verification and validation (V&V) of Increment N, receiving artifacts and concerns, supporting stable development increments and high assurance.)
The need to deliver high-assurance incremental capabilities on short fixed schedules means that each increment needs to be kept as stable as possible. This is particularly the case for large, complex systems and systems of systems, in which a high level of rebaselining traffic can easily lead to chaos. In keeping with the use of the spiral model as a risk-driven process model generator, the risks of destabilizing the development process make this portion of the project into a waterfall-like build-to-specification subset of the spiral model activities. The need for high assurance of each increment also makes it cost-effective to invest in a team of appropriately skilled personnel to continuously verify and validate the increment as it is being developed.
However, "deferring the change traffic" does not imply deferring its change impact analysis, change negotiation, and rebaselining until the beginning of the next increment. With a single development team and rapid rates of change, this would require a team optimized to develop to stable plans and specifications to spend much of the next increment's scarce calendar time performing tasks much better suited to agile teams. The appropriate metaphor for addressing rapid change is not a build-to-specification metaphor or a purchasing-agent metaphor but an adaptive "command-control-intelligence-surveillance-reconnaissance" (C2ISR) metaphor. It involves an agile team performing the first three activities of the C2ISR "Observe, Orient, Decide, Act" (OODA) loop for the next increments, while the plan-driven development team performs the "Act" activity for the current increment.
"Observing" involves monitoring changes in relevant technology and COTS products, in the competitive marketplace, in external interoperating systems, and in the environment, and monitoring progress on the current increment to identify slowdowns and likely scope deferrals. "Orienting" involves performing change impact analyses, risk analyses, and tradeoff analyses to assess candidate rebaselining options for the upcoming increments. "Deciding" involves stakeholder renegotiation of the content of upcoming increments, architecture rebaselining, and the degree of COTS upgrading to be done to prepare for the next increment. It also involves updating the future increments' Feasibility Rationales to ensure that their renegotiated scopes and solutions can be achieved within their budgets and schedules. A successful rebaseline means that the plan-driven development team can hit the ground running at the beginning of the "Act" phase of developing the next increment, and the agile team can hit the ground running on rebaselining definitions of the increments beyond.

54 Effect of Unvalidated Requirements: 15-Month Architecture Rework Delay
(Chart: response time (sec, 1-5) vs. cost for Arch. A (custom, many cache processors) and Arch. B (modified client-server), showing the original spec, the spec after prototyping, and the available budget.)

