Cycle Time Analytics: Making decisions using Lead Time and Cycle Time to avoid needing estimates for every story. Troy Magennis. Slides at bit.ly/agilesim.


Cycle Time Analytics: Making decisions using Lead Time and Cycle Time to avoid needing estimates for every story. Troy Magennis. Slides at bit.ly/agilesim. LKCE 2013 – Modern Management Methods.

@t_Magennis, slides at bit.ly/agilesim

(Chart: actual maximum and actual minimum of the underlying range.) Q. What is the chance of the 4th sample being within the range seen after the first three samples? (no duplicates, uniform distribution, picked at random)


(Chart: with three samples drawn, the 4th sample has a 25% chance of being higher than the highest seen, 25% of falling between the highest and the second highest, 25% of falling between the lowest and the second lowest, and 25% of being lower than the lowest seen.) Q. What is the chance of the 4th sample being within the range seen after the first three samples? (no duplicates, uniform distribution, picked at random) A. 50%. % = (1 - 1/(n - 1)) * 100

(Chart: 5% chance higher than the highest seen, 5% lower than the lowest seen.) Q. What is the chance of the 12th sample being within the range seen after the first eleven samples? (no duplicates, uniform distribution, picked at random) A. 90%. % = (1 - 1/(n - 1)) * 100

# Prior Samples | Prediction Next Sample Within Prior Sample Range
3 | 50%
4 | 67%
5 | 75%
6 | 80%
7 | 83%
8 | 86%
9 | 88%
10 | 89%
11 | 90%
12 | 91%
13 | 92%
15 | 93%
17 | 94%
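The table follows the heuristic formula shown on the preceding slides, (1 - 1/(n - 1)) * 100; a quick sketch (the helper name `pct` is mine) reproduces the percentages:

```python
def pct(n):
    """Deck's heuristic: chance the next sample falls inside the
    range of n prior samples = (1 - 1/(n - 1)) * 100, rounded."""
    return round((1 - 1 / (n - 1)) * 100)

# Reproduces the slide's table:
for n in (3, 4, 5, 11, 17):
    print(n, pct(n))  # 3 50, 4 67, 5 75, 11 90, 17 94
```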

Four people arrange a restaurant booking after work. Q. What is the chance they all arrive on time to be seated?

Commercial in confidence. Person 1, Person 2, Person 3, Person 4: a 1 in 16 chance that EVERYONE is ON-TIME; it is 15 TIMES more likely that at least one person is late.
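The 1-in-16 figure assumes each person is independently on time with probability 1/2; a minimal sketch under that assumption:

```python
from fractions import Fraction

p_on_time = Fraction(1, 2)            # assumed 50/50 chance per person
p_all_on_time = p_on_time ** 4        # four independent people
print(p_all_on_time)                  # 1/16

p_someone_late = 1 - p_all_on_time    # 15/16
print(p_someone_late / p_all_on_time) # 15 -> "15 times more likely"
```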


Estimate or #NoEstimate? 70% chance you need to forecast. (You just performed a probabilistic forecast! 3.5 cases out of 5.)

"We estimated every task for Project X and have determined it will take … days." "Oh, over 3 months, so we'll do Project Y instead." Perfect information is rarely sought or needed.

Economic Prioritization – same time, different value. (Chart: "Value", e.g. cost of not doing, against "Time", remaining build time/effort, for Products 1–3; in what order should they be completed?)

Economic Prioritization – same value, different time. (Chart: "Value", e.g. cost of not doing, against "Time", remaining build time/effort, for Products 1–3; in what order should they be completed?)

W.S.J.F. (Weighted Shortest Job First) = a prioritization heuristic to optimize cashflow: "do highest first". WSJF = Cost of Delay / Duration or Time Delayed (often, the remaining time to complete). This is the suggested economic prioritization, and it is what you are here to learn.

Forecasts are attempts to answer questions about future events. They are an estimate within a stated uncertainty. Examples: "85% chance of 15th August 2013"; "definitely greater than $1,000,000"; "at least 2 …".

There is NO single forecast result. Uncertainty In = Uncertainty Out. There will always be many possible results, some more likely than others, and this is the key to proper forecasting.

Probabilistic Forecasting combines many uncertain inputs to find many possible outcomes, and which outcomes are more likely than others. (Chart: time to complete backlog; 50% of possible outcomes.)

Did the Obama 2012 campaign fund advertising to achieve a 50% chance of re-election? (Chart: time to complete backlog; 85% / 15% split of possible outcomes.)

Task Uncertainty – Summing Variance. Source attribution: Aidan Lyon, Department of Philosophy, University of Maryland, College Park, "Why Normal Distributions Occur".

Use with attribution.

Decision-Induced Uncertainty. (Chart: forecast completion date across July–December, showing planned/due date versus actual date, with staff/dev cost and cost of delay.) Every choice we make changes the forecast.

How I Present Cycle Time Forecasts. What are my staffing options? When will my feature be done?

Current Team – 1 Dev:
5 Jul – 29 Jul: Feature 1
25 Jul – 16 Aug: Feature 2
3 Aug – 9 Sep: Feature 3
9 Sep – 4 Oct: Feature 4
2 Oct – 2 Nov: Feature 5
10 Nov – 7 Jan: Feature 6
8 Jan – 8 Mar: Feature 7

Project Staffing Options (Dev Staff | Test Staff | All Features Complete > 85% CI):
Full time staff | 7 | 3 | 8-Mar-2014
Current team | 8 | 3 | 8-Jan
Tester | 8 | 4 | 10-Dec
Devs | 10 | 4 | 26-Oct
Tester | 10 | 5 | 15-Oct-2013

MODELING AND CYCLE TIME. What is modelling and how to use cycle time.

A model is a tool used to mimic a real-world process. Models are tools for low-cost experimentation.

Success Hinges on Being Wrong Early. Initial goal: to enable decisions of staff size, cost and date for prioritization; accuracy means the eventual duration falls within the forecast set. Next goal: to mimic the real system accurately "enough" to perform reliable experiments (sensitivity testing), and to give the earliest warning of actual-versus-plan difference, because early actions have bigger impact; accuracy means getting better all the time (matching actuals).

Depth of Forecasting Models: from simple to diagnostic.

Simple Cycle Time Model: Amount of Work (# stories); Lead Time or Cycle Time; Parallel Work in Process (WIP); Random Chance / Risk / Delay.

Capturing Cycle Time and WIP. (Table: each story with its start date, completed date, and cycle time in days, e.g. started 1 Jan 2013, completed 15 Jan 2013.) Cycle Time (days) = Date "Complete" – Date "Started" = 14.

Capturing Cycle Time and WIP. (Table: the same stories, plus a date column from 1 Jan to 15 Jan.) WIP on a date = count of stories started but not completed on that date (e.g. 5).

Capturing Cycle Time and WIP. (Same story table.) Daily WIP: 1 Jan: 1; 3 Jan: 2; 4 Jan: 2; 5 Jan: 3; 6 Jan: 4; 7 Jan: 5; 8 Jan: 5; 9 Jan: 5; 10 Jan: 7; …; 15 Jan: 7.

Note: cycle time is sometimes referred to as lead time, but we are not going to discuss this now! If you have captured start and complete dates you can calculate cycle time and WIP. Cycle Time for Story/Epic = Date "Complete" – Date "Started". Work in Process at Date x = Count of ("Started" < Date x & "Completed" > Date x).
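The two definitions above can be sketched directly (the dates below are hypothetical, not the slide's table):

```python
from datetime import date

# (start, completed) per story -- hypothetical data
stories = [
    (date(2013, 1, 1), date(2013, 1, 15)),
    (date(2013, 1, 3), date(2013, 1, 9)),
    (date(2013, 1, 5), date(2013, 1, 20)),
]

# Cycle time = Date "Complete" - Date "Started"
cycle_times = [(done - start).days for start, done in stories]
print(cycle_times)  # [14, 6, 15]

# WIP at date x = count of stories started before x and completed after x
def wip(x):
    return sum(1 for start, done in stories if start < x < done)

print(wip(date(2013, 1, 7)))  # 3 stories in process on 7 Jan
```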

Basic Cycle Time Forecast Monte Carlo Process: 1. Gather historical story lead times. 2. Build a set of random numbers based on that pattern. 3. Sum one random number for each remaining story to build a potential outcome. 4. Repeat many times to find the likelihood (odds) of each outcome and build a pattern of likely outcomes. (Chart: historical story cycle time trend; histogram of days to complete, from "more often" to "less often".)
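Steps 1–4 can be sketched as follows (the history values are hypothetical, and this simple version sums stories sequentially, ignoring parallel WIP):

```python
import random

random.seed(7)
history = [2, 3, 3, 4, 5, 5, 6, 8, 9, 13, 21]  # hypothetical cycle times (days)
remaining_stories = 20
trials = 10_000

# One trial: sum a randomly drawn historical cycle time per remaining story;
# many trials build the pattern of likely outcomes.
totals = sorted(
    sum(random.choice(history) for _ in range(remaining_stories))
    for _ in range(trials)
)

# Read likelihoods off the sorted outcomes
for p in (50, 85, 95):
    print(f"{p}% of trials finished within {totals[trials * p // 100 - 1]} days")
```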

Monte Carlo Analysis = a process to combine multiple uncertain measurements/estimates: 1. Historical cycle time. 2. Planned resources/WIP. 3. The work (backlog: Feature 1, Feature 2, Feature 3). 4. Historical scope creep rate (optional). 5. Historical defect rate and cycle times (optional). 6. …


How certain we are, based on the model forecast; further calculations to make economic tradeoffs.


(Chart: Y-axis: number of completed stories; X-axis: date. Project-complete likelihood shown as a range of complete-stories probability: 0 to 50%, 50 to 75%, > 75%.)

FORECASTING STRATEGIES

When you have historical data: 1. Model – baseline using historically known truths. 2. Test – the model against historically known truths. 3. Forecast. (Timeline: January–September, divided into the past and the future.)

Compare Model vs Actual Often

(Chart: range of completion probability, with actual results overlaid to compare whether the model is predictive.)

Forecast Trend Report Card.

Verdict: Watch | GREAT | Watch | Ok | Bad | Watch | WORRY | Watch
Cycle Time Trend versus Model: ↘ ↘ ↗ ↗ ↘ ↘ ↗ ↗
ACTIVE WIP Actual vs Model: ↘ ↗ ↘ ↗ ↘ ↗ ↘ ↗
Story Count Trend versus Model: ↘ ↘ ↘ ↘ ↗ ↗ ↗ ↗

Arrow legend: green = heading in a beneficial direction; red = heading in a damaging direction. For "Watch", the WIP and story count trends need to be analyzed; they are a 1:1 ratio of compensating importance and may offset each other.

Forecast Trend Report Card. (Table: for each of Cycle Time Trend, Avg. WIP Actual vs Model, Story Count Trend, and Future Feature Difficulty vs Prior, record Last Review →/↗/↘, This Review →/↗/↘, and Actions.)

When you have no historical data. (Timeline: January–September, all in the future.)

Simple Cycle Time Model: Amount of Work (# stories, scope creep, defects); Cycle Time (range guess or samples); Parallel Work in Process (WIP on the board).

Data Needed for Cycle Time Model. Cycle Time: range guess or history. Amount of Work: number of stories (per feature), scope creep, defect rate. Concurrent WIP: number of cards across all lanes on the board.

If we understand how cycle time is statistically distributed, then an initial guess of the maximum allows an inference to be made. Alternatives: borrow a similar project's data; borrow industry data; fake it until you make it… (AKA guess).

Note: histogram from actual data.

Exponential Distribution (Weibull shape = 1): the person who gets the work can complete the work; teams with no external dependencies; teams doing repetitive work, e.g. DevOps or database teams.

Weibull Distribution (shape = 1.5): a typical dev team ranges between 1.2 and 1.8.

Rayleigh Distribution (Weibull shape = 2): teams with MANY external dependencies; teams that have many delays and re-work, e.g. test teams.

Scale – how wide the range is; related to the upper bound. *Rough* guess: (High – Low) / 4. Shape – how fat the distribution is; 1.5 is a good starting point. Location – the lower bound.
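Under these rules of thumb, a three-parameter Weibull can be sampled with the standard library by shifting by the location; the bounds here are illustrative guesses:

```python
import random

random.seed(1)
low, high = 1, 30          # guessed lower / upper bounds (days)
location = low             # location = the lower bound
scale = (high - low) / 4   # rough guess: (High - Low) / 4
shape = 1.5                # good starting point; dev teams ~1.2-1.8

samples = [location + random.weibullvariate(scale, shape) for _ in range(1000)]
print(min(samples) >= location)  # True: location shifts the whole distribution
```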

What Distribution To Use…
No data at all, or fewer than 11 samples (why 11? see the prediction-interval table): Uniform range with boundaries guessed (safest); Weibull range with boundaries guessed (likely).
11 to 30 samples: Uniform range with boundaries at the 5th and 95th CI; Weibull range with boundaries at the 5th and 95th CI.
More than 30 samples: use historical data as a bootstrap reference; curve-fitting software.

Questions… Download the slides (soon) and software at … Contact me – … Read: …

CURRENT FORECASTING PRACTICES. Why traditional forecasting practices fail to deliver, and why this is the only outcome possible…

Total Story Lead Time: 30 days. Development Time: 5 days (~15%). Testing Time: 2 days. Defect Rework: 2 days. Release / DevOps Time: 1 day. Blocked and Waiting Time: 9 days. Waiting Time: 3 days. Waiting Time: 8 days.



Total Story Lead Time: pre-work 30 days (story/feature inception 5 days + waiting in backlog 25 days); "active development" 30 days; post-work 10 days (system regression testing & staging 5 days + waiting for release window 5 days). 9 days (of 70 total) ≈ 13%.

Any forecasting process that solely relates feature requirements to time will be unreliable: > 85% of the time in feature delivery has NO relationship to the feature/story being developed and released, and cannot be estimated in advance.

ESTIMATING DISTRIBUTIONS USING HISTORIC DATA. How to use historic data to estimate the different distributions.

What Distribution To Use…
No data at all, or fewer than 11 samples (why 11? see the prediction-interval table): Uniform range with boundaries guessed (safest); Weibull range with boundaries guessed (likely).
11 to 50 samples: Uniform range with boundaries at the 5th and 95th CI; Weibull range with boundaries at the 5th and 95th CI; bootstrapping (random sampling with replacement).
More than 100 samples: use historical data at random without replacement; curve fitting.
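The bootstrapping option (random sampling with replacement) can be sketched as follows; the history values are hypothetical:

```python
import random

random.seed(3)
history = [2, 3, 3, 4, 5, 6, 8, 9, 13, 21, 34]  # hypothetical cycle times (days)

def bootstrap_sample(data):
    """Random sampling WITH replacement: same size as the original,
    but individual values may repeat or be missing."""
    return [random.choice(data) for _ in data]

# e.g. a bootstrap view of how the mean cycle time varies
means = sorted(
    sum(bootstrap_sample(history)) / len(history) for _ in range(5000)
)
print(means[125], means[4875])  # rough 5th and 95th percentile of the mean
```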

Tools:
EasyFit – from $499 – http://mathwave.com
R – free – http://cran.r-project.org
Statistics feature of KanbanSim – free
Excel, of course
Monte Carlo features of KanbanSim – $
Google for Monte Carlo simulation tools: Oracle, Palisade, Hubbard Research (he may be here, so a big recommendation!)

Sampling-at-Random Strategies. If you pick which samples to use, you bias the prediction… Strategies for proper random sampling: use something you know is random (dice, darts); pick two groups using your chosen technique, compute your prediction separately for each, and compare; don't pre-filter to remove "outliers"; don't sort the data (in fact, randomize more if possible).

Estimating Concurrent Effort from a Cumulative Flow Chart. Concurrent WIP sample: find the smallest and the biggest, or take at least 11 samples to be 90% sure of the range.

Scale – how wide the range is; related to the upper bound. *Rough* guess: (High – Low) / 4. Shape – how fat the distribution is; 1.5 is a good starting point. Location – the lower bound.


Scope Creep Over Time: look at the rate at which new scope is added over time.

PREDICTION INTERVALS. What is the likelihood the next sample I take will be within the range of previous samples?


(Chart: with three samples drawn, the 4th sample has a 25% chance of being higher than the highest seen, 25% of falling between the highest and the second highest, 25% of falling between the lowest and the second lowest, and 25% of being lower than the lowest seen.) Q. What is the chance of the 4th sample being within the range seen after the first three samples? (no duplicates, uniform distribution, picked at random) A. 50%. % = (1 - 1/(n - 1)) * 100

(Chart: 5% chance higher than the highest seen, 5% lower than the lowest seen.) Q. What is the chance of the 12th sample being within the range seen after the first eleven samples? (no duplicates, uniform distribution, picked at random) A. 90%. % = (1 - 1/(n - 1)) * 100

# Prior Samples | Prediction Next Sample Within Prior Sample Range
3 | 50%
4 | 67%
5 | 75%
6 | 80%
7 | 83%
8 | 86%
9 | 88%
10 | 89%
11 | 90%
12 | 91%
13 | 92%
15 | 93%
17 | 94%
20 | 95%

THE SHAPE OF CYCLE TIME. What distribution fits cycle time data, and why…

Note: histogram from actual data.


Why Weibull. Now for some math – I know, I'm excited too! Simple model: all units of work take between 1 and 3 days; a unit of work can be a task, story, feature, or project. Base scope of 50 units of work – always normal. 5 delays/risks, each with a 25% likelihood of occurring and 10 units of work (the same as a 20% scope increase each).

Normal, or it will be after a few thousand more simulations

Base + 1 Delay

Base + 2 Delays

Base + 3 Delays

Base + 4 Delays

Base + 5 Delays
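The simple model above can be simulated directly; a sketch (parameter values from the slides, everything else mine) shows how the mixture of delay outcomes drags the mean above the median, i.e. the right skew that makes Weibull a better fit than normal:

```python
import random

random.seed(42)

def simulate_once():
    units = 50                      # base scope, always present
    for _ in range(5):              # 5 delays / risks
        if random.random() < 0.25:  # each 25% likely to occur
            units += 10             # each adds 10 units of work
    # every unit of work takes between 1 and 3 days
    return sum(random.uniform(1, 3) for _ in range(units))

durations = sorted(simulate_once() for _ in range(10_000))
mean = sum(durations) / len(durations)
median = durations[len(durations) // 2]
print(mean > median)  # True: the delay mixture skews the tail to the right
```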

ASSESSING AND COMMUNICATING RISK

Speaking Risk To Executives. Buy them a copy of "The Flaw of Averages". Show them you are tracking & managing risk. Do: "We are 95% certain of hitting date x"; "With 1 week of analysis, that may drop to date y"; "We identified risks x, y & z that we will track weekly". Don't: give them a date without likelihood ("February 29th 2013"); give them a date without the risk factors considered ("To do the backlog of features, February 29th, 2013").

We spend all our time estimating here [the plan], but major risk events (performance issues, external vendor delays) play the predominant role in deciding where delivery actually occurs.

Risk likelihood changes constantly. (Charts: the …th confidence interval shifting over time as risks are updated.)


All Factors are Related: Little's Law. It gives us an intuitive approach to understanding the relationships between model inputs; it helps us understand that model inputs are correlated and need to be paired; and it helps us identify if we are trending long on actual versus model. A significant source of error: updating the data for one factor and not the others.
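Little's Law (average WIP = throughput × average cycle time) is why the inputs must be updated as a pair; a toy sketch with made-up numbers:

```python
# Little's Law: avg_wip = throughput * avg_cycle_time
avg_wip = 6.0       # cards in process (hypothetical)
throughput = 0.5    # stories completed per day (hypothetical)

avg_cycle_time = avg_wip / throughput
print(avg_cycle_time)  # 12.0 days

# Updating WIP in the model without re-pairing the other factors
# silently changes the implied cycle time:
new_wip = 9.0                 # team takes on more parallel work
print(new_wip / throughput)   # 18.0 days -- cycle time grew too
```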

Models Start with Assumptions. Cycle Time: guessed; last release's performance; another team's performance. Amount of Work: # stories guessed; scope creep guessed; similar sizing distribution to last release assumed. Concurrent WIP: guessed; measured from last release; similar process to last release assumed.

(Chart: forecast completion date across July–December, with planned/due date; staffing options A, B and C each have their own actual date and cost to develop – Staff C: $, Staff B: $$, Staff A: $$$$$$$$.)

Can't I just use Average Cycle Time? Projection of cycle time to complete 25 stories in a sequential fashion, based upon historical cycle time data. Forecast (days): Median: 525; Mean: 670; MC >50%: 677; MC >85%: 793; MC >95%: 851.
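With right-skewed cycle times the average understates the tail; a sketch with synthetic data (not the slide's numbers) shows the shape of the comparison:

```python
import random

random.seed(5)
# Synthetic right-skewed cycle-time history (days); not the slide's data
history = [round(1 + random.weibullvariate(6, 1.5)) for _ in range(100)]

n_stories = 25
mean_days = sum(history) / len(history)
mean_est = mean_days * n_stories   # "average x stories" point estimate

totals = sorted(
    sum(random.choice(history) for _ in range(n_stories))
    for _ in range(10_000)
)
mc_50 = totals[4_999]              # ~50% confidence outcome
mc_85 = totals[8_499]              # ~85% confidence outcome

# The average-based estimate sits near the 50% mark; committing to it
# means missing the date roughly half the time.
print(mean_est, mc_50, mc_85)
```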

Step 1: Generate random numbers (Uncertain Number 1, Uncertain Number 2). Step 2: Combine (sum or another formula). Step 3: Analyze the results.

(Chart: forecast completion date across July–December, with planned/due date and actual date; cost to develop ($) versus cost of delay ($$$).)

Total Cycle Time spans: story/feature inception (5 days) → waiting in backlog (25 days) → "active development" (30 days) → system regression testing & staging (5 days) → waiting for release window (5 days).

Dev Lead Time and Test cover subsets of the same flow: story/feature inception (5 days) → waiting in backlog (25 days) → "active development" (30 days) → system regression testing & staging (5 days) → waiting for release window (5 days).

A Process to Combine Multiple Uncertain Measurements / Estimates is Needed: 1. Historical cycle time. 2. Planned resources/effort. 3. The work (backlog: Features 1–3, each Design → Develop → Test). 4. Historical scope creep rate (optional). 5. Historical defect rate & cycle times (optional).