Cycle Time Analytics: Making decisions using Lead Time and Cycle Time to avoid needing estimates for every story. Troy Magennis. Slides at bit.ly/agilesim.


1 Cycle Time Analytics: Making decisions using Lead Time and Cycle Time to avoid needing estimates for every story. Troy Magennis (@t_magennis). Slides at bit.ly/agilesim. LKCE 2013 – Modern Management Methods


3 Q. What is the chance of the 4th sample being within the range seen after the first three samples? (No duplicates, uniform distribution, picked at random.) [Chart: samples 1–3 between an actual minimum and actual maximum; sample 4 unknown]

4 Q. What is the chance of the 4th sample being within the range seen after the first three samples? (No duplicates, uniform distribution, picked at random.) [Chart: sample 4 could land in any of the four regions defined by the highest and lowest samples seen]

5 A. 50%. The three samples seen so far split the space into four equally likely regions for the 4th sample: 25% chance higher than the highest seen; 25% lower than the highest and higher than the second highest; 25% higher than the lowest and lower than the second lowest; 25% lower than the lowest seen. In general: % = (1 - 1/(n - 1)) * 100.
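The 50% answer for three prior samples can be checked with a quick simulation (a sketch only; the trial count and seed are arbitrary choices, not from the slides):

```python
import random

def chance_within_range(n_prior=3, trials=100_000, seed=1):
    """Estimate the chance the next uniform sample lands strictly
    between the minimum and maximum of n_prior earlier samples."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        prior = [rng.random() for _ in range(n_prior)]
        nxt = rng.random()
        if min(prior) < nxt < max(prior):
            hits += 1
    return hits / trials

# With three prior samples the estimate comes out close to 0.5
```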

6 Q. What is the chance of the 12th sample being within the range seen after the first eleven samples? (No duplicates, uniform distribution, picked at random.) A. 90%: 5% chance higher than the highest seen, 5% lower than the lowest seen. % = (1 - 1/(n - 1)) * 100.

7 Prediction that the next sample falls within the prior sample range:

# Prior Samples | Prediction Next Sample Within Prior Sample Range
3  | 50%
4  | 67%
5  | 75%
6  | 80%
7  | 83%
8  | 86%
9  | 88%
10 | 89%
11 | 90%
12 | 91%
13 | 92%
15 | 93%
17 | 94%
20 | 95%
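The table follows directly from the slide's rule of thumb; a small sketch (the helper name is mine, not the speaker's) reproduces every row:

```python
def prediction_within_range(n_prior):
    """Slide's rule: % chance the next sample falls within the
    range of n_prior previous samples: (1 - 1/(n - 1)) * 100."""
    return (1 - 1 / (n_prior - 1)) * 100

# Rounded to whole percent, this matches the slide's table
table = {n: round(prediction_within_range(n))
         for n in (3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 17, 20)}
```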

8 Four people arrange a restaurant booking after work. Q. What is the chance they arrive on time to be seated?

9 Commercial in confidence. Person 1 | Person 2 | Person 3 | Person 4. There is a 1 in 16 chance EVERYONE is on time; it is 15 times more likely that at least one person is late.
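The arithmetic behind the 1-in-16 figure, assuming each person independently has a 50/50 chance of arriving on time (that per-person probability is the assumption implied by the slide, not stated on it):

```python
p_on_time = 0.5   # assumed per-person chance of being on time
people = 4

p_everyone_on_time = p_on_time ** people      # 1/16
p_someone_late = 1 - p_everyone_on_time       # 15/16
ratio = p_someone_late / p_everyone_on_time   # 15x more likely
```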


11 Estimate or #NoEstimate? 70% chance you need to forecast. (You just performed a probabilistic forecast: 3.5 cases out of 5.)

12 "We estimated every task for Project X and have determined it will take 213.5 days." "Oh, over 3 months, so we'll do Project Y instead." Perfect information is rarely sought or needed.

13 Economic Prioritization: same time, different value. [Chart: "Value" (e.g. cost of not doing) vs "Time" (remaining build time/effort) for Products 1–3; completion order: 3, 2, 1]

14 Economic Prioritization: same value, different time. [Chart: "Value" (e.g. cost of not doing) vs "Time" (remaining build time/effort) for Products 1–3; completion order: 1, 2, 3]

15 Suggested economic prioritization to learn here: W.S.J.F., a prioritization heuristic to optimize cashflow ("Do Highest First"): Cost of Delay divided by Duration, or Time Delayed (often, the remaining time to complete).

16 Forecasts are attempts to answer questions about future events. They are an estimate within a stated uncertainty: "85% chance of 15th August 2013"; "Definitely greater than $1,000,000"; "At least 2 teams".

17 There is NO single forecast result: uncertainty in = uncertainty out. There will always be many possible results, some more likely than others, and this is the key to proper forecasting.

18 Probabilistic forecasting combines many uncertain inputs to find the many possible outcomes, and which outcomes are more likely than others. [Chart: likelihood vs time to complete backlog, with the 50% point marked]

19 Did the Obama 2012 campaign fund advertising to achieve a 50% chance of re-election? [Chart: likelihood vs time to complete backlog, with the 85% and 15% points marked]

20 Task Uncertainty: Summing Variance. [Diagram: tasks 1–4.] Source attribution: Aidan Lyon, Department of Philosophy, University of Maryland, College Park, "Why Normal Distributions Occur", http://aidanlyon.com/sites/default/files/Lyon-normal_distributions.pdf


22 Decision-Induced Uncertainty: every choice we make changes the outcome. [Chart: forecast completion date (July–December) with planned/due date and actual date; staff/dev cost vs cost of delay]

23 How I Present Cycle Time Forecasts. When will my feature ship? (Current Team - 1 Dev)

From   | To     | Description
5 Jul  | 29 Jul | Feature 1
25 Jul | 16 Aug | Feature 2
3 Aug  | 9 Sep  | Feature 3
9 Sep  | 4 Oct  | Feature 4
2 Oct  | 2 Nov  | Feature 5
10 Nov | 7 Jan  | Feature 6
8 Jan  | 8 Mar  | Feature 7

What are my staffing options?

Project Staffing Option | Dev Staff | Test Staff | All Features Complete > 85% CI
Full time staff | 7  | 3 | 8-Mar-2014
Current team    | 8  | 3 | 8-Jan-2014
+ 1 Tester      | 8  | 4 | 10-Dec-2013
+ 2 Devs        | 10 | 4 | 26-Oct-2013
+ 1 Tester      | 10 | 5 | 15-Oct-2013

24 MODELING AND CYCLE TIME: what modeling is and how to use cycle time.

25 A model is a tool used to mimic a real-world process. Models are tools for low-cost experimentation.

26 Success hinges on being wrong early. Initial goal: enable decisions on staff size, cost and date for prioritization; accuracy means the eventual duration falls within the forecast set. Next goal: mimic the real system accurately "enough" to perform reliable experiments (sensitivity testing), and give the earliest warning of actual-versus-plan differences (early actions have bigger impact); accuracy means getting better all the time (matching actuals).

27 Depth of forecasting models. [Chart: spectrum from simple to diagnostic]

28 Simple Cycle Time Model: amount of work (# stories); lead time or cycle time; parallel work in process (WIP); random chance / risk / stupidity.

29 Capturing Cycle Time and WIP. Cycle time (days) = date "complete" - date "started"; e.g. story 1, started 1 Jan 2013 and completed 15 Jan 2013, has a cycle time of 14 days.

30 Capturing Cycle Time and WIP. WIP on a date = count of stories started but not yet completed; e.g. five stories in process on one of the sample dates.

31 Capturing Cycle Time and WIP.

Story | Start Date  | Completed Date | Cycle Time (days)
1     | 1 Jan 2013  | 15 Jan 2013    | 14
2     | 5 Jan 2013  | 12 Jan 2013    | 7
3     | 5 Jan 2013  |                |
4     | 6 Jan 2013  |                |
5     | 3 Jan 2013  | 7 Jan 2013     | 4
6     |             | 18 Feb 2013    | 42
7     | 10 Jan 2013 | 22 Jan 2013    | 12
8     | 10 Jan 2013 | 18 Jan 2013    | 8
9     | 13 Jan 2013 | 26 Jan 2013    | 13
10    | 15 Jan 2013 |                |

Date   | WIP
1 Jan  | 1
3 Jan  | 2
4 Jan  | 2
5 Jan  | 3
6 Jan  | 4
7 Jan  | 5
8 Jan  | 5
9 Jan  | 5
10 Jan | 7
…      | …
15 Jan | 7

32 If you have captured start and complete dates you can calculate cycle time and WIP. Cycle Time for a Story/Epic = Date "Complete" - Date "Started". Work in Process at Date x = Count of (Started < Date x & Completed > Date x). Note: cycle time is sometimes referred to as lead time, but we are not going to discuss that now.
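Both formulas can be sketched directly. The dates below are from the sample table (the story with a missing start date is left out), and the boundary convention for "in process on date x" is my assumption, not the slide's:

```python
from datetime import date

# (start, completed) pairs; None marks a story not yet completed
stories = [
    (date(2013, 1, 1),  date(2013, 1, 15)),
    (date(2013, 1, 5),  date(2013, 1, 12)),
    (date(2013, 1, 5),  None),
    (date(2013, 1, 6),  None),
    (date(2013, 1, 3),  date(2013, 1, 7)),
    (date(2013, 1, 10), date(2013, 1, 22)),
    (date(2013, 1, 10), date(2013, 1, 18)),
    (date(2013, 1, 13), date(2013, 1, 26)),
    (date(2013, 1, 15), None),
]

def cycle_time(start, completed):
    """Cycle time = date complete - date started, in days."""
    return (completed - start).days if completed else None

def wip(day):
    """Stories started on or before `day` and not completed by it."""
    return sum(1 for s, c in stories if s <= day and (c is None or c > day))
```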

33 Basic Cycle Time Forecast: Monte Carlo Process. 1. Gather historical story lead times. 2. Build a set of random numbers based on that pattern. 3. Sum one random number for each remaining story to build a potential outcome. 4. Repeat many times to find the likelihood (odds) of each outcome and build a pattern of likely outcomes. [Illustration: historical story cycle time trend (days to complete); rows of sampled cycle times, e.g. 25 11 29 43 34 26 31 45 22 27, and their sums; histogram of outcomes, more often / less often]
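The four steps above can be sketched with sampling-with-replacement from the history (the history values echo the slide's illustration; the trial count and seed are arbitrary, and summing assumes stories are worked one at a time):

```python
import random

history = [25, 11, 29, 43, 34, 26, 31, 45, 22, 27]  # historical cycle times, days

def forecast(n_stories, trials=10_000, seed=7):
    """Steps 2-4: draw one historical cycle time per remaining story,
    sum them into one candidate outcome, and repeat many times."""
    rng = random.Random(seed)
    return sorted(sum(rng.choice(history) for _ in range(n_stories))
                  for _ in range(trials))

outcomes = forecast(30)
p50 = outcomes[len(outcomes) // 2]         # median ("50% likely by")
p85 = outcomes[int(0.85 * len(outcomes))]  # 85th percentile
```

To approximate parallel work, a common adjustment is to divide the total by the WIP level rather than treating the stories as strictly sequential.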

34 Monte Carlo analysis = a process to combine multiple uncertain measurements/estimates: 1. historical cycle time; 2. planned resources/WIP; 3. the work (backlog: Feature 1, Feature 2, Feature 3); 4. historical scope creep rate (optional); 5. historical defect rate and cycle times (optional); 6. phases.


36 [Chart: how certain, based on the model forecast; further calculations to make economic tradeoffs]


38 [Burn-up chart. X-axis: date; Y-axis: number of completed stories; bands for the probability of the completed-story range: 0–50%, 50–75%, >75%; project complete likelihood marked]

39 FORECASTING STRATEGIES

40 When you have historical data: 1. Model: baseline using historically known truths. 2. Test: model against historically known truths. 3. Forecast. [Timeline: January–September, split into the past and the future]

41 Compare Model vs Actual Often

42 [Chart: range of completion probability, with actual results overlaid to check whether the model is predictive]

43 Forecast Trend Report Card.

Verdict | Cycle Time Trend vs Model | Active WIP Actual vs Model | Story Count Trend vs Model
Watch | ↘ | ↘ | ↘
GREAT | ↘ | ↗ | ↘
Watch | ↗ | ↘ | ↘
Ok    | ↗ | ↗ | ↘
Bad   | ↘ | ↘ | ↗
Watch | ↘ | ↗ | ↗
WORRY | ↗ | ↘ | ↗
Watch | ↗ | ↗ | ↗

Arrow legend: green = heading in a beneficial direction; red = heading in a damaging direction. For "Watch", the WIP and story count trends need to be analyzed: they are a 1:1 ratio of compensating importance and may offset each other.

44 Forecast Trend Report Card (template). Rows: cycle time trend; avg. WIP actual vs model; story count trend; future feature difficulty vs prior. Columns: last review (→/↗/↘), this review (→/↗/↘), actions.

45 When you have no historical data. [Timeline: January–September, all of it the future]

46 Simple Cycle Time Model: amount of work (# stories, scope creep, defects); cycle time (range guess or samples); parallel work in process (WIP on the board).

47 Data needed for the cycle time model. Cycle time: range guess or history. Amount of work: number of stories (per feature), scope creep, defect rate. Concurrent WIP: number of cards across all lanes on the board.

48 If we understand how cycle time is statistically distributed, then an initial guess of the maximum allows an inference to be made. Alternatives: borrow a similar project's data; borrow industry data; fake it until you make it (AKA guess a range).

49 [Histogram of cycle time, from actual data]

50 Exponential distribution (Weibull shape = 1): the person who gets the work can complete the work; teams with no external dependencies; teams doing repetitive work. E.g. DevOps and database teams.

51 Weibull distribution (shape = 1.5): a typical dev team; shape ranges between 1.2 and 1.8.

52 Rayleigh distribution (Weibull shape = 2): teams with MANY external dependencies; teams that have many delays and rework. E.g. test teams.

53 Weibull parameters. Scale: how wide the range is; related to the upper bound; *rough* guess: (High - Low) / 4. Shape: how fat the distribution is; 1.5 is a good starting point. Location: the lower bound.
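A sketch of those rules of thumb; the low/high bounds here are made-up example guesses, not values from the talk:

```python
import random

low, high = 1.0, 30.0    # guessed cycle-time bounds (example values)
location = low            # location = the lower bound
scale = (high - low) / 4  # the slide's *rough* guess for scale
shape = 1.5               # a good starting point for a dev team

def sample_cycle_time(rng):
    """Draw one cycle time from the shifted Weibull distribution."""
    return location + rng.weibullvariate(scale, shape)

rng = random.Random(5)
samples = [sample_cycle_time(rng) for _ in range(10_000)]
```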

54 What distribution to use… No data at all, or fewer than 11 samples (why 11? see the prediction-interval table): uniform range with boundaries guessed (safest), or Weibull range with boundaries guessed (likely). 11 to 30 samples: uniform range with boundaries at the 5th and 95th percentile CI, or Weibull range with boundaries at the 5th and 95th percentile CI. More than 30 samples: use historical data as a bootstrap reference, or curve-fitting software.

55 Questions… Download the slides (soon) and software at http://bit.ly/agilesim. Contact me: email troy.magennis@focusedobjective.com; Twitter @t_Magennis.

56 CURRENT FORECASTING PRACTICES: why traditional forecasting practices fail to deliver, and why this is the only outcome possible…

57 Total story lead time: 30 days. Development time: 5 days (~15%). Testing time: 2 days. Defect rework: 2 days. Release/DevOps time: 1 day. Blocked and waiting time: 9 days. Waiting time: 3 days. Waiting time: 8 days.


59 Total story lead time: 30 days of "active development". Pre-work: 30 days (story/feature inception 5 days, waiting in backlog 25 days). Post-work: 10 days (system regression testing & staging 5 days, waiting for release window 5 days).

60 Same breakdown: pre-work 30 days, "active development" 30 days, post-work 10 days. Of the 70 total days, 9 are time actually spent working the story: approximately 13%.

61 Any forecasting process that solely relates feature requirements to time will be unreliable: more than 85% of the time in feature delivery has NO relationship to the feature/story being developed and released, and cannot be estimated in advance.


63 ESTIMATING DISTRIBUTIONS USING HISTORIC DATA: how to use historic data to estimate the different distributions.

64 What distribution to use… No data at all, or fewer than 11 samples (why 11? see the prediction-interval table): uniform range with boundaries guessed (safest), or Weibull range with boundaries guessed (likely). 11 to 50 samples: uniform range with boundaries at the 5th and 95th percentile CI; Weibull range with boundaries at the 5th and 95th percentile CI; or bootstrapping (random sampling with replacement). More than 100 samples: use historical data at random without replacement, or curve fitting.
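For the 11-to-50-sample case, the 5th and 95th percentile boundaries can be read straight from the sorted data. A simple nearest-rank sketch (one of several reasonable percentile conventions):

```python
def percentile_bounds(samples, lo=5, hi=95):
    """Nearest-rank 5th and 95th percentiles of the sample set,
    usable as boundaries for a Uniform or Weibull range."""
    s = sorted(samples)

    def pct(p):
        return s[min(len(s) - 1, round(p / 100 * (len(s) - 1)))]

    return pct(lo), pct(hi)

# e.g. cycle times 0..100 give bounds (5, 95)
```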

65 Tools. EasyFit from http://mathwave.com ($499). R, http://cran.r-project.org (free). Statistics feature of KanbanSim, http://focusedobjective.com (free). Excel, of course. Monte Carlo features of KanbanSim ($495–995). Google for Monte Carlo simulation tools: Oracle, Palisade, Hubbard Research (he may be here, so a big recommendation!).

66 Sampling-at-random strategies. If you pick which samples to use, you bias the prediction. For proper random sampling: use something you know is random (dice, darts); pick two groups using your chosen technique, compute your prediction for each separately, and compare; don't pre-filter to remove "outliers"; don't sort the data (in fact, randomize more if possible).

67 Estimating concurrent effort from a cumulative flow chart. Concurrent WIP sample: find the smallest and the biggest, or take at least 11 samples to be 90% sure of the range.

68 Weibull parameters. Scale: how wide the range is; related to the upper bound; *rough* guess: (High - Low) / 4. Shape: how fat the distribution is; 1.5 is a good starting point. Location: the lower bound.

69 [Chart: distribution marked at the 20%, 40%, 60% and 80% percentiles]

70 Scope creep over time: look at the rate new scope is added over time.

71 PREDICTION INTERVALS: what is the likelihood that the next sample I take will be within the range of previous samples?

72 Q. What is the chance of the 4th sample being within the range seen after the first three samples? (No duplicates, uniform distribution, picked at random.) [Chart: samples 1–3 between an actual minimum and actual maximum; sample 4 unknown]

73 Q. What is the chance of the 4th sample being within the range seen after the first three samples? (No duplicates, uniform distribution, picked at random.) [Chart: sample 4 could land in any of the four regions defined by the highest and lowest samples seen]

74 A. 50%. The three samples seen so far split the space into four equally likely regions for the 4th sample: 25% chance higher than the highest seen; 25% lower than the highest and higher than the second highest; 25% higher than the lowest and lower than the second lowest; 25% lower than the lowest seen. In general: % = (1 - 1/(n - 1)) * 100.

75 Q. What is the chance of the 12th sample being within the range seen after the first eleven samples? (No duplicates, uniform distribution, picked at random.) A. 90%: 5% chance higher than the highest seen, 5% lower than the lowest seen. % = (1 - 1/(n - 1)) * 100.

76 Prediction that the next sample falls within the prior sample range:

# Prior Samples | Prediction Next Sample Within Prior Sample Range
3  | 50%
4  | 67%
5  | 75%
6  | 80%
7  | 83%
8  | 86%
9  | 88%
10 | 89%
11 | 90%
12 | 91%
13 | 92%
15 | 93%
17 | 94%
20 | 95%

77 THE SHAPE OF CYCLE TIME: what distribution fits cycle time data, and why…

78 [Histogram of cycle time, from actual data]


80 Why Weibull? Now for some math (I know, I'm excited too!). A simple model: all units of work take between 1 and 3 days; a unit of work can be a task, story, feature or project; a base scope of 50 units of work, which on its own is always normal; 5 delays/risks, each with a 25% likelihood of occurring and 10 units of work (each the same as a 20% scope increase).
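The toy model above is easy to simulate; with the risks included, the outcome distribution picks up the right skew the following slides illustrate. The seed and trial count here are arbitrary:

```python
import random

def one_release(rng):
    """The slide's model: 50 base units of 1-3 days, plus five
    risks, each 25% likely, each adding 10 more units of work."""
    units = 50 + sum(10 for _ in range(5) if rng.random() < 0.25)
    return sum(rng.uniform(1, 3) for _ in range(units))

rng = random.Random(11)
durations = sorted(one_release(rng) for _ in range(20_000))
mean = sum(durations) / len(durations)
median = durations[len(durations) // 2]
# Right skew: the risk tail drags the mean above the median
```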

81 Normal, or it will be after a few thousand more simulations

82 Base + 1 Delay

83 Base + 2 Delays

84 Base + 3 Delays

85 Base + 4 Delays

86 Base + 5 Delays

87 ASSESSING AND COMMUNICATING RISK

88 Speaking risk to executives. Buy them a copy of "The Flaw of Averages". Show them you are tracking and managing risk. Do: "We are 95% certain of hitting date x"; "With 1 week of analysis, that may drop to date y"; "We identified risks x, y and z that we will track weekly". Don't: give them a date without a likelihood ("February 29th 2013"); give them a date without the risk factors considered ("To do the backlog of features, February 29th, 2013").

89 We spend all our time estimating the plan [1], but major risk events, e.g. performance issues [2] and external vendor delays [3], have the predominant role in deciding where delivery actually occurs.

90 Risk likelihood changes constantly. [Chart: risks 1–3 against the 95th confidence interval]


94 All factors are related: Little's Law. It gives us an intuitive approach to understanding the relationships between model inputs; helps us understand that model inputs are correlated and need to be paired; and helps us identify if we are trending long on actual versus model. A significant source of error: updating data for one factor and not the others.
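Little's Law is the pairing check: average WIP = throughput × average cycle time. A one-line sketch (the example numbers are illustrative):

```python
def expected_wip(throughput_per_day, avg_cycle_time_days):
    """Little's Law: average WIP implied by throughput and cycle time."""
    return throughput_per_day * avg_cycle_time_days

# Finishing 2 stories/day with a 10-day average cycle time implies
# about 20 stories in process; if measured WIP drifts away from
# this, one of the model's inputs is stale and needs re-pairing.
```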

95 Models start with assumptions. Cycle time: guessed; last release's performance; another team's performance. Amount of work: # stories guessed; scope creep guessed; a sizing distribution similar to the last release assumed. Concurrent WIP: guessed; measured from the last release; a process similar to the last release assumed.

96 [Chart: forecast completion date (July–December) vs cost to develop for staff options A, B and C (Staff A: $$$$$$$$; Staff B: $$; Staff C: $), each with its own actual date, against the planned/due date]

97 Can't I just use average cycle time? Projection of cycle time to complete 25 stories in a sequential fashion, based on historical cycle time data:

Forecast (days)
Median  | 525
Mean    | 670
MC >50% | 677
MC >85% | 793
MC >95% | 851
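The gap between the median-based projection and the Monte Carlo percentiles comes from the skew of the cycle-time distribution. A sketch with synthetic Weibull cycle times (all numbers here are illustrative, not the slide's data):

```python
import random

rng = random.Random(3)
# Right-skewed synthetic cycle times (Weibull, shape 1.5)
history = [rng.weibullvariate(25, 1.5) for _ in range(100)]

# Monte Carlo: total time to finish 25 stories sequentially
totals = sorted(sum(rng.choice(history) for _ in range(25))
                for _ in range(5_000))

median_projection = 25 * sorted(history)[len(history) // 2]
mc_85 = totals[int(0.85 * len(totals))]
# The 85th-percentile Monte Carlo total sits well above 25 x median
```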

98 Step 1: generate random numbers for each uncertain input (uncertain number 1: 25 11 29 43 34 26 31 45 22 27; uncertain number 2: 31 43 65 45 8 7 34 73 54 48). Step 2: combine them (sum or another formula): 56 54 85 88 42 33 65 118 76 75. Step 3: analyze the results.

99 [Chart: forecast completion date (July–December), planned/due date and actual date; cost to develop ($) vs cost of delay ($$$)]

100 Total cycle time spans: story/feature inception 5 days; waiting in backlog 25 days; "active development" 30 days; system regression testing & staging 5 days; waiting for release window 5 days.

101 The same timeline, with the dev and test lead time marked: story/feature inception 5 days; waiting in backlog 25 days; "active development" 30 days; system regression testing & staging 5 days; waiting for release window 5 days.

102 A process to combine multiple uncertain measurements/estimates is needed: 1. historical cycle time; 2. planned resources/effort; 3. the work (backlog: Feature 1, Feature 2, Feature 3, each with design, develop and test phases); 4. historical scope creep rate (optional); 5. historical defect rate & cycle times (optional).

