Session 7: Evaluating forecasts
Demand Forecasting and Planning in Crisis, 30-31 July, Shanghai
Joseph Ogrodowczyk, Ph.D.


1 Session 7: Evaluating forecasts. Demand Forecasting and Planning in Crisis, 30-31 July, Shanghai. Joseph Ogrodowczyk, Ph.D.

2 Evaluating forecasts: Session agenda
- Background
- Measures of accuracy
- Cost of forecast error
- Activity: Produce forecast error calculations for the forecasts made on Day 1

3 Evaluating forecasts: Background
- How do we measure the accuracy of our forecasts? How do we know which forecasts were good and which need improvement?
- Error can be calculated across products within a given time period, or across time periods for a given product. The following examples are for one product over multiple time periods.
- Two topics of forecast evaluation:
  1. How accurate was the forecast?
  2. What was the cost of being wrong?

4 Evaluating forecasts: Background
Definitions for evaluation (illustrated in the sketch after this list):
- Forecast period: The time increment for which the forecast is produced (month, week, quarter)
- Forecast bucket: The time increment being forecasted (period, month, quarter)
- Forecast horizon: The time increment including all forecast buckets being forecasted (12 months, 8 quarters)
- Forecast lag: The time between when the forecast is produced and the bucket that is forecasted
- Forecast snapshot: The specific combination of period, horizon, bucket, and lag associated with a forecast
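These timing concepts can be made concrete as a small data structure. A minimal sketch in Python; the field names and example values are hypothetical, not taken from the workshop materials:

```python
from dataclasses import dataclass

@dataclass
class ForecastSnapshot:
    """Hypothetical container for the timing attributes of one forecast."""
    period: str   # when the forecast was produced, e.g. "2008-01"
    bucket: str   # the time increment being forecasted, e.g. "2008-03"
    horizon: int  # number of buckets the forecast covers, e.g. 12 months
    lag: int      # buckets between production and the forecasted bucket

# A forecast produced in January for March has a lag of two months.
snap = ForecastSnapshot(period="2008-01", bucket="2008-03", horizon=12, lag=2)
print(snap)
```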

5 Evaluating forecasts: Background
Sources of error:
- Data: Missing or omitted data, mislabeled data
- Assumptions: Seasonality is not constant, trend changes are unanticipated, experts have insufficient information
- Model: Wrong choice of model type (judgment, statistical); correct model type but misspecified model (missing variables or too many variables); outliers not accounted for
Measures of accuracy:
- Point error
- Average error
- Trend of error

6 Evaluating forecasts: Measures of accuracy
Point error:
- Error: The difference between the forecasted quantity and the actual demand quantity
- Squared error: The square of the error
- Percent error: The error relative to the actual demand quantity
  - A denominator of actuals answers the question: How well did we predict actual demand?
  - A denominator of forecast answers the question: How much were we wrong relative to what we said we would do?
- Absolute error: The absolute value of the error
- Absolute percent error: The absolute value of the error relative to the actual demand quantity
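A minimal sketch of these point-error calculations in Python; the forecast and actual values here are hypothetical, not the workshop data:

```python
# Hypothetical one-period example: forecast F and actual demand A.
forecast, actual = 95.0, 100.0

error = forecast - actual              # signed error, F - A
squared_error = error ** 2             # squared error
pct_error_actuals = error / actual     # denominator of actuals: how well did we predict demand?
pct_error_forecast = error / forecast  # denominator of forecast: how wrong relative to our plan?
absolute_error = abs(error)            # absolute error
ape = abs(error) / actual              # absolute percent error

print(f"error={error:+.1f}, APE={ape:.1%}")  # error=-5.0, APE=5.0%
```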

7 Evaluating forecasts: Measures of accuracy
Point error: Data from Session 4, naïve one-step model; one product over multiple time periods.

8 Evaluating forecasts: Measures of accuracy
Average error:
- Mean square error (MSE): Average of the squared errors
- Root mean square error (RMSE): Square root of the MSE
- Mean percent error (MPE): Average of the percent errors
- Mean absolute error (MAE): Average of the absolute errors
- Mean absolute percent error (MAPE): Average of the absolute percent errors (APE)
- Weighted mean absolute percent error (WMAPE): Weighted average of the APE
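The averaged measures follow directly from their definitions. A short sketch; the four-period series below is hypothetical, purely to make the formulas concrete:

```python
import math

F = [95.0, 102.0, 110.0, 98.0]   # hypothetical forecasts
A = [100.0, 100.0, 105.0, 90.0]  # hypothetical actual demand
n = len(F)
errors = [f - a for f, a in zip(F, A)]

mse = sum(e ** 2 for e in errors) / n                  # mean square error
rmse = math.sqrt(mse)                                  # root mean square error
mpe = sum(e / a for e, a in zip(errors, A)) / n        # mean percent error (keeps the sign)
mae = sum(abs(e) for e in errors) / n                  # mean absolute error
mape = sum(abs(e) / a for e, a in zip(errors, A)) / n  # mean absolute percent error
wmape = sum(abs(e) for e in errors) / sum(A)           # APEs weighted by actual volume

print(f"MSE={mse:.2f} RMSE={rmse:.2f} MPE={mpe:+.1%}")
print(f"MAE={mae:.2f} MAPE={mape:.1%} WMAPE={wmape:.1%}")
```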

9 Evaluating forecasts: Measures of accuracy
Average error: One product over multiple time periods.

10 Evaluating forecasts: Measures of accuracy
Average error: Weighted mean absolute percent error (WMAPE)
- Introduced as a method for overcoming inconsistencies in the MAPE
- In the MAPE, all time periods, regardless of the quantity of sales, have equal ability to affect the result: a 12% APE for a period in which 10 units were sold has no more importance than a 12% APE for a period in which 100K units were sold
- Weight each APE calculation by the respective actual quantity:
  WMAPE = Σ(APE × A) / ΣA = Σ|F - A| / ΣA

11 Evaluating forecasts: Measures of accuracy
Average error: Weighted mean absolute percent error (WMAPE)
- In Session 4, we used a naïve one-step model and forecasted January 2008 using December 2007 data
- The forecast was 88.9 units and actual demand was 88.2 units
- Absolute percent error (APE) = |F - A| / A = |88.9 - 88.2| / 88.2 ≈ 0.79%
- Multiplying 0.79% by 88.2 (the actual demand) recovers the absolute error, 0.7 units; this is January's weighted contribution to the WMAPE numerator
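The same step extends to a full WMAPE once every period is weighted. A small sketch; only the January pair (88.9 forecast, 88.2 actual) comes from the slides, and the other periods are hypothetical:

```python
# (forecast, actual) pairs; January is from the slides, the rest are made up.
pairs = [
    (88.9, 88.2),   # January 2008, naïve one-step forecast
    (92.0, 100.0),  # hypothetical February
    (85.0, 80.0),   # hypothetical March
]

# Weighting each APE by its actual volume, (|F - A| / A) * A = |F - A|,
# so the WMAPE reduces to total absolute error over total actual demand.
wmape = sum(abs(f - a) for f, a in pairs) / sum(a for _, a in pairs)

jan_ape = abs(88.9 - 88.2) / 88.2
print(f"January APE = {jan_ape:.2%}")  # ~0.79%
print(f"WMAPE = {wmape:.2%}")
```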

12 Evaluating forecasts: Measures of accuracy
Average error: Weighted mean absolute percent error (WMAPE)

13 Evaluating forecasts: Measures of accuracy
Trend of error:
- Point error calculations and average error calculations are static: they are calculated for a set time interval
- Additional information can be obtained by tracking these calculations over time: How does the error change over time? This trending is also called the forecast bias
- Statistical analysis can be performed on the trending data: mean, standard deviation, coefficient of variation

14 Evaluating forecasts: Measures of accuracy
Trend of error: Two suggested methods (sketched below)
- Track a statistic through time (e.g., a 3-month MAPE)
- Compare time intervals (e.g., Q1 against Q2)
- The example is the 2008 naïve one-step forecast
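Both tracking methods are simple to compute. A sketch; the twelve monthly APE values below stand in for the 2008 naïve one-step results and are hypothetical:

```python
# Hypothetical monthly APEs for Jan..Dec 2008 (naïve one-step stand-ins).
apes = [0.008, 0.012, 0.015, 0.011, 0.020, 0.018,
        0.025, 0.022, 0.030, 0.028, 0.033, 0.031]

# Method 1: track a statistic through time (rolling 3-month MAPE).
rolling_mape = [sum(apes[i - 2:i + 1]) / 3 for i in range(2, len(apes))]

# Method 2: compare time intervals (Q1 against Q2).
q1_mape = sum(apes[0:3]) / 3
q2_mape = sum(apes[3:6]) / 3

print(" ".join(f"{m:.1%}" for m in rolling_mape))  # a steady rise signals bias
print(f"Q1 MAPE={q1_mape:.1%}, Q2 MAPE={q2_mape:.1%}")
```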

15 Evaluating forecasts: Cost of forecast error
- Accuracy measures do not contain the costs associated with forecast error
- Two methods for incorporating costs:
  1. Calculate costs based on the percent error, differentiating between over- and under-forecasting
  2. Calculate costs based on a loss function dependent on safety stock levels, lost sales, and service levels

16 Evaluating forecasts: Cost of forecast error
Incorporating costs: Error differentiation (sketched below)
- Costs are calculated according to the mathematical sign of the percent error (+ or -)
- Costs of under-forecasting can be reflected in lost sales, loss of related goods, increased production costs, increased shipment costs, etc. The shipment and production costs come from producing and expediting additional units to meet demand
- Costs of over-forecasting can be reflected in excess inventory, increased obsolescence, increased firesale items, etc.
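A minimal sketch of sign-differentiated costing; the two cost rates are hypothetical placeholders a company would replace with its own figures:

```python
UNDER_COST_PER_UNIT = 12.0  # hypothetical: lost sales, expediting, extra production
OVER_COST_PER_UNIT = 4.0    # hypothetical: carrying, obsolescence, firesale losses

def forecast_error_cost(forecast: float, actual: float) -> float:
    """Cost of one period's error, differentiated by the sign of F - A."""
    error = forecast - actual
    if error < 0:  # under-forecast: demand exceeded the forecast
        return -error * UNDER_COST_PER_UNIT
    return error * OVER_COST_PER_UNIT  # over-forecast: surplus units

print(forecast_error_cost(90.0, 100.0))   # 10 units short  -> 120.0
print(forecast_error_cost(110.0, 100.0))  # 10 units excess -> 40.0
```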

17 Evaluating forecasts: Cost of forecast error
Incorporating costs: Loss function
- A cost of forecast error (CFE) metric can be used to quantify the loss associated with both under- and over-forecasting
- The loss function is based on the mean absolute error (MAE)
- The first part of the CFE calculates the unit requirements necessary to maintain a specified service level; this is balanced against the volume of lost sales and the associated cost of stock-outs
- Plotting the cost of error against different service levels shows which service level corresponds to the lowest cost of forecast error (see the sketch below)
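A rough sketch of this idea in the spirit of Catt (2007): safety stock is sized from the MAE for a target service level, and its carrying cost is traded off against expected stock-out losses. All parameters, and the normal-distribution shortcut converting MAE to a standard deviation, are illustrative assumptions rather than the paper's exact formulation:

```python
from statistics import NormalDist

MAE = 8.0               # hypothetical mean absolute error of the forecast
HOLDING_COST = 2.0      # hypothetical annual cost per safety-stock unit
STOCKOUT_COST = 25.0    # hypothetical cost per unit of lost sales
ANNUAL_DEMAND = 1200.0  # hypothetical annual demand

def cost_of_forecast_error(service_level: float) -> float:
    z = NormalDist().inv_cdf(service_level)  # safety factor for the service level
    sigma = 1.25 * MAE                       # MAE -> std. dev., assuming normal errors
    safety_stock = z * sigma                 # units held against forecast error
    expected_lost = (1.0 - service_level) * ANNUAL_DEMAND
    return safety_stock * HOLDING_COST + expected_lost * STOCKOUT_COST

# Scan service levels, as the slide suggests, to locate the lowest-cost point.
for sl in (0.80, 0.85, 0.90, 0.95, 0.99):
    print(f"{sl:.0%}: cost = {cost_of_forecast_error(sl):,.0f}")
```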

18 Evaluating forecasts: Cost of forecast error
Final notes:
- Cost of error helps to guide the forecast improvement process. These costs can be company specific and can be explored by working through the implications of shortages and surpluses of products. The specific mathematical calculations are beyond the scope of this workshop
- Applying costs to forecast errors will always require assumptions within the models. Writing those assumptions down explicitly is recommended: changing the assumptions will change the costs of the errors and can produce a range of estimated costs

19 Evaluating forecasts: References
- Jain, Chaman L., and Jack Malehorn. 2005. Practical Guide to Business Forecasting (2nd ed.). Flushing, New York: Graceway Publishing Inc.
- Catt, Peter Maurice. 2007. "Assessing the cost of forecast error: A practical example." Foresight (Summer): 5-10.

