
BABS 502 Judgment in Forecasting and Forecast Accuracy Lecture 2 February 28, 2008.


1 BABS 502 Judgment in Forecasting and Forecast Accuracy Lecture 2 February 28, 2008

2 (c) Martin L. Puterman
Where should we use judgment in forecasting?
When there’s no data
Determining how much data to use
Choosing a forecasting model
When dealing with structural changes
Adjusting outliers
Adjusting quantitative forecasts when:
–the consequences are great
–they don’t make sense
–additional information is available that is not included in the model
–the penalties differ between over- and under-forecasting

3 Why are judgmental forecasts so poor?
Forecasting is hard
Forecasters have biases
Recent events are given more weight
Reliance on anecdotal information
Ignoring data
Justification of sunk costs
Difficult to process large amounts of data
Difficult to determine which data is useful and relevant
Financial markets are efficient

4 Judgmental Forecasting Critique
Managers’ forecasts tend to be overly optimistic. Studies show they are outperformed by statistical models.
Case study (Makridakis et al., p. 493): Managers produced different forecasts from the same data
–when told it was their company’s product versus a competitor’s product, or
–when told it was a new, mature, or old product.
Most neutral parties forecasting from the same data used extrapolation (a trend model) and overestimated the accuracy of the forecast.

5 Empirical evidence suggests that “expert” forecasts do not perform well
Financial forecasts: Analysts and mutual funds consistently underperform markets or fail to predict market turns. Mutual fund advisor incentive systems encourage erratic behavior.
Sales force forecasts: Influenced by incentives. Salespeople are rewarded for exceeding targets, so they set low targets or forecasts; sales managers set high targets for motivation.
In general such forecasts rely on anecdotal information.
So use inexpensive experts!

6 So why use judgment? In some cases you have to.
–Little data available
–Forecasting far into the future
–Rapid technological or marketplace change
–New products
–Relationships between predictors change
–Adjustment of outliers
–Consequences are great

7 Consensus Forecasting
Informal discussion and argument (committee?)
Weighted average of all forecasters
Delphi method:
–individuals form forecasts and deliver them to an oracle
–the oracle summarizes the forecasts and returns the average, high, low, etc. to the individuals, along with reasons
–individuals revise their forecasts
–the process repeats

8 How to improve judgmental forecasting
Review historical data
Practice; calibrate performance and give feedback
Provide ranges:
–high (95th percentile)
–midpoint (median)
–low (5th percentile)
Provide scenarios with probabilities
Clearly separate forecasting from decision making

9 Accuracy Measures
These calculations assume that we are forecasting k periods ahead and can assess forecast quality using a holdout sample.
Y(t+k) = value in period t+k; F_t(k) = k-period-ahead forecast made in period t; n = forecast horizon (number of holdout periods)
Period t+k forecast error = Actual - Forecast = Y(t+k) - F_t(k)
Mean squared error (MSE):
–Square the individual forecast errors over the next n periods
–Sum the squared errors and divide by n
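The error and MSE calculations above can be sketched in a few lines of Python. The series values and forecasts here are made-up illustrative numbers, not data from the lecture:

```python
# Illustrative actuals Y(t+k) and forecasts F_t(k) over n = 5 holdout periods
actuals = [112.0, 118.0, 132.0, 129.0, 121.0]
forecasts = [110.0, 120.0, 128.0, 131.0, 119.0]

# Forecast error = Actual - Forecast
errors = [y - f for y, f in zip(actuals, forecasts)]
n = len(errors)

# Mean squared error: square each error, sum, divide by n
mse = sum(e ** 2 for e in errors) / n
print(errors)  # [2.0, -2.0, 4.0, -2.0, 2.0]
print(mse)     # (4 + 4 + 16 + 4 + 4) / 5 = 6.4
```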

10 Forecasting Accuracy Measures
Mean absolute error (MAE):
–Take absolute values of the forecast errors
–Sum the absolute values and divide by n
Mean absolute percent error (MAPE):
–Take absolute values of the forecast percent errors
–Sum the absolute percent errors and divide by n
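MAE and MAPE follow the same pattern as MSE. A minimal sketch, using the same made-up actuals and forecasts as above (not lecture data):

```python
actuals = [112.0, 118.0, 132.0, 129.0, 121.0]
forecasts = [110.0, 120.0, 128.0, 131.0, 119.0]
n = len(actuals)

# Mean absolute error: average of |Actual - Forecast|
abs_errors = [abs(y - f) for y, f in zip(actuals, forecasts)]
mae = sum(abs_errors) / n

# Mean absolute percent error: average of |error| / actual, as a percent
mape = 100 * sum(abs(y - f) / y for y, f in zip(actuals, forecasts)) / n

print(mae)   # (2 + 2 + 4 + 2 + 2) / 5 = 2.4
print(mape)  # roughly 1.9 percent for these numbers
```

Unlike MSE and MAE, MAPE is scale-free, which is why it is often used to compare accuracy across series measured in different units; note it breaks down when any actual value is zero.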

11 Assessing Out of Sample Forecasts
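The slide content here is only a title, but the idea from slide 9 can be sketched: hold out the last n observations, fit on the earlier data, and score the forecasts with a measure such as MSE. The series and the naive "last value" forecaster below are illustrative assumptions, not from the lecture:

```python
# Made-up series; hold out the last 3 points as the out-of-sample period
series = [100.0, 104.0, 109.0, 111.0, 115.0, 118.0, 122.0, 125.0]
holdout = 3
train, test = series[:-holdout], series[-holdout:]

# Naive model: every k-period-ahead forecast is the last training value
forecast = [train[-1]] * holdout

# Out-of-sample MSE over the holdout periods
mse = sum((y - f) ** 2 for y, f in zip(test, forecast)) / holdout
print(mse)
```

A real model (exponential smoothing, a trend model, etc.) would replace the naive forecaster; the holdout bookkeeping stays the same.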


