1 National Institute of Economic and Social Research “Consensus estimates of forecast uncertainty: working out how little we know” James Mitchell NIESR June 2005

2 ESRC Social Science Week
Thanks to the ESRC for financial support and to Stephen Hall (co-author). Our ESRC “output”… three papers:
1. “Density forecast combination” http://www.niesr.ac.uk/pdf/ssw200605a.pdf
2. “Optimal density forecast combination” http://www.niesr.ac.uk/pdf/ssw200605b.pdf
3. “Evaluating, comparing and combining density forecasts using the KLIC with an application to the Bank of England and NIESR ‘fan’ charts of inflation” http://www.niesr.ac.uk/pdf/ssw200605c.pdf

3 Forecast Uncertainty
- How do we judge if forecasts are any good?
- How should a forecaster best acknowledge how little they really know? Surely they are not always surprised when their forecast proves “wrong”?
- How should we judge if one forecast is better than another? Why is it misleading to say one forecast is “better” than another simply because it turns out closer to the subsequent outturn?
- Can we do better if we take some kind of average across competing forecasts of the same event?

4 Forecasting: state of the art… dismal science
- The importance of forecasts: forward-looking policy
- Point forecasts are better seen as the central points of ranges of uncertainty
- It is not a question of one point forecast proving right and another proving wrong, despite what politicians may say
- Users may not be surprised when inflation is higher than forecast; indeed, they may not be very surprised if it is much higher

5 Density Forecasts
- Increased attention is now given to providing measures of the uncertainty associated with forecasts
- Measures of uncertainty surrounding a point forecast can enhance its usefulness, affect the policy response, and are essential with non-quadratic loss functions
- So-called “density” forecasts are being used increasingly, since they give commentators a full impression of forecast uncertainty
- They provide an estimate of the probability distribution of a variable’s possible future values
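To make the idea concrete, here is a minimal Python sketch (all numbers hypothetical) of a normal density forecast: the point forecast is the centre of the distribution, and event probabilities fall out directly.

```python
from scipy import stats

# Hypothetical density forecast for inflation: point forecast 2.0%,
# forecast-error standard deviation 0.8 percentage points
forecast = stats.norm(loc=2.0, scale=0.8)

# A density forecast answers probabilistic questions a point forecast cannot,
# e.g. the probability that inflation exceeds 3%
print(forecast.sf(3.0))  # ~0.106
```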

6 Production of density forecasts
- Subjective or model-based
- Ex post, we can evaluate the quality of the assumed density forecast, and likewise of a combined density forecast
- The “fan” chart: the Bank of England uses an asymmetric density forecast based on a two-piece normal distribution, distinguishing upside from downside risk (see the sketch below)
- NIESR uses a normal density with variance estimated from its historical forecast errors: how far back should they look?
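A minimal sketch of the two-piece normal behind a fan chart: two half-normals with different spreads joined at the mode, rescaled so the density is continuous and integrates to one. The parameter values here are illustrative, not the Bank’s.

```python
import numpy as np

def two_piece_normal_pdf(y, mode, sigma_lo, sigma_hi):
    """Two-piece normal: N(mode, sigma_lo) below the mode, N(mode, sigma_hi)
    above it; sigma_hi > sigma_lo gives more upside risk (a right skew)."""
    y = np.asarray(y, dtype=float)
    const = 2.0 / (np.sqrt(2.0 * np.pi) * (sigma_lo + sigma_hi))
    sigma = np.where(y < mode, sigma_lo, sigma_hi)
    return const * np.exp(-0.5 * ((y - mode) / sigma) ** 2)

# Illustrative fan-chart ingredients: mode 2.0%, more upside than downside risk
grid = np.linspace(-1.0, 6.0, 201)
density = two_piece_normal_pdf(grid, mode=2.0, sigma_lo=0.6, sigma_hi=1.0)
```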

7 Bank of England “fan” chart for CPI Inflation: Inflation Report May 2005

8 Evaluation of Density Forecasts
- Evaluate density forecasts statistically using the “probability integral transform” (pit), analogous to the evaluation of point forecasts using the RMSE
- The pits $z_{it}$ for the density forecast $g_{it}$ of $y_t$ (say, inflation) are $z_{it} = \int_{-\infty}^{y_t} g_{it}(u)\,du$
- The $z_{it}$ are i.i.d. uniform (or, via a further CDF transform, normal) when the density forecast is correct
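A sketch of a pit-based check, assuming for simplicity that the density forecasts are normal; the outturn series and forecast parameters below are simulated, purely for illustration.

```python
import numpy as np
from scipy import stats

def pits(outturns, means, sds):
    """Probability integral transforms z_t = G_t(y_t) for normal
    density forecasts N(mean_t, sd_t^2)."""
    return stats.norm.cdf(outturns, loc=means, scale=sds)

# Simulated forecast record: 40 periods of outturns
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=40)
z = pits(y, means=np.full(40, 2.0), sds=np.full(40, 0.8))

# Under a correct density forecast, z is i.i.d. U(0,1); a Kolmogorov-Smirnov
# test checks uniformity (independence needs a separate test)
print(stats.kstest(z, "uniform"))
```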

9 Consensus forecasts
- It is widely appreciated that combination forecasts normally outperform any single forecast
- There are debates about why this happens: all forecasts are wrong, but in different ways

10 Consensus estimates of forecast uncertainty
A natural question to ask is: would a combined density forecast also work better? This raises a number of issues:
1. How should we combine density forecasts?
2. How should we evaluate the combined density?
3. How should we test individual densities against each other?

11 Combining density forecasts
The early approaches come from the OR (operations research) literature. Consider $N$ forecasts made by $N$ experts ($i = 1, \ldots, N$) of a variable $y_t$. If their (continuous) density forecasts are $g_{it}$, then the linear opinion pool is
$$p(y_t) = \sum_{i=1}^{N} w_i \, g_{it}(y_t), \qquad w_i \ge 0, \quad \sum_{i=1}^{N} w_i = 1$$
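In code, the pool is just a weighted mixture of the experts’ densities; a sketch with two hypothetical normal experts:

```python
import numpy as np
from scipy import stats

def opinion_pool_pdf(y, experts, weights):
    """Linear opinion pool: p(y) = sum_i w_i g_i(y), with non-negative
    weights summing to one. `experts` are frozen scipy distributions."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wi * e.pdf(y) for wi, e in zip(w, experts))

# Two hypothetical experts with different means and variances
experts = [stats.norm(2.0, 0.8), stats.norm(2.5, 1.2)]
grid = np.linspace(-2.0, 7.0, 301)
pooled = opinion_pool_pdf(grid, experts, weights=[0.5, 0.5])  # equal weights
```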

12 Combined density forecasts
How do we determine the weights $w_i$?
– Equal weights
– Optimal combination: mimic the optimal combination of point forecasts
The combined density can have characteristics distinct from those of the individual forecasters; e.g. if all the densities are normal, but with different means and variances, then the combined density is a mixture of normals. But what if the true density is normal?
Other routes: indirect combination, moment by moment; Bayesian and copula-based combination. One data-based route is sketched below.
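One common route to data-based weights, sketched here under simplifying assumptions, is to maximize the average log score of the pool over a training sample; a softmax reparameterization keeps the weights on the simplex. The experts and data are hypothetical, and this is only one of the options listed above, not necessarily the papers’ estimator.

```python
import numpy as np
from scipy import stats, optimize

def neg_log_score(theta, y, expert_pdfs):
    """Average negative log score of the pool; weights are softmax(theta),
    so they stay non-negative and sum to one during optimization."""
    w = np.exp(theta) / np.exp(theta).sum()
    dens = sum(wi * pdf(y) for wi, pdf in zip(w, expert_pdfs))
    return -np.mean(np.log(dens))

# Hypothetical training sample and two expert densities
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=200)
pdfs = [stats.norm(2.0, 0.8).pdf, stats.norm(2.5, 1.2).pdf]

res = optimize.minimize(neg_log_score, x0=np.zeros(2), args=(y, pdfs))
w_hat = np.exp(res.x) / np.exp(res.x).sum()
print(w_hat)  # estimated combination weights
```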

13 The Bank of England and NIESR density forecasts of inflation

14 Does density forecast combination work?
- In-sample and out-of-sample experiments
- Combined density forecasts can, but need not, help
- Combining the Bank and NIESR density forecasts, we find a weight of zero on NIESR
- Combining the Bank and time-series forecasts, we find a weight of 0.73 on the time-series forecast and an improvement in accuracy

15 The tool-kit available to those willing to admit they may get it wrong
- The Kullback-Leibler Information Criterion (KLIC) offers a unified statistical tool to evaluate, compare and combine density forecasts
- The KLIC distance between the true density $f(y)$ and the forecast density $g(y)$ is $\mathrm{KLIC} = \int f(y) \ln\left[ f(y)/g(y) \right] dy$
- Existing density forecast evaluation tests based on the pits implicitly test KLIC = 0, but without having to know $f(\cdot)$
- The KLIC can be used to test which density forecast is best: an extension of the Diebold-Mariano test (sketched below)
- It is the basis for Bayesian Model Averaging
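A stylized sketch of such a comparison: the expected log-score differential between two density forecasts equals the difference of their KLIC distances from the truth, so a Diebold-Mariano-type t-test on the sample differential compares them without knowing $f$. This sketch assumes i.i.d. differentials; serial correlation would call for a HAC variance.

```python
import numpy as np
from scipy import stats

def klic_diff_test(y, pdf_g1, pdf_g2):
    """Log-score differential d_t = ln g1(y_t) - ln g2(y_t); its mean
    estimates KLIC(f, g2) - KLIC(f, g1), so a positive significant
    t-statistic favours g1."""
    d = np.log(pdf_g1(y)) - np.log(pdf_g2(y))
    t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    p_val = 2 * stats.norm.sf(abs(t_stat))
    return t_stat, p_val

# Hypothetical outturns and two competing normal density forecasts
rng = np.random.default_rng(2)
y = rng.normal(2.0, 1.0, size=200)
print(klic_diff_test(y, stats.norm(2.0, 1.0).pdf, stats.norm(2.5, 1.2).pdf))
```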

16 Conclusions
- Producers of forecasts should be encouraged to indicate how uncertain they are; this is an admission of strength, not weakness, and facilitates better policy-making
- Combining density forecasts appears promising
- Users require a tool-kit to evaluate, compare and combine density forecasts; this will enable us to work out how little we know and improve the reliability of these forecasts

