National Institute of Economic and Social Research. “Consensus estimates of forecast uncertainty: working out how little we know.” James Mitchell, NIESR, June 2005.


ESRC Social Science Week. Thanks to the ESRC for financial support, and to Stephen Hall (co-author). Our ESRC “output”, three papers:
1. “Density forecast combination”
2. “Optimal density forecast combination”
3. “Evaluating, comparing and combining density forecasts using the KLIC with an application to the Bank of England and NIESR ‘fan’ charts of inflation”

Forecast Uncertainty How do we judge if forecasts are any good? How should a forecaster best acknowledge how little they really know? Surely they are not always surprised when their forecast proves “wrong”? How should we judge if one forecast is better than another? Why is it misleading to say one forecast is “better” than another simply because it turns out closer to the subsequent outturn? Can we do better if we take some kind of average across competing forecasts of the same event?

Forecasting: state of the art…dismal science The importance of forecasts: forward-looking policy. Point forecasts are better seen as the central points of ranges of uncertainty. It is not a question of one point forecast proving right and another proving wrong, despite what politicians may say. Users may not be surprised when inflation is higher than forecast; indeed, they may not be very surprised if it is much higher.

Density Forecasts Increased attention is now given to providing measures of uncertainty associated with forecasts. Measures of uncertainty surrounding a point forecast can enhance its usefulness, affect the policy response, and are essential with non-quadratic loss functions. So-called “density” forecasts are being used increasingly, since they provide commentators with a full impression of forecast uncertainty: they provide an estimate of the probability distribution of a variable’s possible future values.

Production of density forecasts Subjective or model-based. Ex post, we can evaluate the quality of the assumed (or combined) density forecast. The “fan” chart: the Bank of England uses an asymmetric density forecast based on a two-piece normal distribution, capturing upside vs. downside risk. NIESR uses a normal density with variance estimated from its historical forecast errors: how far back should they look?
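The two-piece normal behind the Bank’s fan chart can be sketched in a few lines. This is a minimal illustration, not the Bank’s implementation; the function name and parameter values are my own. The density is N(mode, sigma1) below the mode and N(mode, sigma2) above it, with a common normalising constant so the pieces join continuously and integrate to one:

```python
import math

def two_piece_normal_pdf(y, mode, sigma1, sigma2):
    """Two-piece normal density: N(mode, sigma1^2) below the mode,
    N(mode, sigma2^2) above it, rescaled to integrate to 1."""
    # Shared normalising constant makes the two halves join continuously.
    c = 2.0 / (math.sqrt(2.0 * math.pi) * (sigma1 + sigma2))
    s = sigma1 if y <= mode else sigma2
    return c * math.exp(-((y - mode) ** 2) / (2.0 * s ** 2))

# Upside risk: the upper piece is wider (sigma2 > sigma1), so the
# distribution is right-skewed and the mean exceeds the mode.
print(two_piece_normal_pdf(2.0, 2.0, 0.5, 1.0))
```

When sigma1 = sigma2 this collapses to the ordinary normal, so the symmetric fan chart is a special case.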

Bank of England “fan” chart for CPI Inflation: Inflation Report May 2005

Evaluation of Density Forecasts Evaluate density forecasts statistically using the “probability integral transform” (pit); analogous to the evaluation of point forecasts using RMSE. The pits z_it for the density forecast g_it of y_t (say, inflation) are z_it = ∫_{-∞}^{y_t} g_it(u) du. The z_it are i.i.d. uniform U(0,1) (or, via the inverse normal CDF transform, i.i.d. standard normal) when the density forecast is correct.

Consensus forecasts It is widely appreciated that combination forecasts normally outperform any single forecast. There are debates about why this happens: all forecasts are wrong, but in different ways.

Consensus estimates of forecast uncertainty A natural question to ask is: would a combined density forecast also work better? This raises a number of issues:
1. How should we combine density forecasts?
2. How should we evaluate the combined density?
3. How should we test individual densities against each other?

Combining density forecasts The early OR approaches. Consider N forecasts made by N experts (i = 1, …, N) of a variable y_t. If their (continuous) density forecasts are g_it, then the linear opinion pool is p(y_t) = Σ_{i=1}^{N} w_i g_it(y_t), where the weights satisfy w_i ≥ 0 and Σ_{i=1}^{N} w_i = 1.

Combined density forecasts How do we determine the weights w_i? – Equal weights. – Optimal combination: mimic the optimal combination of point forecasts. The combined density can have distinct characteristics from those of the individual forecasters; e.g. if all the densities are normal, but with different means and variances, then the combined density is a mixture of normals. But what if the true density is normal? Alternatives include indirect combination, moment by moment, and Bayesian and copula-based combination.
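A minimal sketch of the linear opinion pool, under assumed normal component densities (the expert means, spreads and weights below are hypothetical). It also shows the slide’s point that a pool of normals need not look normal:

```python
import math

def normal_pdf(y, mu, sigma):
    return math.exp(-((y - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def opinion_pool(y, forecasts, weights):
    """Linear opinion pool: weighted average of the experts' densities g_it."""
    return sum(w * normal_pdf(y, mu, s) for w, (mu, s) in zip(weights, forecasts))

# Two hypothetical experts with the same spread but different means:
# the pool is a normal mixture, and can be bimodal even though every
# component density is normal.
forecasts = [(1.0, 0.5), (3.0, 0.5)]
weights = [0.5, 0.5]
print(opinion_pool(2.0, forecasts, weights))
```

With weights (1, 0) the pool reproduces the first expert exactly; “optimal” weights would instead be chosen to maximise some out-of-sample score of the pooled density.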

The Bank of England and NIESR density forecasts of inflation

Does density forecast combination work? In-sample and out-of-sample experiments. Combined density forecasts can, but need not, help. Combining the Bank and NIESR density forecasts, we find a weight of zero on NIESR. Combining the Bank and time-series forecasts, we find a weight of 0.73 on the time-series forecast and an improvement in accuracy.

The tool-kit available to those willing to admit they may get it wrong The Kullback-Leibler Information Criterion (KLIC) offers a unified statistical tool to evaluate, compare and combine density forecasts. The KLIC distance between the true density f(y) and the forecast density g(y) is KLIC = ∫ f(y) ln[ f(y)/g(y) ] dy = E_f[ ln f(y) − ln g(y) ]. Existing density forecast evaluation tests based on the pits implicitly test KLIC = 0, but without having to know f(·). The KLIC can also be used to test which density forecast is best, via an extension of the Diebold-Mariano test, and it is the basis for Bayesian Model Averaging.
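For two normal densities the KLIC has a closed form, which makes the “distance” concrete. The sketch below is illustrative only (function name and inputs are my own); it assumes the true density is f = N(mu_f, s_f²) and the forecast is g = N(mu_g, s_g²):

```python
import math

def klic_normal(mu_f, s_f, mu_g, s_g):
    """KLIC between true f = N(mu_f, s_f^2) and forecast g = N(mu_g, s_g^2):
    E_f[ln f(y) - ln g(y)], in closed form for the normal case."""
    return (math.log(s_g / s_f)
            + (s_f ** 2 + (mu_f - mu_g) ** 2) / (2.0 * s_g ** 2)
            - 0.5)

# A correct density forecast has KLIC = 0; any mis-specification of the
# mean or the variance pushes the KLIC above zero.
print(klic_normal(2.0, 1.0, 2.0, 1.0))   # → 0.0
print(klic_normal(2.0, 1.0, 2.5, 1.0))   # mean off by 0.5 → 0.125
```

Comparing two rival forecasts then amounts to comparing their KLICs against the same truth, which is the intuition behind the Diebold-Mariano-style test mentioned above.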

Conclusions Producers of forecasts should be encouraged to indicate how uncertain they are: this is an admission of strength, not weakness, and it facilitates better policy-making. Combining density forecasts appears promising. Users require a tool-kit to evaluate, compare and combine density forecasts. This will enable us to work out how little we know and improve the reliability of these forecasts.