Predictive distributions

K. Ensor, STAT 421, Spring 2005

Predictive distributions

Consider a well-fit AR model of order p. When we make the statement "a 95% prediction interval is ...", we generally assume that:
- the model is perfectly specified, and
- the parameters are estimated without error.

How could we come up with forecast bounds that incorporate the estimation error?
- Incorporate the estimation error in our theoretical development of the predictors and their distributional properties.
- Come up with the needed predictive distribution through resampling methods.
- Simulate multiple future forecasts that take draws from the sampling distribution of our estimators (a sketch follows below).
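As a rough illustration of the third approach, here is a minimal sketch in Python, assuming a mean-zero AR(1) with known noise variance and a normal approximation to the sampling distribution of the estimated coefficient (the sample size, seed, and parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(421)

    # Simulate a mean-zero AR(1): x_t = phi * x_{t-1} + e_t, e_t ~ N(0, 1)
    phi_true, sigma2, n = 0.8, 1.0, 200
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi_true * x[t - 1] + rng.normal(scale=np.sqrt(sigma2))

    # Conditional least-squares estimate of phi; its approximate sampling
    # variance for a mean-zero AR(1) is (1 - phi^2) / n
    phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    se_phi = np.sqrt((1 - phi_hat ** 2) / n)

    # One-step-ahead predictive draws that propagate estimation error:
    # draw phi from its sampling distribution, then draw the innovation
    phi_draws = rng.normal(phi_hat, se_phi, size=5000)
    forecasts = phi_draws * x[-1] + rng.normal(scale=np.sqrt(sigma2), size=5000)
    lo, hi = np.percentile(forecasts, [2.5, 97.5])
    print(f"phi_hat = {phi_hat:.3f}, 95% predictive interval: ({lo:.2f}, {hi:.2f})")

The resulting interval is wider than the one obtained by plugging in phi_hat as if it were known, which is exactly the estimation error the slide asks us to account for.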

Resampling

- Block bootstrap: resample blocks of the time series and reconfigure the series.
- Stationary bootstrap: a block bootstrap in which the blocks are of random length.
- Model-based bootstrap: resample the residuals, rebuild the process, and then fit the model again (see the sketch below).
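A minimal sketch of the model-based bootstrap for a mean-zero AR(1), fit by conditional least squares (the helper names are ours):

    import numpy as np

    def ar1_fit(x):
        # Conditional least-squares estimate of phi for a mean-zero AR(1)
        return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

    def model_based_bootstrap(x, n_boot=1000, rng=None):
        # Resample the residuals, rebuild the process, then fit the model again
        if rng is None:
            rng = np.random.default_rng()
        phi_hat = ar1_fit(x)
        resid = x[1:] - phi_hat * x[:-1]
        resid = resid - resid.mean()          # center the residuals
        phi_boot = np.empty(n_boot)
        for b in range(n_boot):
            e = rng.choice(resid, size=len(x), replace=True)
            xb = np.zeros(len(x))
            for t in range(1, len(x)):
                xb[t] = phi_hat * xb[t - 1] + e[t]
            phi_boot[b] = ar1_fit(xb)
        return phi_hat, phi_boot

    # Usage: phi_hat, phi_boot = model_based_bootstrap(x)
    # np.std(phi_boot) then estimates the standard error of phi_hat.

The block and stationary bootstraps differ only in the resampling step: they draw contiguous blocks (of fixed or geometrically distributed length, respectively) from the observed series itself rather than from the residuals.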

Bayesian Analysis of Time Series

View our parameters (and even our model) as random variables:
- Prior distribution of the parameters
- Distribution of the series given the parameters
- Posterior distribution of the parameters

Our objective is to obtain the posterior distribution of the parameters and of the future values of the process.
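In symbols (writing \theta for the parameters, y for the observed series, and y_{n+1} for a future value), the two targets are the posterior and the posterior predictive distribution:

    p(\theta \mid y) = \frac{f(y \mid \theta)\,\pi(\theta)}{\int f(y \mid \theta)\,\pi(\theta)\,d\theta} \propto f(y \mid \theta)\,\pi(\theta),
    \qquad
    p(y_{n+1} \mid y) = \int f(y_{n+1} \mid \theta, y)\,p(\theta \mid y)\,d\theta .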

Homework for next week

- Simulate a realization from an autoregressive model: an AR(1) with parameter 0.8 and noise variance 1.
- Fit the model.
- Obtain forecasts one time unit into the future.
- Plot the predictive distribution of these forecasts.
- Develop a predictive distribution that accounts for the model estimation error (for fixed order):
  - through simulation;
  - through resampling.
- Plot the three densities on the same graph.

Bayesian Vector Autoregressive Models

- Prior distribution for the parameters: π(Θ)
- Likelihood of X given the parameters: f(X | Θ)
- We want the posterior of Θ given X: p(Θ | X) ∝ f(X | Θ) π(Θ)

The prior

Conjugate prior: a prior such that the distributional form of the prior is the same as that of the posterior.
- Conceptually a good idea
- Yields closed-form solutions for the posterior

Example: a Normal prior and a Normal likelihood yield a Normal posterior.

Normal × Normal = Normal
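The slide's equations were an image that did not survive extraction; the standard normal-normal conjugate setup the title refers to, assuming n observations with known variance \sigma^2, is:

    y_1, \dots, y_n \mid \theta \sim N(\theta, \sigma^2), \qquad \theta \sim N(\mu_0, \tau_0^2) \;\Longrightarrow\; \theta \mid y \sim N(\mu_n, \tau_n^2).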

Mean and variance of the posterior distribution
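These formulas were likewise lost in extraction; for the setup above, the standard posterior variance and mean are:

    \tau_n^2 = \left( \frac{1}{\tau_0^2} + \frac{n}{\sigma^2} \right)^{-1},
    \qquad
    \mu_n = \tau_n^2 \left( \frac{\mu_0}{\tau_0^2} + \frac{n\,\bar{y}}{\sigma^2} \right).

That is, the posterior precision is the sum of the prior and data precisions, and the posterior mean is the corresponding precision-weighted average of the prior mean \mu_0 and the sample mean \bar{y}.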

Posterior – always this easy? No.

- In general, it is difficult to find this posterior distribution in closed form.
- Use computational techniques to simulate the posterior distribution (MCMC, e.g. the Metropolis-Hastings algorithm); a minimal sketch follows below.
- Break the multivariate integration problem down into a series of conditional, hierarchical integrations that are done via Monte Carlo methods.
- Forecasts? Obtained via simulation as well (see page 439).
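A minimal random-walk Metropolis-Hastings sketch, with a one-parameter normal target chosen purely for illustration (so the MCMC answer can be checked against the conjugate result above):

    import numpy as np

    def metropolis_hastings(log_post, theta0, n_iter=10000, step=0.5, rng=None):
        # Random-walk Metropolis sampler for a scalar parameter.
        # log_post returns the log posterior density up to an additive constant.
        if rng is None:
            rng = np.random.default_rng()
        draws = np.empty(n_iter)
        theta, lp = theta0, log_post(theta0)
        for i in range(n_iter):
            prop = theta + rng.normal(scale=step)        # symmetric proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject
                theta, lp = prop, lp_prop
            draws[i] = theta
        return draws

    # Illustrative target: N(theta, sigma2/n) likelihood for the sample mean,
    # with a N(0, 1) prior on theta
    ybar, sigma2, n = 1.3, 1.0, 25
    log_post = lambda th: -0.5 * n * (ybar - th) ** 2 / sigma2 - 0.5 * th ** 2
    samples = metropolis_hastings(log_post, theta0=0.0)[2000:]   # drop burn-in
    print(samples.mean(), samples.std())

The posterior mean and standard deviation printed here should agree, up to Monte Carlo error, with the closed-form normal-normal result from the previous slides.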

Example – Policy data

[Figure: forecasts for the policy data, showing the Bayesian VAR forecast and the vector AR forecast.]

Set the prior standard deviation for the AR parameter to a small value

[Figure: the resulting forecasts under this tighter prior.]

Homework for next week

Work through Example 10.1 using the Finmetrics BVAR routines. Prepare a demonstration of your results that includes:
- a description of the prior distributions assumed,
- a graphical display of the posterior distributions, and
- appropriate forecasts for the series.

Caution

- The posterior depends on the choice of the prior; the results should therefore be considered in light of the knowledge imposed by the prior distribution.
- There is a class of “noninformative” priors that impose little to no prior information on the parameters.
- This is a commercial for STAT 423.