Dynamic Models, Autocorrelation and Forecasting ECON 6002 Econometrics Memorial University of Newfoundland Adapted from Vera Tabakova’s notes.

 9.1 Introduction
 9.2 Lags in the Error Term: Autocorrelation
 9.3 Estimating an AR(1) Error Model
 9.4 Testing for Autocorrelation
 9.5 An Introduction to Forecasting: Autoregressive Models
 9.6 Finite Distributed Lags
 9.7 Autoregressive Distributed Lag Models

Figure 9.1

 The model so far assumes that the observations are not correlated with one another.
 This is believable if one has drawn a random sample, but less likely if one has drawn observations sequentially in time.
 Time series observations, which are drawn at regular intervals, usually embody a structure in which time is an important component.

 If we cannot completely model this structure in the regression function itself, then the remainder spills over into the unobserved component of the statistical model (its error), and this causes the errors to be correlated with one another.

Three ways to view the dynamics:

Figure 9.2(a) Time Series of a Stationary Variable (assume stationarity, for now)

Figure 9.2(b) Time Series of a Nonstationary Variable that is ‘Slow Turning’ or ‘Wandering’

Figure 9.2(c) Time Series of a Nonstationary Variable that ‘Trends’

9.2.1 Area Response Model for Sugar Cane

The model is ln(A_t) = β1 + β2 ln(P_t) + e_t, where A_t is the area planted and P_t is the price of sugar cane. Assume also that the errors follow an AR(1) process, e_t = ρ e_{t−1} + v_t with |ρ| < 1, where the v_t are uncorrelated errors with mean zero and constant variance σ_v².

Then E(e_t) = 0 (OK), var(e_t) = σ_e² = σ_v²/(1 − ρ²) (OK, constant), but cov(e_t, e_{t−s}) = ρ^s σ_e² for s > 0. Not zero!!!

ρ is also known as the autocorrelation coefficient.
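
The constant variance and the nonzero covariances follow from stationarity in a few lines; a sketch of the algebra:

\begin{aligned}
\operatorname{var}(e_t) &= \rho^2\operatorname{var}(e_{t-1}) + \sigma_v^2
  = \rho^2\sigma_e^2 + \sigma_v^2
  \quad\Longrightarrow\quad
  \sigma_e^2 = \frac{\sigma_v^2}{1-\rho^2},\\
\operatorname{cov}(e_t, e_{t-s}) &= \operatorname{cov}(\rho e_{t-1} + v_t,\; e_{t-s})
  = \rho\operatorname{cov}(e_{t-1}, e_{t-s})
  = \cdots = \rho^s\sigma_e^2 .
\end{aligned}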

Figure 9.3 Least Squares Residuals Plotted Against Time

Because the errors have constant variance and zero expectation (for both arguments of the covariance), the correlation formula simplifies: corr(e_t, e_{t−s}) = cov(e_t, e_{t−s})/var(e_t) = ρ^s.

The existence of AR(1) errors implies:
 The least squares estimator is still a linear and unbiased estimator, but it is no longer best. There is another estimator with a smaller variance.
 The standard errors usually computed for the least squares estimator are incorrect. Confidence intervals and hypothesis tests that use these standard errors may be misleading.

 As with heteroskedastic errors, you can salvage OLS when your data are autocorrelated.
 You can use an estimator of standard errors that is robust to both heteroskedasticity and autocorrelation, proposed by Newey and West.
 This estimator is sometimes called HAC, which stands for heteroskedasticity and autocorrelation consistent.

 HAC is not as automatic as the heteroskedasticity-consistent (HC) estimator.
 With autocorrelation you have to specify how far apart in time the autocorrelation is likely to remain significant.
 The autocorrelated errors over the chosen time window are averaged in the computation of the HAC standard errors; you have to specify how many periods to average over and how much weight to assign each residual in that average.
 The weighting scheme is called a kernel, and the number of errors to average is called the bandwidth.

 Usually, you choose a method of averaging (Bartlett kernel or Parzen kernel) and a bandwidth (nw1, nw2 or some integer). Often your software defaults to the Bartlett kernel and a bandwidth computed from the sample size, N.
 Trade-off: larger bandwidths reduce bias (good) as well as precision (bad). Smaller bandwidths exclude more relevant autocorrelations (and hence have more bias), but use more observations to increase precision (smaller variance).
 Choose a bandwidth large enough to contain the largest autocorrelations. The choice will ultimately depend on the frequency of observation and the length of time it takes for your system to adjust to shocks.
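
A minimal Stata sketch of HAC estimation, assuming the sugar cane variables la = ln(AREA) and lp = ln(PRICE), a time index t, and an illustrative bandwidth of 3:

* declare the time dimension of the data
tsset t
* OLS with conventional standard errors
regress la lp
* same point estimates, Newey-West (HAC) standard errors with bandwidth 3
newey la lp, lag(3)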

Sugar cane example. The estimated equation is reported with both sets of standard errors (conventional and HAC), and a 95% confidence interval for β2 is computed from each.

Substituting e_t = ρ e_{t−1} + v_t into y_t = β1 + β2 x_t + e_t and eliminating e_{t−1} gives y_t = β1(1 − ρ) + ρ y_{t−1} + β2 x_t − ρβ2 x_{t−1} + v_t (9.24). This transformed model is nonlinear in the parameters, but its errors are uncorrelated over time.

It can be shown that nonlinear least squares estimation of (9.24) is equivalent to using an iterative generalized least squares estimator called the Cochrane-Orcutt procedure. Details are provided in Appendix 9A.
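
A short Stata sketch (prais with the corc option implements the iterative Cochrane-Orcutt procedure; variable names follow the sugar cane example):

* iterative Cochrane-Orcutt estimation under AR(1) errors
prais la lp, corc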

 The nonlinear least squares estimator only requires that the errors be stable (not necessarily stationary).
 Other methods commonly used make stronger demands on the data, namely that the errors be covariance stationary.
 Furthermore, the nonlinear least squares estimator gives you an unconditional estimate of the autocorrelation parameter, ρ, and yields a simple t-test of the hypothesis of no serial correlation.
 Monte Carlo studies show that it also performs well in small samples.

 But nonlinear least squares requires more computational power than linear estimation, though this is not much of a constraint these days.
 Nonlinear least squares (and other nonlinear estimators) use numerical methods rather than analytical ones to find the minimum of your sum of squared errors objective function. The routines that do this are iterative.

We should then test the above restriction: in the unrestricted linear version of (9.24), the coefficient on x_{t−1} should equal −ρβ2, minus the product of the coefficients on y_{t−1} and x_t. A sketch of such a test follows.
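
A hedged Stata sketch of that restriction test (the unrestricted regression and the testnl call are our illustration, again with the sugar cane variable names):

* unrestricted linear version of the transformed model
regress la L.la lp L.lp
* H0: coeff. on L.lp = -(coeff. on L.la)*(coeff. on lp)
testnl _b[L.lp] = -_b[L.la]*_b[lp]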

9.4.1 Residual Correlogram

9.4.1 Residual Correlogram (more practically…)
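
In Stata a residual correlogram can be produced along these lines (ehat is our name for the saved residuals; tsset data assumed):

* save the least squares residuals
regress la lp
predict ehat, residuals
* autocorrelations and Q-statistics, then a plot with confidence bands
corrgram ehat
ac ehat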

Figure 9.4 Correlogram for Least Squares Residuals from Sugar Cane Example

For this nonlinear model, then, the residuals should be uncorrelated

Figure 9.5 Correlogram for Nonlinear Least Squares Residuals from Sugar Cane Example

The other way to determine whether or not your residuals are autocorrelated is to use an LM (Lagrange multiplier) test. For autocorrelation, this test is based on an auxiliary regression in which lagged OLS residuals are added to the original regression equation. If the coefficient on the lagged residual is significant, then you conclude that the errors are autocorrelated.

We can derive the auxiliary regression for the LM test: regress the least squares residuals on the original regressors plus the lagged residual, ê_t = β1 + β2 x_t + ρ ê_{t−1} + v̂_t.

The residuals are centered around zero, so the power of this test comes from the lagged error term, which is what we care about…

This is the Breusch-Godfrey LM test for autocorrelation; the statistic is distributed chi-squared, with degrees of freedom equal to the number of lags tested. In Stata:

regress la lp
estat bgodfrey
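
The T × R² form can also be computed by hand from the auxiliary regression (a sketch continuing from the correlogram block above; it simply drops the first observation, so it can differ slightly from estat bgodfrey):

* auxiliary regression: residuals on regressors and one lagged residual
regress ehat lp L.ehat
* LM statistic = (observations used) x R-squared, compare with chi2(1)
display "LM = " e(N)*e(r2)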

Adding lagged values of y can serve to eliminate the error autocorrelation. In general, p should be chosen large enough that v_t is white noise.
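
For instance, the inflation example below uses an AR(3); a Stata sketch (the series name inf and a prior tsset are assumptions):

* AR(3): regress inflation on its first three lags
regress inf L(1/3).inf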

Figure 9.6 Correlogram for Least Squares Residuals from AR(3) Model for Inflation (helps decide how many lags to include in the model)

This model is just a generalization of the ones previously discussed. In this model you include lags of the dependent variable (the autoregressive part) and the contemporaneous and lagged values of independent variables (the distributed-lag part) as regressors. The acronym is ARDL(p, q), where p is the maximum autoregressive lag and q is the maximum distributed lag.
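
An illustrative ARDL(2,1) in Stata, with generic names y and x and lag orders chosen only for the example:

* ARDL(2,1): two lags of y, plus current and one lagged x
regress y L(1/2).y x L.x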

Figure 9.7 Correlogram for Least Squares Residuals from Finite Distributed Lag Model

Figure 9.8 Correlogram for Least Squares Residuals from Autoregressive Distributed Lag Model

Figure 9.9 Distributed Lag Weights for Autoregressive Distributed Lag Model

 autocorrelation
 autoregressive distributed lag models
 autoregressive error
 autoregressive model
 correlogram
 delay multiplier
 distributed lag weight
 dynamic models
 finite distributed lag
 forecast error
 forecasting
 HAC standard errors
 impact multiplier
 infinite distributed lag
 interim multiplier
 lag length
 lagged dependent variable
 LM test
 nonlinear least squares
 sample autocorrelation function
 standard error of forecast error
 total multiplier
 T × R² form of LM test

The Durbin-Watson bounds test: reject H0: ρ = 0 if d < d_L; do not reject H0 if d > d_U; the test is inconclusive if d_L ≤ d ≤ d_U.
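
In Stata the statistic itself is reported after OLS on tsset data (the bounds d_L and d_U must still be looked up in tables):

* Durbin-Watson statistic for the sugar cane regression
regress la lp
estat dwatson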

Figure 9A.3: Exponential Smoothing Forecasts for two alternative values of α
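
Exponential smoothing builds each forecast as ŷ_{t+1} = α y_t + (1 − α) ŷ_t. A Stata sketch for two illustrative values of α (the series name y is assumed):

* simple exponential smoothing with a small and a large weight
tssmooth exponential ysm3 = y, parms(0.3)
tssmooth exponential ysm8 = y, parms(0.8)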