11-2 Learning Objectives
- Understand the autoregressive structure of the error term
- Understand methods for detecting autocorrelation
- Understand how to correct for autocorrelation
- Understand unit roots and cointegration
11-3 What is Autocorrelation?
11-4 No Autocorrelation
11-5 Positive Autocorrelation
11-6 Negative Autocorrelation
11-7 The Issues And Consequences Associated With Autocorrelation Problem: Autocorrelation violates time-series assumption T6, which states that the error terms must not be correlated across time periods. Consequences: Under autocorrelation, parameter estimates remain unbiased, but they are no longer minimum variance among all unbiased estimators, and the estimated standard errors are incorrect, so all measures of precision based on those standard errors (t-statistics, p-values, confidence intervals) are also incorrect.
11-8 Goals of this Chapter
11-9 An Important Caveat before Continuing With more advanced statistical packages, many researchers include a very simple command asking their chosen statistical program to provide standard error estimates that automatically correct for autocorrelation (Newey-West standard errors). Even though correcting for autocorrelation is straightforward, it is important to first work through the more “old-school” examples below before learning how to calculate Newey-West standard errors.
11-10 Understand the Autoregressive Structure Of the Error Term
11-11 Understand the Autoregressive Structure Of the Error Term
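The AR(1) error structure discussed on these slides is e_t = ρ·e_{t−1} + u_t, where u_t is white noise. A minimal Python sketch of simulating such errors (illustrative only; the chapter itself works in Excel and STATA, and the parameter values here are arbitrary):

```python
import random

def simulate_ar1_errors(rho, n, seed=0):
    """Simulate an AR(1) error process: e_t = rho * e_{t-1} + u_t,
    where u_t is mean-zero white noise."""
    rng = random.Random(seed)
    errors = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        errors.append(rho * errors[-1] + rng.gauss(0, 1))
    return errors

# With rho near 1 (positive autocorrelation), positive errors tend to
# follow positive errors; with rho near -1 the signs tend to alternate;
# rho = 0 gives no autocorrelation.
e = simulate_ar1_errors(rho=0.9, n=200)
```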
11-12 Understand Methods For Detecting Autocorrelation Informal methods - Graphs Formal methods using statistical tests - Durbin-Watson test - Regression test
11-13 Informal Method Graph either: (1) the residuals against each independent variable… (2) the residuals squared over time (3) the residuals against lagged residuals and look for a pattern in the observations. If a pattern exists, that is evidence of autocorrelation.
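A numeric companion to graph (3): the sample correlation between the residuals and their one-period lag. A sketch in Python (an illustration only, not part of the chapter's Excel/STATA workflow):

```python
def lag1_correlation(resid):
    """Numeric counterpart to the residual-vs-lagged-residual plot:
    the sample correlation between e_t and e_{t-1}."""
    x = resid[1:]   # e_t
    y = resid[:-1]  # e_{t-1}
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A clearly positive value suggests positive autocorrelation,
# a clearly negative value suggests negative autocorrelation.
```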
11-14 Regression of Export Volume in England on Exchange Rate from 1930 to 2009
11-15 Notice how positive residuals tend to follow positive residuals and negative residuals tend to follow negative residuals.
11-16 This residual plot is obtained by checking the residual plot option in Excel when running a regression. As in the previous slide, notice how there is a pattern between the residuals and the independent variable.
11-17 The primary drawback of the informal method is that it is not clear how much of a pattern needs to exist to lead us to the conclusion that the model suffers from autocorrelation. This leads us to the need for formal tests of autocorrelation.
11-18 Formal Methods for Detecting Autocorrelation
11-19 Testing for Autocorrelation (1) Durbin-Watson Test (2) Regression Test
11-20 Durbin-Watson Test for AR(1)
11-21 Durbin-Watson Test for AR(1)
11-22 Durbin-Watson Test for AR(1) Potential Issues: (1) The test cannot be performed in models with lagged dependent variables. (2) The test can only be performed on models in which the suspected autocorrelation takes the form of AR(1). (3) The errors must be normally distributed. (4) The model must include an intercept. (5) There is an inconclusive region.
11-23 Inconclusive Region of the Durbin-Watson Test
11-24 Durbin-Watson Test Example
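The Durbin-Watson statistic itself is simple to compute from the residuals. A Python sketch (illustrative; the chapter runs the test via table look-ups of the d_L and d_U bounds):

```python
def durbin_watson(resid):
    """Durbin-Watson statistic:
        d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.
    Roughly d = 2(1 - rho): a value near 0 suggests positive AR(1)
    autocorrelation, near 4 suggests negative AR(1), and near 2
    suggests no first-order autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e * e for e in resid)
    return num / den
```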
11-25 Regression Test for AR(1)
11-26 Regression Test for AR(1) Why It Works: Autocorrelation of the form AR(1) exists if the current period errors are correlated with immediate prior period errors. Hence, if a regression of the current period residuals on the residuals lagged one period yields a statistically significant coefficient, we would conclude that the errors are correlated and that an AR(1) process does exist.
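The logic above can be sketched as a small routine: regress the residuals on their one-period lag and inspect the t-statistic on the slope. This Python version is a simplified illustration (no intercept, unlike a full regression-package implementation):

```python
def ar1_regression_test(resid):
    """Regress e_t on e_{t-1} (no intercept, a common simplification)
    and return (rho_hat, t_stat). A t-statistic large in absolute
    value is evidence of an AR(1) process; compare |t| to roughly
    1.96 for a 5% significance level."""
    y = resid[1:]
    x = resid[:-1]
    n = len(y)
    rho = sum(a * b for a, b in zip(x, y)) / sum(b * b for b in x)
    sse = sum((a - rho * b) ** 2 for a, b in zip(y, x))
    se = (sse / (n - 1) / sum(b * b for b in x)) ** 0.5
    return rho, rho / se
```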
11-27 Regression Test for AR(1) for Trade Volume Data Dependent Variable is Residuals The individual significance of the lagged residuals is much less than 0.05 (or 0.01 for that matter), so we reject the null hypothesis of no AR(1) and conclude the model suffers from first-order autocorrelation.
11-28 Regression Test for AR(2) for Trade Volume Data Dependent Variable is Residuals The Significance F for the joint test of the lagged residuals is much less than 0.05 (or 0.01 for that matter), so we reject the null hypothesis of no AR(2) and conclude the model suffers from second-order autocorrelation.
11-29 Correcting for Autocorrelation (1) Cochrane-Orcutt (2) Prais-Winsten (3) Newey-West autocorrelation and heteroskedasticity consistent (HAC) standard errors
11-30 Cochrane-Orcutt Correction for AR(1) Process
11-31 Cochrane-Orcutt Transformation for AR(1) Process
11-32 Cochrane-Orcutt Correction for AR(1) Process
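The Cochrane-Orcutt quasi-differencing step can be sketched in Python (illustrative only; variable names are hypothetical, and in practice ρ is estimated iteratively from the residuals rather than supplied):

```python
def cochrane_orcutt_transform(y, x, rho):
    """Quasi-difference the data:
        y*_t = y_t - rho * y_{t-1},  x*_t = x_t - rho * x_{t-1}.
    Under AR(1) errors with parameter rho, OLS on the starred data
    has serially uncorrelated errors. Cochrane-Orcutt drops the
    first observation, which has no lagged value."""
    y_star = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    x_star = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    return y_star, x_star
```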
11-33 Cochrane-Orcutt Correction in STATA How to do it: First declare the data to be time series data using the command tsset time Then use the command prais y x1 x2, corc
11-34 Cochrane-Orcutt Correction in STATA
11-35 Prais-Winsten Correction for AR(1) Process
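The Prais-Winsten version of the same transformation keeps the first observation instead of dropping it. A Python sketch (illustrative; names are hypothetical):

```python
def prais_winsten_transform(y, x, rho):
    """Like Cochrane-Orcutt, but the first observation is retained,
    scaled by sqrt(1 - rho^2) so its error has the same variance as
    the quasi-differenced errors. Retaining that observation is the
    only difference from Cochrane-Orcutt."""
    w = (1 - rho ** 2) ** 0.5
    y_star = [w * y[0]] + [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    x_star = [w * x[0]] + [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    return y_star, x_star
```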
11-36 Prais-Winsten Correction in STATA How to do it: First declare the data to be time series data using the command tsset time Then use the command prais y x1 x2
11-37 Prais-Winsten Correction in STATA
11-38 Newey-West Standard Errors The preferred method to correct for autocorrelation is to use Newey-West autocorrelation and heteroskedasticity consistent (HAC) standard errors. The coefficient estimates are still unbiased, so the only thing that needs to be corrected is the standard errors. In STATA, the command is newey y x1 x2 x3
11-39 STATA Results with Newey-West Standard Errors
11-40 What is a Unit Root?
11-41 Using the Dickey-Fuller Test to test for a Unit Root
11-42 STATA Results of Dickey-Fuller Test on Export Volume Because the test statistic is less negative than the critical value (smaller in absolute value), we fail to reject the null hypothesis of a unit root and conclude these data suffer from a unit root.
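The Dickey-Fuller regression behind this test can be sketched in a simplified form: regress the first difference Δy_t on y_{t−1} and examine the t-statistic on the slope. This Python version (no constant term, illustrative only) returns that statistic; the critical values it must be compared against come from Dickey-Fuller tables, not the usual t table:

```python
def dickey_fuller_stat(y):
    """Simplified Dickey-Fuller regression (no constant):
        dy_t = gamma * y_{t-1} + u_t,  dy_t = y_t - y_{t-1}.
    Returns the t-statistic on gamma. Under the unit-root null,
    gamma = 0; we fail to reject when the statistic is less negative
    than the Dickey-Fuller critical value."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    x = y[:-1]
    sxx = sum(v * v for v in x)
    gamma = sum(u * v for u, v in zip(dy, x)) / sxx
    sse = sum((u - gamma * v) ** 2 for u, v in zip(dy, x))
    n = len(dy)
    se = (sse / (n - 1) / sxx) ** 0.5
    return gamma / se
```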
11-43 What to Do if the Data Suffer from a Unit Root?
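Assuming the chapter follows the standard treatment, the usual remedy is to re-estimate the model in first differences (or to test for cointegration when several unit-root series are involved). A sketch of the differencing step:

```python
def first_difference(series):
    """Transform a variable to first differences,
    dy_t = y_t - y_{t-1}, and re-estimate the model on the
    differenced data. A random walk (one unit root) becomes
    stationary after differencing once."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]
```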