
1 Copyright © 2014 McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior written consent of McGraw-Hill Education. Chapter 11 Autocorrelation

2 11-2 Learning Objectives Understand the autoregressive structure of the error term; understand methods for detecting autocorrelation; understand how to correct for autocorrelation; understand unit roots and cointegration

3 11-3 What is Autocorrelation?

4 11-4 No Autocorrelation

5 11-5 Positive Autocorrelation

6 11-6 Negative Autocorrelation

7 11-7 The Issues And Consequences Associated With Autocorrelation Problem: Autocorrelation violates time-series assumption T6, which states that the error terms must not be correlated across time periods. Consequences: Under autocorrelation, parameter estimates are unbiased, but they are not minimum variance among all unbiased estimators. Estimated standard errors are incorrect, and all measures of precision based on the estimated standard errors are also incorrect.

8 11-8 Goals of this Chapter

9 11-9 An Important Caveat before Continuing With more advanced statistical packages, many researchers include a very simple command asking their chosen statistical program to provide standard error estimates that automatically correct for autocorrelation (Newey-West standard errors). Even though correcting for autocorrelation is straightforward, it is important to first work through the more “old-school” examples below before learning how to calculate Newey-West standard errors.

10 11-10 Understand the Autoregressive Structure Of the Error Term

11 11-11 Understand the Autoregressive Structure Of the Error Term
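The formulas on these two slides are not reproduced in the transcript, but the AR(1) error structure they describe is standard: the error follows e(t) = rho*e(t-1) + u(t), with white-noise innovations u(t) and |rho| < 1. A minimal simulation sketch (the value rho = 0.7 and the sample size are illustrative assumptions, not from the slides):

```python
import numpy as np

# Simulate an AR(1) error process: e[t] = rho * e[t-1] + u[t],
# where u[t] is white noise and |rho| < 1 guarantees stationarity.
rng = np.random.default_rng(0)
rho = 0.7          # illustrative autocorrelation coefficient
n = 500
u = rng.normal(size=n)       # white-noise innovations
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + u[t]

# Under this structure the sample lag-1 autocorrelation of the
# errors should be close to rho.
r1 = np.corrcoef(e[:-1], e[1:])[0, 1]
```

Running this for several values of rho makes the later diagnostic plots easy to reproduce: large positive rho yields long runs of same-signed errors, negative rho yields sign flipping.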

12 11-12 Understand Methods For Detecting Autocorrelation Informal methods - Graphs Formal methods using statistical tests - Durbin-Watson test - Regression test

13 11-13 Informal Method Either graph: (1) the residuals against each independent variable, (2) the residuals squared over time, or (3) the residuals against the lagged residuals, and look for a pattern in the observations. If a pattern exists, that is evidence of autocorrelation.
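The three plots listed above can be produced with any plotting tool (the deck uses Excel's residual-plot option). A sketch in Python, using simulated data with AR(1) errors baked in (the data, rho = 0.8, and the file name are illustrative assumptions):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # non-interactive backend for scripting
import matplotlib.pyplot as plt

# Illustrative data: y depends on x, with AR(1) errors added.
rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 10, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal()
y = 2.0 + 0.5 * x + e

# Fit OLS (degree-1 polyfit = slope and intercept) and get residuals.
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)

# The three informal graphs described on the slide:
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].scatter(x, resid)                 # residuals vs. independent variable
axes[1].plot(resid ** 2)                  # squared residuals over time
axes[2].scatter(resid[:-1], resid[1:])    # residuals vs. lagged residuals
fig.savefig("residual_checks.png")
```

With positively autocorrelated errors, the third panel shows points clustering along an upward-sloping line, which is the pattern the next slides point out.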

14 11-14 Regression of Export Volume in England on Exchange Rate from 1930 to 2009

15 11-15 Notice how positive residuals tend to follow positive residuals and negative residuals tend to follow negative residuals.

16 11-16 This residual plot is obtained by checking the residual plot option in Excel when running a regression. As in the previous slide, notice how there is a pattern between the residuals and the independent variable.

17 11-17 The primary drawback of the informal method is that it is not clear how much of a pattern must exist before we conclude that the model suffers from autocorrelation. This leads to the need for formal tests of autocorrelation.

18 11-18 Formal Methods for Detecting Autocorrelation

19 11-19 Testing for Autocorrelation (1)Durbin-Watson Test (2)Regression Test

20 11-20 Durbin-Watson Test for AR(1)

21 11-21 Durbin-Watson Test for AR(1)

22 11-22 Durbin-Watson Test for AR(1) Potential Issues: (1) The test cannot be performed in models with lagged dependent variables. (2) The test can only be performed on models in which the suspected autocorrelation takes the form of AR(1). (3) The errors must be normally distributed. (4) The model must include an intercept. (5) There is an inconclusive region.
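The Durbin-Watson formula itself is not reproduced in the transcript, but it is standard: d = sum((e[t] - e[t-1])^2) / sum(e[t]^2), ranging from 0 to 4, with values near 2 indicating no AR(1), values toward 0 indicating positive autocorrelation, and values toward 4 indicating negative autocorrelation. A sketch (simulated residuals and rho = 0.9 are illustrative assumptions):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: sum of squared successive residual
    differences divided by the residual sum of squares.
    Near 2 -> no AR(1); toward 0 -> positive AR(1); toward 4 -> negative."""
    resid = np.asarray(resid)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(2)

# White-noise residuals: statistic should be near 2.
dw_white = durbin_watson(rng.normal(size=1000))

# Strongly positively autocorrelated residuals: statistic near 0,
# since d is approximately 2 * (1 - rho).
e = np.zeros(1000)
u = rng.normal(size=1000)
for t in range(1, 1000):
    e[t] = 0.9 * e[t - 1] + u[t]
dw_ar1 = durbin_watson(e)
```

Note the statistic alone does not settle the test: the observed d must still be compared against the tabulated dL and dU bounds, which is where the inconclusive region on the next slide comes from.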

23 11-23 Inconclusive Region of the Durbin- Watson Test

24 11-24 Durbin-Watson Test Example

25 11-25 Regression Test for AR(1)

26 11-26 Regression Test for AR(1) Why It Works: Autocorrelation of the form AR(1) exists if the current period errors are correlated with immediate prior period errors. Hence, if a regression of the current period residuals on the residuals lagged one period yields a statistically significant coefficient, we would conclude that the errors are correlated and that an AR(1) process does exist.
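The mechanics described above can be sketched directly: regress the current-period residuals on the residuals lagged one period and examine the significance of the slope. A numpy-only version that computes the t-statistic by hand (the simulated residuals with true rho = 0.7 are an illustrative assumption):

```python
import numpy as np

# Simulated residuals with a true AR(1) structure (rho = 0.7).
rng = np.random.default_rng(3)
n = 500
e = np.zeros(n)
u = rng.normal(size=n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + u[t]

# Regress e[t] on a constant and e[t-1].
X = np.column_stack([np.ones(n - 1), e[:-1]])
y = e[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Conventional OLS standard error for the lag coefficient.
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)        # error variance estimate
cov = s2 * np.linalg.inv(X.T @ X)        # OLS coefficient covariance
t_stat = beta[1] / np.sqrt(cov[1, 1])    # t-statistic on the lagged residual
```

A t-statistic this large corresponds to a p-value far below 0.05, so we would reject the null of no AR(1), exactly the conclusion drawn on the next slide for the trade volume data.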

27 11-27 Regression Test for AR(1) for Trade Volume Data Dependent Variable is Residuals The individual significance of the lagged residuals is much less than 0.05 (or 0.01, for that matter), so we reject the null hypothesis of no AR(1) and conclude that the model suffers from first-order autocorrelation.

28 11-28 Regression Test for AR(2) for Trade Volume Data Dependent Variable is Residuals The significance F for the joint significance of the lagged residuals is much less than 0.05 (or 0.01, for that matter), so we reject the null hypothesis of no AR(2) and conclude that the model suffers from second-order autocorrelation.

29 11-29 Correcting for Autocorrelation (1)Cochrane-Orcutt (2)Prais-Winsten (3)Newey-West autocorrelation and heteroskedasticity-consistent standard errors

30 11-30 Cochrane-Orcutt Correction for AR(1) Process

31 11-31 Cochrane-Orcutt Transformation for AR(1) Process

32 11-32 Cochrane-Orcutt Correction for AR(1) Process
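The Cochrane-Orcutt formulas on these slides are not reproduced in the transcript, but the standard procedure is: quasi-difference the data (y*[t] = y[t] - rho*y[t-1], and likewise for x), estimate by OLS, re-estimate rho from the residuals, and iterate; the first observation is dropped. A teaching sketch for a bivariate model (the function, simulated data, and parameter values are illustrative assumptions, not the textbook's code):

```python
import numpy as np

def cochrane_orcutt(y, x, tol=1e-6, max_iter=50):
    """Iterative Cochrane-Orcutt estimation for y = b0 + b1*x with
    AR(1) errors. Returns (b0, b1, rho). A teaching sketch only."""
    n = len(y)
    rho = 0.0
    for _ in range(max_iter):
        # Quasi-difference the data; the first observation is dropped.
        ys = y[1:] - rho * y[:-1]
        xs = x[1:] - rho * x[:-1]
        X = np.column_stack([np.ones(n - 1), xs])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        # The transformed intercept estimates b0 * (1 - rho).
        b0 = beta[0] / (1 - rho)
        b1 = beta[1]
        # Re-estimate rho from the original-model residuals.
        e = y - b0 - b1 * x
        rho_new = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return b0, b1, rho

# Simulated data with b0 = 2, b1 = 0.5 and AR(1) errors (rho = 0.7).
rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)
e = np.zeros(n)
u = rng.normal(size=n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + u[t]
y = 2.0 + 0.5 * x + e
b0, b1, rho = cochrane_orcutt(y, x)
```

The recovered rho should land near the true 0.7, and the slope near 0.5, which is the point of the correction: valid inference with essentially unchanged coefficient estimates.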

33 11-33 Cochrane-Orcutt Correction in STATA How to do it: First declare the data to be time series data using the command tsset time Then use the command prais y x1 x2, corc

34 11-34 Cochrane-Orcutt Correction in STATA

35 11-35 Prais-Winsten Correction for AR(1) Process

36 11-36 Prais-Winsten Correction in STATA How to do it: First declare the data to be time series data using the command tsset time Then use the command prais y x1 x2
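The Prais-Winsten formulas are not reproduced in the transcript, but the standard transformation differs from Cochrane-Orcutt only in keeping the first observation, rescaled by sqrt(1 - rho^2), instead of dropping it. A sketch of the transformation for a given rho (the function and example values are illustrative assumptions):

```python
import numpy as np

def prais_winsten_transform(y, x, rho):
    """Prais-Winsten transformation for a given rho: quasi-difference
    observations 2..n, and rescale observation 1 by sqrt(1 - rho**2)
    instead of dropping it (the difference from Cochrane-Orcutt)."""
    ys = np.empty_like(y, dtype=float)
    xs = np.empty_like(x, dtype=float)
    c = np.empty(len(y))                  # transformed constant column
    scale = np.sqrt(1 - rho ** 2)
    ys[0], xs[0], c[0] = scale * y[0], scale * x[0], scale
    ys[1:] = y[1:] - rho * y[:-1]
    xs[1:] = x[1:] - rho * x[:-1]
    c[1:] = 1 - rho
    return ys, xs, c

y = np.array([1.0, 2.0, 3.0])
x = np.array([0.5, 1.0, 1.5])
ys, xs, c = prais_winsten_transform(y, x, rho=0.6)
```

Retaining the rescaled first observation is why Prais-Winsten is generally preferred to Cochrane-Orcutt in small samples: no data point is thrown away.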

37 11-37 Prais-Winsten Correction in STATA

38 11-38 Newey-West Standard Errors The preferred method to correct for autocorrelation is to use Newey-West autocorrelation and heteroskedasticity-consistent standard errors. The coefficient estimates are still unbiased, so the only thing that needs to be corrected is the standard errors. In STATA, the command is newey y x1 x2 x3

39 11-39 STATA Results with Newey-West Standard Errors

40 11-40 What is a Unit Root?

41 11-41 Using the Dickey-Fuller Test to test for a Unit Root

42 11-42 STATA Results of Dickey-Fuller Test on Export Volume Because the test statistic is not more negative than the critical value, we fail to reject the null hypothesis of a unit root and conclude these data suffer from a unit root.

43 11-43 What to Do if the Data Suffer from a Unit Root?

