REGRESSION DIAGNOSTIC III: AUTOCORRELATION

1 REGRESSION DIAGNOSTIC III: AUTOCORRELATION
CHAPTER 6
Damodar Gujarati, Econometrics by Example, second edition

2 AUTOCORRELATION
One of the assumptions of the classical linear regression model (CLRM) is that the covariance between u_i, the error term for observation i, and u_j, the error term for observation j, is zero.
Reasons for autocorrelation include the possibly strong correlation between the shock in period t and the shock in period t+1.
Autocorrelation is more common in time series data.

3 CONSEQUENCES
If autocorrelation exists, several consequences ensue:
The OLS estimators are still unbiased and consistent.
They are still normally distributed in large samples.
They are no longer efficient, meaning that they are no longer BLUE.
In most cases the standard errors are underestimated.
Thus, the hypothesis-testing procedure becomes suspect, since the estimated standard errors may not be reliable, even asymptotically (i.e., in large samples).

4 DETECTION OF AUTOCORRELATION
Graphical method: plot the values of the residuals, e_t, chronologically. If a discernible pattern exists, autocorrelation is likely a problem.
Durbin-Watson test
Breusch-Godfrey (BG) test
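A minimal sketch of the graphical method in Python with statsmodels and matplotlib (not from the slides; the data here are simulated with AR(1) errors purely so a pattern is visible):

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Simulate a regression with AR(1) errors (illustration only, not from the text)
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + rng.normal()   # u_t = rho*u_{t-1} + v_t with rho = 0.8
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()

# Plot the residuals e_t chronologically; long runs above or below zero suggest positive autocorrelation
plt.plot(res.resid, marker="o")
plt.axhline(0, color="grey", linewidth=1)
plt.xlabel("t")
plt.ylabel("e_t")
plt.show()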

5 DURBIN-WATSON (d) TEST
The Durbin-Watson d statistic is defined as:
d = [Σ_{t=2 to n} (e_t - e_{t-1})²] / [Σ_{t=1 to n} e_t²]
where the e_t are the OLS residuals.
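As a quick illustration (a sketch, not from the text), d can be computed directly from the residuals or with statsmodels' built-in function; e is a placeholder array of residuals in time order:

import numpy as np
from statsmodels.stats.stattools import durbin_watson

e = np.array([0.5, 0.7, 0.4, -0.2, -0.6, -0.3, 0.1, 0.4])   # placeholder residuals in time order

# d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2
d_manual = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
d_builtin = durbin_watson(e)
print(d_manual, d_builtin)    # the two values agree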

6 DURBIN-WATSON (d) TEST ASSUMPTIONS
Assumptions are:
1. The regression model includes an intercept term.
2. The regressors are fixed in repeated sampling.
3. The error term follows the first-order autoregressive, AR(1), scheme: u_t = ρ u_{t-1} + v_t, where ρ (rho) is the coefficient of autocorrelation, a value between -1 and 1, and v_t is a white-noise error term.
4. The error term is normally distributed.
5. The regressors do not include the lagged value(s) of the dependent variable, Y_t.

7 DURBIN-WATSON (d) TEST (CONT.)
Two critical values of the d statistic, dL and dU, called the lower and upper limits, are established. The decision rules are as follows:
1. If d < dL, there probably is evidence of positive autocorrelation.
2. If d > dU, there probably is no evidence of positive autocorrelation.
3. If dL < d < dU, no definite conclusion about positive autocorrelation can be drawn.
4. If dU < d < 4 - dU, there probably is no evidence of positive or negative autocorrelation.
5. If 4 - dU < d < 4 - dL, no definite conclusion about negative autocorrelation can be drawn.
6. If 4 - dL < d < 4, there probably is evidence of negative autocorrelation.
The d value always lies between 0 and 4. The closer it is to zero, the greater the evidence of positive autocorrelation; the closer it is to 4, the greater the evidence of negative autocorrelation. If d is about 2, there is no evidence of positive or negative first-order autocorrelation.
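A small helper (hypothetical, not part of the text) that encodes these six decision regions; dL and dU must be looked up in the Durbin-Watson tables for the given sample size and number of regressors:

def dw_decision(d, dL, dU):
    """Map a Durbin-Watson d statistic (0 < d < 4) to the six decision regions."""
    if d < dL:
        return "evidence of positive autocorrelation"
    if d <= dU:
        return "inconclusive (positive side)"
    if d < 4 - dU:
        return "no evidence of positive or negative autocorrelation"
    if d <= 4 - dL:
        return "inconclusive (negative side)"
    return "evidence of negative autocorrelation"

# dL and dU below are illustrative placeholders, not values taken from the tables
print(dw_decision(0.85, dL=1.37, dU=1.50))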

8 BREUSCH-GODFREY (BG) TEST
This test allows for:
(1) Lagged values of the dependent variable to be included as regressors
(2) Higher-order autoregressive schemes, such as AR(2), AR(3), etc.
(3) Moving-average terms of the error term, such as u_{t-1}, u_{t-2}, etc.
The error term in the main equation follows the AR(p) autoregressive structure:
u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + ... + ρ_p u_{t-p} + v_t
where v_t is a white-noise error term.
The null hypothesis of no serial correlation is:
H0: ρ_1 = ρ_2 = ... = ρ_p = 0

9 BREUSCH-GODFREY (BG) TEST (CONT.)
The BG test involves the following steps:
Regress e_t, the residuals from our main regression, on the regressors in the model and the p autoregressive terms given in the equation on the previous slide, and obtain R² from this auxiliary regression.
If the sample size is large, Breusch and Godfrey have shown that:
(n - p) R² ~ χ²_p
That is, in large samples, (n - p) times R² follows the chi-square distribution with p degrees of freedom. Rejection of the null hypothesis implies evidence of autocorrelation.
As an alternative, we can use the F value obtained from the auxiliary regression. This F value has (p, n - k - p) degrees of freedom in the numerator and denominator, respectively, where k represents the number of parameters in the auxiliary regression (including the intercept term).
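A sketch of the BG test using statsmodels' acorr_breusch_godfrey (the data are simulated here only for illustration; nlags is set to the order p of the AR(p) alternative):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Simulated example data with AR(1) errors (illustration only)
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()

# LM statistic based on the auxiliary-regression R², its p-value, and the F form of the test
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)
print(lm_stat, lm_pval, f_stat, f_pval)   # small p-values reject H0 of no serial correlation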

10 DURBIN ALTERNATIVE TEST OF AUTOCORRELATION
An alternative test that takes into account lagged dependent variables.
It provides a formal test of the null hypothesis of serially uncorrelated disturbances against the alternative of autocorrelation of order p.
Post-estimation command in Stata: estat durbinalt, lags(p)

11 REMEDIAL MEASURES
First-Difference Transformation
If autocorrelation is of the AR(1) type, we have: u_t = ρ u_{t-1} + v_t. Assume ρ = 1 and run the first-difference model (taking the first difference of the dependent variable and all regressors).
Generalized Transformation
Estimate the value of ρ through a regression of the residual on the lagged residual, and use that value to run the transformed regression.
Newey-West Method
Generates HAC (heteroscedasticity and autocorrelation consistent) standard errors.
Model Evaluation
Re-examine the model specification, since apparent autocorrelation may signal a mis-specified model.
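Two of these remedies sketched in Python/statsmodels (simulated data, illustrative only): estimating ρ from a regression of the residual on its lag, and refitting OLS with Newey-West HAC standard errors.

import numpy as np
import statsmodels.api as sm

# Simulated example data with AR(1) errors (illustration only)
rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()

# Generalized transformation: estimate rho by regressing e_t on e_{t-1} (no intercept)
e = ols.resid
rho_hat = sm.OLS(e[1:], e[:-1]).fit().params[0]

# Quasi-difference the data with rho_hat and re-run OLS
# (the first-difference model is the special case rho = 1)
y_star = y[1:] - rho_hat * y[:-1]
X_star = X[1:] - rho_hat * X[:-1]
transformed = sm.OLS(y_star, X_star).fit()

# Newey-West: keep the OLS coefficients but use HAC standard errors
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(ols.bse, transformed.bse, hac.bse, sep="\n")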

