
1
Christopher Dougherty EC220 – Introduction to Econometrics (chapter 12). Slideshow: Testing for Autocorrelation. Original citation: Dougherty, C. (2012) EC220 – Introduction to econometrics (chapter 12). [Teaching Resource] © 2012 The Author. This version available in LSE Learning Resources Online: May 2012. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License, which allows the user to remix, tweak, and build upon the work, even for commercial purposes, as long as the user credits the author and licenses their new creations under identical terms.

2
Simple autoregression of the residuals 1 TESTS FOR AUTOCORRELATION We will initially confine the discussion of tests for autocorrelation to its most common form, the AR(1) process. If the disturbance term follows the AR(1) process, it is reasonable to hypothesize that, as an approximation, the residuals will conform to a similar process.

3
Simple autoregression of the residuals 2 TESTS FOR AUTOCORRELATION After all, provided that the conditions for the consistency of the OLS estimators are satisfied, as the sample size becomes large, the regression parameters will approach their true values, the location of the regression line will converge on the true relationship, and the residuals will coincide with the values of the disturbance term.

4
Simple autoregression of the residuals 3 TESTS FOR AUTOCORRELATION Hence a regression of e_t on e_{t–1} is sufficient, at least in large samples. Of course, there is the issue that, in this regression, e_{t–1} is a lagged dependent variable, but that does not matter in large samples.

5
4 TESTS FOR AUTOCORRELATION This is illustrated with the simulation shown in the figure. The true model is as shown, with u_t being generated as an AR(1) process with ρ = 0.7. Simple autoregression of the residuals

6
5 TESTS FOR AUTOCORRELATION Simple autoregression of the residuals The values of the parameters in the model for Y_t make no difference to the distributions of the estimator of ρ.

7
6 TESTS FOR AUTOCORRELATION Simple autoregression of the residuals As can be seen, when e_t is regressed on e_{t–1}, the distribution of the estimator of ρ is left skewed and heavily biased downwards for T = 25. [Table of the mean of the distribution by sample size T: values not preserved.]

8
7 TESTS FOR AUTOCORRELATION Simple autoregression of the residuals However, as the sample size increases, the downwards bias diminishes, and it is clear that the estimator is converging on 0.7 as the sample becomes large. Inference in finite samples will be approximate, given the autoregressive nature of the regression.

9
8 TESTS FOR AUTOCORRELATION The simple estimator of the autocorrelation coefficient depends on Assumption C.7 part (2) being satisfied when the original model (the model for Y t ) is fitted. Generally, one might expect this not to be the case. Breusch–Godfrey test

10
9 TESTS FOR AUTOCORRELATION If the original model contains a lagged dependent variable as a regressor, or violates Assumption C.7 part (2) in any other way, the estimates of the parameters will be inconsistent if the disturbance term is subject to autocorrelation. Breusch–Godfrey test

11
10 TESTS FOR AUTOCORRELATION As a consequence, a simple regression of e_t on e_{t–1} will produce an inconsistent estimate of ρ. The solution is to include all of the explanatory variables in the original model in the residuals autoregression. Breusch–Godfrey test

12
11 TESTS FOR AUTOCORRELATION If the original model is the first equation where, say, one of the X variables is Y t–1, then the residuals regression would be the second equation. Breusch–Godfrey test

13
12 TESTS FOR AUTOCORRELATION The idea is that, by including the X variables, one is controlling for the effects of any endogeneity on the residuals. Breusch–Godfrey test

14
13 TESTS FOR AUTOCORRELATION The underlying theory is complex and relates to maximum-likelihood estimation, as does the test statistic. The test is known as the Breusch–Godfrey test. Breusch–Godfrey test

15
14 TESTS FOR AUTOCORRELATION Several asymptotically equivalent versions of the test have been proposed. The most popular involves the computation of the Lagrange multiplier statistic nR² when the residuals regression is fitted, n being the actual number of observations in that regression. Breusch–Godfrey test Test statistic: nR², distributed as χ²(1) when testing for first-order autocorrelation

16
15 TESTS FOR AUTOCORRELATION Asymptotically, under the null hypothesis of no autocorrelation, nR² is distributed as a chi-squared statistic with one degree of freedom. Breusch–Godfrey test Test statistic: nR², distributed as χ²(1) when testing for first-order autocorrelation

17
16 TESTS FOR AUTOCORRELATION A simple t test on the coefficient of e_{t–1} has also been proposed, again with asymptotic validity. Breusch–Godfrey test Alternatively, simple t test on the coefficient of e_{t–1}

18
17 TESTS FOR AUTOCORRELATION The procedure can be extended to test for higher order autocorrelation. If AR(q) autocorrelation is suspected, the residuals regression includes q lagged residuals. Breusch–Godfrey test

19
18 TESTS FOR AUTOCORRELATION For the Lagrange multiplier version of the test, the test statistic remains nR² (with n smaller than before, the inclusion of the additional lagged residuals leading to a further loss of initial observations). Breusch–Godfrey test Test statistic: nR², distributed as χ²(q)

20
19 TESTS FOR AUTOCORRELATION Under the null hypothesis of no autocorrelation, nR² has a chi-squared distribution with q degrees of freedom. Breusch–Godfrey test Test statistic: nR², distributed as χ²(q)

21
20 TESTS FOR AUTOCORRELATION The t test version becomes an F test comparing RSS for the residuals regression with RSS for the same specification without the lagged residual terms. Again, the test is valid only asymptotically. Breusch–Godfrey test Alternatively, F test on the lagged residuals: H0: ρ1 = … = ρq = 0, H1: not H0
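The F test version compares the two residual sums of squares directly. A minimal helper (the function name and the example numbers are mine; the degrees of freedom are those of the unrestricted residuals regression):

```python
def f_stat(rss_restricted, rss_unrestricted, q, df_unrestricted):
    """F statistic for dropping the q lagged residuals from the
    residuals regression: ((RSS_r - RSS_u)/q) / (RSS_u/df_u)."""
    return ((rss_restricted - rss_unrestricted) / q) / (
        rss_unrestricted / df_unrestricted)

# Hypothetical RSS values, q = 2 lagged residuals, 38 degrees of freedom
F = f_stat(10.0, 4.0, 2, 38)
print(F)  # 28.5
```

Under the null of no autocorrelation this is compared, asymptotically, with the F(q, df) distribution.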

22
21 TESTS FOR AUTOCORRELATION The Lagrange multiplier version of the test has been shown to be asymptotically valid for the case of MA(q) moving-average autocorrelation. Breusch–Godfrey test Test statistic: nR², distributed as χ²(q), valid also for MA(q) autocorrelation

23
22 TESTS FOR AUTOCORRELATION The first major test to be developed and popularized for the detection of autocorrelation was the Durbin–Watson test for AR(1) autocorrelation, based on the Durbin–Watson d statistic calculated from the residuals as d = Σ_{t=2..n} (e_t – e_{t–1})² / Σ_{t=1..n} e_t². Durbin–Watson test
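The d statistic is easy to compute directly from the residuals. The sketch below checks a hand-rolled version against statsmodels' durbin_watson; the example residuals are arbitrary numbers of my own (they alternate in sign, so d comes out above 2):

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(e):
    """Durbin-Watson d: sum of squared first differences of the
    residuals divided by the sum of squared residuals."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Arbitrary alternating residuals (illustrative only)
e = np.array([0.5, -0.3, 0.8, -0.6, 0.2, -0.1, 0.4])
print(dw(e), durbin_watson(e))  # the two agree
```

Alternating residuals mimic negative autocorrelation, so d exceeds 2, consistent with the discussion on the following slides.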

24
23 It can be shown that in large samples d tends to 2 – 2ρ, where ρ is the parameter in the AR(1) relationship u_t = ρu_{t–1} + ε_t. TESTS FOR AUTOCORRELATION Durbin–Watson test In large samples: no autocorrelation, d → 2; severe positive autocorrelation, d → 0; severe negative autocorrelation, d → 4.

25
24 If there is no autocorrelation, ρ is 0 and d should be distributed randomly around 2. TESTS FOR AUTOCORRELATION Durbin–Watson test

26
25 If there is severe positive autocorrelation, ρ will be near 1 and d will be near 0. TESTS FOR AUTOCORRELATION Durbin–Watson test

27
26 Likewise, if there is severe negative autocorrelation, ρ will be near –1 and d will be near 4. TESTS FOR AUTOCORRELATION Durbin–Watson test

28
27 Thus d behaves as illustrated graphically above. TESTS FOR AUTOCORRELATION Durbin–Watson test [Figure: d scale from 0 (positive autocorrelation) through 2 (no autocorrelation) to 4 (negative autocorrelation).]

29
28 To perform the Durbin–Watson test, we define critical values of d. The null hypothesis is H0: ρ = 0 (no autocorrelation). If d lies between these critical values, we do not reject the null hypothesis. TESTS FOR AUTOCORRELATION Durbin–Watson test

30
29 The critical values, at any significance level, depend on the number of observations in the sample and the number of explanatory variables. TESTS FOR AUTOCORRELATION Durbin–Watson test

31
30 Unfortunately, they also depend on the actual data for the explanatory variables in the sample, and thus vary from sample to sample. TESTS FOR AUTOCORRELATION Durbin–Watson test

32
31 However, Durbin and Watson determined upper and lower bounds, d_U and d_L, for the critical values, and these are presented in standard tables. TESTS FOR AUTOCORRELATION Durbin–Watson test

33
32 TESTS FOR AUTOCORRELATION Durbin–Watson test If d is less than d_L, it must also be less than the critical value of d for positive autocorrelation, and so we would reject the null hypothesis and conclude that there is positive autocorrelation.

34
33 TESTS FOR AUTOCORRELATION Durbin–Watson test If d is above d_U, it must also be above the critical value of d, and so we would not reject the null hypothesis. (Of course, if it were above 2, we should consider testing for negative autocorrelation instead.)

35
34 TESTS FOR AUTOCORRELATION Durbin–Watson test If d lies between d_L and d_U, we cannot tell whether it is above or below the critical value, and so the test is indeterminate.

36
35 TESTS FOR AUTOCORRELATION Durbin–Watson test Here are d_L and d_U for 45 observations and two explanatory variables, at the 5% significance level (n = 45, k = 3, 5% level): d_L = 1.43, d_U = 1.62.

37
36 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 5% level) There are similar bounds for the critical value in the case of negative autocorrelation. They are not given in the standard tables because negative autocorrelation is uncommon, but it is easy to calculate them because they are located symmetrically to the right of 2, at 4 – d_U and 4 – d_L.

38
37 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 5% level) So if d < 1.43, we reject the null hypothesis and conclude that there is positive autocorrelation.

39
38 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 5% level) If 1.43 < d < 1.62, the test is indeterminate and we do not come to any conclusion.

40
39 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 5% level) If 1.62 < d < 2.38, we do not reject the null hypothesis of no autocorrelation.

41
40 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 5% level) If 2.38 < d < 2.57, we do not come to any conclusion.

42
41 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 5% level) If d > 2.57, we conclude that there is significant negative autocorrelation.
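The decision rules on the last few slides can be collected into one small function (a sketch of mine; the bounds for negative autocorrelation use the symmetry about 2 noted earlier):

```python
def dw_decision(d, dL, dU):
    """Classify a Durbin-Watson d statistic given tabulated bounds.

    The bounds for negative autocorrelation are located symmetrically
    about 2, at 4 - dU and 4 - dL.
    """
    if d < dL:
        return "reject H0: positive autocorrelation"
    if d < dU:
        return "indeterminate"
    if d <= 4 - dU:
        return "do not reject H0"
    if d <= 4 - dL:
        return "indeterminate"
    return "reject H0: negative autocorrelation"

# Bounds from the slides: n = 45, k = 3, 5% level
print(dw_decision(1.2, 1.43, 1.62))   # positive autocorrelation
print(dw_decision(2.0, 1.43, 1.62))   # do not reject
print(dw_decision(2.5, 1.43, 1.62))   # indeterminate
```

With d_L = 1.43 and d_U = 1.62, the function reproduces the five zones on the slides: rejection below 1.43, indeterminacy up to 1.62, non-rejection up to 2.38, indeterminacy up to 2.57, and rejection for negative autocorrelation beyond that.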

43
42 TESTS FOR AUTOCORRELATION Durbin–Watson test Here are the bounds for the critical values for the 1% test, again with 45 observations and two explanatory variables (n = 45, k = 3, 1% level; d_L = 1.24).

44
43 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 1% level) The Durbin–Watson test is valid only when all the explanatory variables are deterministic. This is in practice a serious limitation, since interactions and dynamics in a system of equations usually cause Assumption C.7 part (2) to be violated.

45
44 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 1% level) In particular, if the lagged dependent variable is used as a regressor, the statistic is biased towards 2 and will therefore tend to under-reject the null hypothesis. The test is also restricted to testing for AR(1) autocorrelation.

46
45 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 1% level) Despite these shortcomings, it remains a popular test, and some major applications produce the d statistic automatically as part of the standard regression output.

47
46 TESTS FOR AUTOCORRELATION Durbin–Watson test (n = 45, k = 3, 1% level) It does have the appeal of the test statistic being part of standard regression output. Further, it is appropriate for finite samples, subject to the zone of indeterminacy and the deterministic-regressor requirement.

48
47 Durbin proposed two tests for the case where the use of the lagged dependent variable as a regressor makes the original Durbin–Watson test inapplicable. One was a precursor of the Breusch–Godfrey test. Durbin's h test TESTS FOR AUTOCORRELATION

49
48 The other is the Durbin h test, appropriate for the detection of AR(1) autocorrelation. Durbin's h test TESTS FOR AUTOCORRELATION

50
49 The Durbin h statistic is defined as h = ρ̂ √(n / (1 – n·s_b²)), where ρ̂ is an estimate of ρ in the AR(1) process, s_b² is an estimate of the variance of the coefficient of the lagged dependent variable, and n is the number of observations in the regression. Durbin's h test TESTS FOR AUTOCORRELATION

51
50 There are various ways in which one might estimate ρ but, since this test is valid only for large samples, it does not matter which is used. The most convenient is to take advantage of the fact that d tends to 2 – 2ρ in large samples. The estimator is then ρ̂ = 1 – 0.5d. Durbin's h test TESTS FOR AUTOCORRELATION

52
51 The estimate of the variance of the coefficient of the lagged dependent variable is obtained by squaring its standard error. Durbin's h test TESTS FOR AUTOCORRELATION

53
52 Thus h can be calculated from the usual regression results. In large samples, under the null hypothesis of no autocorrelation, h is distributed as a normal variable with zero mean and unit variance. Durbin's h test TESTS FOR AUTOCORRELATION
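Putting the pieces together, h can be computed from the d statistic, the standard error of the coefficient of the lagged dependent variable, and n. A sketch (the function name and the example numbers are mine):

```python
import numpy as np

def durbin_h(d, se_lagged_dep, n):
    """Durbin's h statistic.

    d: Durbin-Watson statistic from the original regression
    se_lagged_dep: standard error of the coefficient of Y_{t-1}
    n: number of observations in the regression
    Returns None when n * s_b^2 >= 1, in which case the square root
    would be of a negative number and h is undefined.
    """
    rho_hat = 1 - 0.5 * d          # large-sample estimate of rho
    v = se_lagged_dep ** 2         # estimated variance of the coefficient
    if n * v >= 1:
        return None
    return rho_hat * np.sqrt(n / (1 - n * v))

# Hypothetical regression results: d = 1.5, s.e. = 0.08, n = 45
h = durbin_h(d=1.5, se_lagged_dep=0.08, n=45)
print(h)
```

Under the null of no autocorrelation, h is compared with the standard normal distribution; the None branch corresponds to the failure case described on the next slide.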

54
53 An occasional problem with this test is that the h statistic cannot be computed if n·s_b² is greater than 1, which can happen if the sample size is not very large. Durbin's h test TESTS FOR AUTOCORRELATION

55
54 An even worse problem occurs when n·s_b² is near to, but less than, 1. In such a situation the h statistic can be enormous without there being any problem of autocorrelation. Durbin's h test TESTS FOR AUTOCORRELATION

56
55 The output shown in the table gives the result of a logarithmic regression of expenditure on food on disposable personal income and the relative price of food. [Regression output: dependent variable LGFOOD; regressors C, LGDPI, LGPRFOOD; 45 observations; numerical values not preserved.] TESTS FOR AUTOCORRELATION

57
56 The plot of the residuals is shown. All the tests indicate highly significant autocorrelation. TESTS FOR AUTOCORRELATION [Figure: residuals, static logarithmic regression for FOOD.]

58
57 ELGFOOD in the regression shown is the residual from the LGFOOD regression. A simple regression of ELGFOOD on ELGFOOD(–1) yields a coefficient of 0.79 with standard error 0.11. [Regression output: ELGFOOD on ELGFOOD(-1); 44 observations after adjusting endpoints; numerical values not preserved.] TESTS FOR AUTOCORRELATION

59
58 Technical note for EViews users: EViews places the residuals from the most recent regression in a pseudo-variable called resid. resid cannot be used directly, so the residuals were saved as ELGFOOD using the genr command: genr ELGFOOD = resid. TESTS FOR AUTOCORRELATION

60
59 Adding an intercept, LGDPI and LGPRFOOD to the specification, the coefficient of the lagged residuals becomes 0.81; the standard error, R², and hence nR² are given in the output. [Regression output: ELGFOOD on C, LGDPI, LGPRFOOD, ELGFOOD(-1); 44 observations; numerical values not preserved.] TESTS FOR AUTOCORRELATION

61
60 (Note that here n = 44. There are 45 observations in the regression in Table 12.1, and one fewer in the residuals regression.) The critical value of chi-squared with one degree of freedom at the 0.1 percent level is 10.83. TESTS FOR AUTOCORRELATION

62
61 Technical note for EViews users: one can perform the test simply by following the LGFOOD regression with the command auto(1). EViews allows itself to use resid directly. [Breusch–Godfrey Serial Correlation LM Test output: test equation regresses RESID on C, LGDPI, LGPRFOOD, RESID(-1); presample missing value of the lagged residual set to zero; numerical values not preserved.] TESTS FOR AUTOCORRELATION

63
62 The argument of the auto command gives the order of autocorrelation being tested. At the moment we are concerned only with first-order autocorrelation, which is why the command is auto(1). TESTS FOR AUTOCORRELATION

64
63 When we performed the test, resid(–1), and hence ELGFOOD(–1), were not defined for the first observation in the sample, so we had 44 observations, from 1960 to 2003. TESTS FOR AUTOCORRELATION

65
64 EViews uses the first observation by assigning a value of zero to resid(–1) for that observation. Hence the test results are very slightly different. TESTS FOR AUTOCORRELATION

66
65 We can also perform the test with a t test on the coefficient of the lagged residual. [Regression output: ELGFOOD on C, LGDPI, LGPRFOOD, ELGFOOD(-1); 44 observations; numerical values not preserved.] TESTS FOR AUTOCORRELATION

67
66 Here is the corresponding output using the auto command built into EViews. The test is presented as an F statistic. Of course, when there is only one lagged residual, the F statistic is the square of the t statistic. [Breusch–Godfrey Serial Correlation LM Test output; numerical values not preserved.] TESTS FOR AUTOCORRELATION

68
67 The Durbin–Watson statistic is given in the regression output; d_L is 1.24 for a 1 percent significance test (2 explanatory variables, 45 observations). [Regression output: LGFOOD on C, LGDPI, LGPRFOOD; 45 observations; numerical values not preserved.] TESTS FOR AUTOCORRELATION d_L = 1.24 (1% level, 2 explanatory variables, 45 observations)

69
68 The Breusch–Godfrey test for higher-order autocorrelation is a straightforward extension of the first-order test. If we are testing for order q, we add q lagged residuals to the right side of the residuals regression. We will perform the test for second-order autocorrelation. TESTS FOR AUTOCORRELATION

70
69 Here is the regression for ELGFOOD with two lagged residuals. The Breusch–Godfrey test statistic nR² is computed from the output. With two lagged residuals, the test statistic has a chi-squared distribution with two degrees of freedom under the null hypothesis. It is significant at the 0.1 percent level. [Regression output: ELGFOOD on C, LGDPI, LGPRFOOD, ELGFOOD(-1), ELGFOOD(-2); 43 observations after adjusting endpoints; numerical values not preserved.] TESTS FOR AUTOCORRELATION

71
70 We will also perform an F test, comparing the RSS for this regression with the RSS for the same regression without the lagged residuals. We know what the result will be, because one of the t statistics is very high. TESTS FOR AUTOCORRELATION

72
71 Here is the regression for ELGFOOD without the lagged residuals. Note that the sample period has been adjusted to 1961 to 2003, to make the RSS comparable with that for the previous regression. [Regression output: ELGFOOD on C, LGDPI, LGPRFOOD; 43 observations; numerical values not preserved.] TESTS FOR AUTOCORRELATION

72 The F statistic is significant at the 1 percent level. The critical value is tabulated for F(2,35); that for F(2,38) must be slightly lower. [EViews output repeated from the previous slide: ELGFOOD regressed on C, LGDPI, and LGPRFOOD; 43 observations.]
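The F test compares the residual sums of squares with and without the lagged residuals. The arithmetic can be sketched as follows; the two RSS values below are hypothetical placeholders (the slide's actual EViews numbers are not reproduced in this transcript), chosen only to show the computation.

```python
# F test of the joint significance of the two lagged residuals:
# F = ((RSS_r - RSS_u) / q) / (RSS_u / (n - k))
from scipy import stats

rss_restricted = 0.0310    # hypothetical: regression without the lagged residuals
rss_unrestricted = 0.0152  # hypothetical: regression with ELGFOOD(-1), ELGFOOD(-2)
n = 43   # observations after adjusting endpoints
k = 5    # parameters in the unrestricted regression (C, LGDPI, LGPRFOOD, two lags)
q = 2    # number of restrictions (the two lagged residuals)

f_stat = ((rss_restricted - rss_unrestricted) / q) / (rss_unrestricted / (n - k))
p_value = stats.f.sf(f_stat, q, n - k)   # upper-tail probability for F(2, 38)
print(f_stat, p_value)
```

With n = 43 and k = 5 the denominator degrees of freedom are 38, matching the F(2,38) distribution cited on the slide.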

73 Here is the output using the auto(2) command in EViews. The conclusions of the two tests are the same. [EViews output: Breusch–Godfrey Serial Correlation LM Test, reporting both the F-statistic and Obs*R-squared versions with their probabilities. The test equation regresses RESID on C, LGDPI, LGPRFOOD, RESID(-1), and RESID(-2), with presample missing lagged residuals set to zero.]

74 The table gives the result of a parallel logarithmic regression with the addition of lagged expenditure on food as an explanatory variable. Again, there is strong evidence that the specification is subject to autocorrelation. [EViews output: LGFOOD regressed on C, LGDPI, LGPRFOOD, and LGFOOD(-1) by least squares; 44 observations after adjustments.]

75 Here is a plot of the residuals. [Figure: residuals from the ADL(1,0) logarithmic regression for FOOD]

76 A simple regression of the residuals on the lagged residuals yields a coefficient of 0.43. We expect the estimate to be adversely affected by the presence of the lagged dependent variable in the regression for LGFOOD. [EViews output: ELGFOOD regressed on ELGFOOD(-1), with no intercept; 43 observations after adjusting endpoints.]

77 With an intercept, LGDPI, LGPRFOOD, and LGFOOD(–1) added to the specification, the coefficient of the lagged residuals becomes 0.60, and nR² is 10.62, not quite significant at the 0.1 percent level. (Note that here n = 43.) [EViews output: ELGFOOD regressed on C, LGDPI, LGPRFOOD, LGFOOD(-1), and ELGFOOD(-1); 43 observations.]

78 From the Durbin–Watson statistic d, one obtains an estimate of rho as 1 – 0.5d. Combining this with the sample size n and the standard error s_b of the coefficient of the lagged dependent variable yields the h statistic, h = rho-hat * sqrt(n / (1 – n s_b^2)). [EViews output: LGFOOD regressed on C, LGDPI, LGPRFOOD, and LGFOOD(-1); 44 observations after adjustments.]
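The arithmetic for Durbin's h can be sketched as follows. The slide's actual Durbin–Watson statistic and standard error are not reproduced in this transcript, so the numbers below are hypothetical placeholders that enter the formula in the same way.

```python
# Durbin's h statistic: h = rho_hat * sqrt(n / (1 - n * s_b^2)),
# where rho_hat = 1 - d/2 and s_b is the standard error of the
# coefficient of the lagged dependent variable
import math

d = 1.10          # hypothetical Durbin-Watson statistic
se_lagged = 0.08  # hypothetical s.e. of the lagged dependent variable's coefficient
n = 44            # observations

rho_hat = 1 - 0.5 * d
# The statistic is defined only when n * s_b^2 < 1
h = rho_hat * math.sqrt(n / (1 - n * se_lagged ** 2))
print(rho_hat, h)
```

Note the requirement that n times the squared standard error be less than one; otherwise the square root is undefined and the h test cannot be used.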

79 Under the null hypothesis of no autocorrelation, the h statistic asymptotically has a standardized normal distribution, and here its value is above the critical value at the 0.1 percent level. [EViews output repeated from the previous slide: LGFOOD regressed on C, LGDPI, LGPRFOOD, and LGFOOD(-1); 44 observations.]

Copyright Christopher Dougherty These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 12.2 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre Individuals studying econometrics on their own and who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics or the University of London International Programmes distance learning course 20 Elements of Econometrics
