Lecturer Dr. Veronika Alhanaqtah


1 ECONOMETRICS. Lecturer: Dr. Veronika Alhanaqtah

2 Topic 4.3. Possible problems in multiple linear regression estimated by OLS: autocorrelation
Nature of autocorrelation
Autocorrelation of the first order
Consequences of autocorrelation
Correction of autocorrelation: robust standard errors
Detecting autocorrelation: graphical analysis of residuals; the Durbin-Watson test; the Breusch-Godfrey test

3 OLS assumptions (from Topic 2)
(1) The expected value of the residual is zero for every observation: E(ε_i) = 0.
(2) The variance of the residuals is constant (homoscedastic) for every observation: Var(ε_i) = σ².
(3) Residuals are uncorrelated across observations: Cov(ε_i, ε_j) = 0 for i ≠ j.
(4) Residuals are independent of the regressors (x).
(5) The model is linear in its parameters. This means the beta-estimators are linear with respect to y_i: each can be written as a weighted sum Σ c_i·y_i, where the weights c_i depend only on the regressors x_i, not on the dependent variable y.

4 1. Nature of autocorrelation

5 1. Nature of autocorrelation
[Scatter plot of Y against X with summer and winter observations]

6 1. Nature of autocorrelation
Example of negative autocorrelation. Negative autocorrelation means that a positive deviation is followed by a negative one, and vice versa. This situation can arise when we analyze the same relationship as above but use seasonal (winter-summer) data.

7 1. Nature of autocorrelation
Reasons for autocorrelation:
mistakes in model specification
time lags in the adjustment of economic parameters
the cobweb effect
data smoothing

8 1. Reasons for autocorrelation

9 1. Reasons for autocorrelation
Time lag. Many economic parameters move cyclically, as a consequence of undulating economic cycles. Changes do not happen immediately; they take some time, i.e. a time lag.
Cobweb effect. In many spheres of economic activity, parameters react to changes in economic conditions with a delay (time lag). For example, the supply of agricultural products reacts to price changes with a delay equal to the agricultural season. A high price for agricultural products in the previous year most likely leads to overproduction in the current year, and, as a consequence, the price decreases, and so on. In this case, the deviations of the residuals from each other are not random.
Data smoothing. Very often, data covering a long period of time are averaged over subintervals. To smooth a data set is to create an approximating function that attempts to capture the important patterns in the data while leaving out noise and other fine-scale structure.

10 1. Reasons for autocorrelation

11 1. Reasons for autocorrelation
Autocorrelation is examined in detail in: time series analysis; spatial econometrics; panel data analysis. These are almost independent disciplines.

12 2. Autocorrelation of the first order
Autocorrelation may have a very complicated structure. Model families include: AR, MA, ARMA, ARIMA, VAR, VMA, VARMA, VECM, ARCH, GARCH, EGARCH, FIGARCH, TARCH, AVARCH, ZARCH, CCC, DCC, BEKK, VEC, DLM, ...

13 2. Autocorrelation of the first order

14 2. Autocorrelation of the first order

15 2. Autocorrelation of the first order
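The formulas on slides 13-15 did not survive extraction. The standard first-order autoregressive scheme for the residuals, consistent with the discussion of the first-order correlation coefficient in the Durbin-Watson test, is:

```latex
\varepsilon_t = \rho\,\varepsilon_{t-1} + u_t, \qquad |\rho| < 1,
```

where $u_t$ is white noise and $\rho$ is the first-order autocorrelation coefficient ($\rho > 0$: positive autocorrelation; $\rho < 0$: negative autocorrelation; $\rho = 0$: no first-order autocorrelation).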

16 3. Consequences of autocorrelation
Beta-estimators are still linear and unbiased. Linearity means that the coefficient estimators are still linear with respect to y. Unbiasedness means that, on average, the estimators hit the unknown β-coefficients.
Beta-estimators remain consistent, provided the regressors do not include lagged values of the dependent variable. (A consistent estimator is one that gives a value close to the true parameter in large samples.)
Beta-estimators are not efficient. This means they no longer have the smallest variance in comparison with estimators obtained by other methods.
The estimated variance of the beta-coefficients is biased. This bias is a consequence of the following fact: the estimate of the variance not explained by the regression equation is no longer unbiased. The variance is most often underestimated, which leads to overestimated t-statistics. Consequently, if we rely on such inferences, we might mistakenly treat coefficients as significant when, in fact, they are not.
Standard errors of the beta-coefficients are inconsistent. Even when the number of observations is very large, the estimated variance of the beta-coefficients remains incorrect (biased).

17 3. Consequences of autocorrelation
As a consequence of everything above, in the presence of autocorrelation:
statistical inferences based on t- and F-statistics (which determine the significance of the beta-coefficients and of the coefficient of determination R2) are very likely incorrect, so the predictive quality of the model deteriorates;
we can still use and interpret the beta-coefficients (because the beta-estimators remain unbiased); but the standard errors are inconsistent, so we cannot construct confidence intervals for the beta-coefficients or test hypotheses about them.

18 4. Correction of autocorrelation: robust standard errors
What to do in the presence of autocorrelation? Correct the standard errors! We use a heteroskedasticity and autocorrelation consistent (HAC) covariance matrix instead of the usual one: the Newey-West estimator of the covariance matrix (1987; there are a number of later variants). This estimator can be used to improve inference from an OLS regression when the residuals exhibit heteroskedasticity or autocorrelation. It is computed using software. So, the idea is to replace the usual standard errors with robust (heteroskedasticity and autocorrelation consistent) standard errors, which are the square roots of the diagonal elements of the corresponding matrix.


21 4. Correction of autocorrelation: robust standard errors
In practice:
(1) Estimate the model as usual:
model <- lm(y ~ x + z, data = data)
(2) Compute the robust covariance matrix ("sandwich" package):
library(sandwich)
vcovHAC(model)
(3) Use the robust covariance matrix for hypothesis testing ("lmtest" package):
library(lmtest)
coeftest(model, vcov. = vcovHAC)

22 4. Correction of autocorrelation: robust standard errors
When is it advisable to use a robust covariance matrix and robust standard errors? In cases where we suspect the presence of autocorrelation but do not want to model its structure (the structure of the relationship between the residuals): for example, in time series, or in data where observations are geographically close to one another.
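A minimal end-to-end sketch of steps (1)-(3), assuming the "sandwich" and "lmtest" packages are installed; the data are simulated with AR(1) errors (the variable names and simulation settings are illustrative, not from the slides):

```r
library(sandwich)  # vcovHAC: heteroskedasticity and autocorrelation consistent covariance
library(lmtest)    # coeftest: coefficient tests with a user-supplied covariance matrix

set.seed(42)
n <- 200
x <- rnorm(n)
# AR(1) errors: each residual carries over 0.7 of the previous one
e <- as.numeric(arima.sim(list(ar = 0.7), n = n))
y <- 1 + 2 * x + e

model <- lm(y ~ x)

coeftest(model)                   # usual standard errors (typically too optimistic here)
coeftest(model, vcov. = vcovHAC)  # HAC-robust standard errors
```

The coefficient estimates are identical in both tables; only the standard errors, t-statistics, and p-values change.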

23 5. Detecting autocorrelation 5.1. Graphical analysis of residuals

24 5. Detecting autocorrelation 5.2. The Durbin-Watson test

25 5. Detecting autocorrelation 5.2. The Durbin-Watson test. Algorithm
Estimate a regression model using OLS and obtain the residuals e_t.
Compute the DW-statistic: DW = Σ_{t=2..n} (e_t − e_{t−1})² / Σ_{t=1..n} e_t².
Note: the distribution of the DW-statistic is rather sophisticated; it is impossible to tabulate exact critical values for all possible samples; only upper and lower bounds for the critical value of DW can be computed.
Statistical inference for the DW-test: if r is the sample correlation of consecutive residuals, then DW ≈ 2(1 − r), so:
DW ≈ 0: strong positive autocorrelation
DW ≈ 2: no autocorrelation
DW ≈ 4: strong negative autocorrelation
Note: first-order autocorrelation is autocorrelation between the "yesterday" and "today" observations.
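A short sketch of the algorithm in R, on simulated data with positively autocorrelated errors (the simulation settings are illustrative); the manual computation follows the DW formula, and `dwtest` from the "lmtest" package automates the test:

```r
library(lmtest)  # dwtest: Durbin-Watson test

set.seed(7)
n <- 100
x <- rnorm(n)
e <- as.numeric(arima.sim(list(ar = 0.6), n = n))  # positively autocorrelated errors
y <- 1 + 2 * x + e
model <- lm(y ~ x)

# DW-statistic computed directly from the residuals
res <- resid(model)
DW <- sum(diff(res)^2) / sum(res^2)
DW             # noticeably below 2, signalling positive autocorrelation

dwtest(model)  # same statistic plus a p-value
```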

26 5. Detecting autocorrelation 5.2. The Durbin-Watson test

27 5. Detecting autocorrelation 5.3. The Breusch-Godfrey test

28 5. Detecting autocorrelation 5.3. The Breusch-Godfrey test. Algorithm
Estimate a regression model using OLS and obtain the residuals e_t.
Estimate an auxiliary regression of e_t on the initial regressors and the lagged residuals e_{t−1}, ..., e_{t−p}, and compute R²_aux (the auxiliary coefficient of determination).
Compute the BG-statistic: BG = (n − p)·R²_aux, where n is the number of observations and p is the assumed order of autocorrelation.
Statistical inference for the BG-test: H0: ρ1 = ... = ρp = 0 (no autocorrelation). If H0 is true, then the BG-statistic has a χ²-distribution with p degrees of freedom. If BG exceeds the critical value, then H0 is rejected, so there is autocorrelation in the model.
Note: it is advisable to prefer the BG-test to the DW-test. Nowadays, analogues of the DW-test are used in spatial econometrics.
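The algorithm is automated by `bgtest` in the "lmtest" package; a sketch on simulated data (implementations differ slightly in the finite-sample scaling of the LM statistic, so the value may not match a hand computation of (n − p)·R²_aux exactly):

```r
library(lmtest)  # bgtest: Breusch-Godfrey test

set.seed(3)
n <- 150
x <- rnorm(n)
e <- as.numeric(arima.sim(list(ar = 0.5), n = n))  # AR(1) errors
y <- 1 + 2 * x + e
model <- lm(y ~ x)

bgtest(model, order = 2)  # tests H0: no autocorrelation up to order 2
```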

29 5. Detecting autocorrelation. Examples of DW- and BG-tests
Model: n = 19. Questions: Is there autocorrelation in the model? What is the order of autocorrelation (p)?
Assumption 1: autocorrelation of the first order. Assumption 2: autocorrelation of the second order.
Step 1. Autocorrelation of the 1st order: DW-test. In software we computed DW = 1.32. From DW ≈ 2(1 − r) we compute r ≈ 1 − DW/2 = 1 − 0.66 = 0.34. There is only weak correlation between the "yesterday" and "today" observations, i.e. almost no autocorrelation of the 1st order.

30 5. Detecting autocorrelation. Examples of DW- and BG-tests
Step 2. Autocorrelation of the 2nd order: BG-test. In software we estimate an auxiliary regression of the residuals on the initial regressors and the lagged residuals, and compute R²_aux; from it we compute the test statistic. If H0 is true (no autocorrelation: ρ1 = ρ2 = 0 simultaneously), the statistic has a χ²-distribution with 2 degrees of freedom; otherwise Ha holds. Find the critical point in R: qchisq(0.95, df = 2) ≈ 5.99. Statistical inference: the test statistic does not exceed the critical point, so H0 (no 2nd-order autocorrelation) is not rejected. There are too few observations to reject H0. To sum up, we do not have enough data to reject the absence of autocorrelation.
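The critical point used in the example can be reproduced directly in R:

```r
# 5% critical value of the chi-squared distribution with p = 2 degrees of freedom
crit <- qchisq(0.95, df = 2)
round(crit, 2)  # 5.99
```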

31 Autocorrelation

