1 ECON 7710, 2010 Heteroskedasticity

2 ECON 7710, 2010 10.1 Objectives: What is heteroskedasticity? What are its consequences? How is heteroskedasticity identified? How is heteroskedasticity corrected?

3 ECON 7710, 2010 10.2 Main empirical model for Unit 10: $\text{foodexp}_i = \beta_0 + \beta_1\,\text{income}_i + \varepsilon_i$. foodexp: family food expenditure; income: family income. Least squares estimates, US data (UE_Tab0301). Is this the best estimated equation?
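
A minimal sketch of how the baseline least-squares fit could be reproduced (the file name UE_Tab0301.csv and the column names foodexp and income are assumptions; adjust them to the actual data set):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names for the US food-expenditure data (UE_Tab0301)
data = pd.read_csv("UE_Tab0301.csv")
y = data["foodexp"]                      # family food expenditure
X = sm.add_constant(data["income"])      # family income plus an intercept column

ols_res = sm.OLS(y, X).fit()             # ordinary least squares
print(ols_res.summary())                 # coefficients, OLS standard errors, R-squared
```

The objects data, X, y, and ols_res are reused in the later sketches of this unit.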

4 ECON 7710, 2010 10.3 1. The Nature of Heteroskedasticity. In a regression across firms, the same relative mistake translates into an error of millions of dollars for a small firm but billions for a large one, so the error variance differs across observations.

5 ECON 7710, 2010 10.4 Heteroskedasticity is a problem that occurs when the error term does not have a constant variance. CLRM: Each error term comes from the same probability distribution. Assumption CLRM.5 is violated!

6 ECON 7710, 2010 10.5 Regression model: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$
Zero mean: $E(\varepsilon_i \mid X_{1i}, X_{2i}) = 0$
Homoskedasticity: $\operatorname{var}(\varepsilon_i \mid X_{1i}, X_{2i}) = \sigma^2$
No autocorrelation: $\operatorname{cov}(\varepsilon_i, \varepsilon_j \mid X_{1i}, X_{2i}, X_{1j}, X_{2j}) = 0$ for $i \neq j$

7 ECON 7710, 2010 10.6 [Figure: identical error distributions for observations i and j]

8 ECON 7710, 2010 10.7 Homoskedasticity: $Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$ with $\operatorname{var}(\varepsilon_i \mid X_i) = \sigma^2$ for all $i$. [Figure: conditional distribution of Y, with identical spread at X1, X2, X3, X4]

9 ECON 7710, 2010 10.8 Heteroskedasticity: $Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$ with $\operatorname{var}(\varepsilon_i \mid X_i) = \sigma_i^2$. [Figure: conditional distribution of Y, with spread that changes across values of X]

10 ECON 7710, 2010 10.9

11 ECON 7710, 2010 10.10

12 ECON 7710, 2010 10.11 Pure heteroskedasticity: different variances of the error term with a correctly specified PRF. Impure heteroskedasticity: different variances of the error term caused by a specification error.

13 ECON 7710, 2010 10.12 2. Detecting Heteroskedasticity. 2.1 Graphical Method: plotting foodexp against income (for one regressor). Example 1: Food expenditure, US data (UE_Tab0301).

14 ECON 7710, 2010 10.13 Example 1: Food expenditure, US data (UE_Tab0301). Plotting $e$ against income; plotting $e^2$ against income.
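
A sketch of the two residual plots, reusing ols_res and data from the earlier sketch (matplotlib is assumed to be available):

```python
import matplotlib.pyplot as plt

resid = ols_res.resid                        # OLS residuals from the fit above
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(data["income"], resid)           # e against income
ax1.set_xlabel("income"); ax1.set_ylabel("e")
ax2.scatter(data["income"], resid ** 2)      # e^2 against income
ax2.set_xlabel("income"); ax2.set_ylabel("e^2")
plt.tight_layout()
plt.show()                                   # a spread that widens with income suggests heteroskedasticity
```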

15 ECON 7710, 2010 10.14 Example 2: textbook data (Woody3).

16 ECON 7710, 2010 10.15 2.2 Park Test
Model: $Y_i = \beta_0 + \beta_1 X_{1i} + \dots + \beta_K X_{Ki} + \varepsilon_i$, $i = 1, \dots, N$ (*)
Suppose it is suspected that $\operatorname{var}(\varepsilon_i)$ depends on $Z_i$ in the form $\operatorname{var}(\varepsilon_i) = \sigma_i^2 = \sigma^2 Z_i^{\alpha_1} e^{v_i}$, so that $\ln \sigma_i^2 = \ln \sigma^2 + \alpha_1 \ln Z_i + v_i$.
$H_0$: $\alpha_1 = 0$ (homoskedastic errors); $H_A$: $\alpha_1 \neq 0$ (heteroskedastic errors).

17 ECON 7710, 2010 10.16 Step 1: Estimate equation (*) with OLS and obtain the residuals. Step 2: Regress the natural log of the squared residuals on the natural log of a possible proportionality factor: $\ln(e_i^2) = \alpha_0 + \alpha_1 \ln Z_i + v_i$, where $v_i$ is an error term satisfying all classical assumptions.

18 ECON 7710, 2010 10.17 Step 3: If the coefficient of $\ln Z$ is significantly different from zero, this suggests a heteroskedastic pattern in the residuals with respect to Z; otherwise, homoskedastic errors cannot be rejected. Example 3: Park test, US data (UE_Tab0301): $\widehat{\ln(e^2)} = -7.46 + 2.07^{**} \ln(\text{income})$, t = 2.28, p-value = 0.0284.
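
A minimal Park-test sketch following Steps 1–3, reusing ols_res and data from the earlier sketches and taking Z = income as the suspected proportionality factor (an assumption for this data set):

```python
import numpy as np
import statsmodels.api as sm

# Park test: regress ln(e^2) on ln(Z), here Z = income
log_e2 = np.log(ols_res.resid ** 2)
Z = sm.add_constant(np.log(data["income"]))
park_res = sm.OLS(log_e2, Z).fit()
print(park_res.params, park_res.tvalues, park_res.pvalues)  # a significant slope points to heteroskedasticity
```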

19 ECON 7710, 2010 10.18 Advantages of the Park test: a. The test is simple. b. It provides information about the variance structure. Limitations of the Park test: a. The distribution of the dependent variable is problematic. b. It assumes a specific functional form. c. It does not work when the variance depends on two or more variables. d. The correct variable with which to order the observations must be identified first. e. It cannot handle partitioned data.

20 ECON 7710, 2010 10.19 2.3 White's Test
Model: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$, $i = 1, \dots, N$ (*)
Suppose heteroskedasticity is suspected but we are not sure of its functional form.
$H_0$: the conditional variance of $\varepsilon_i$ is constant. $H_A$: the conditional variance of $\varepsilon_i$ is not constant.

21 ECON 7710, 2010 10.20 Step 1: Estimate equation (*) with OLS and obtain the residuals. Step 2: Regress the squared residuals on all explanatory variables, all cross-product terms, and the square of each explanatory variable: $e_i^2 = \alpha_0 + \alpha_1 X_{1i} + \alpha_2 X_{2i} + \alpha_3 X_{1i}^2 + \alpha_4 X_{2i}^2 + \alpha_5 X_{1i} X_{2i} + v_i$

22 ECON 7710, 2010 10.21 Step 3: Test the overall significance of the equation in Step 2 (df = number of regressors). The statistic is $NR^2 \sim \chi^2_{df}$ and the critical value is $cv = \chi^2_{df,\alpha}$; reject the hypothesis of homoskedasticity if $NR^2 > cv$. Example 4: White test, US data (UE_Tab0301): $\hat{e}^2 = 1924 - 7.4\,\text{income} + 0.0088^{*}\,\text{income}^2$, $R^2 = 0.3646$, $N = 40$, $NR^2 = 14.58$, $cv = \chi^2_{(2,\,0.01)} = 9.21$.
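
A sketch of the White test for the one-regressor food-expenditure model, again reusing ols_res and data (statsmodels also ships a ready-made version, het_white in statsmodels.stats.diagnostic):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Auxiliary regression of e^2 on income and income^2 (no cross terms with a single regressor)
e2 = ols_res.resid ** 2
aux_X = sm.add_constant(np.column_stack([data["income"], data["income"] ** 2]))
aux_res = sm.OLS(e2, aux_X).fit()

NR2 = len(e2) * aux_res.rsquared          # test statistic N * R^2 of the auxiliary regression
cv = stats.chi2.ppf(0.99, df=2)           # chi-square critical value, df = 2, alpha = 0.01
print(NR2, cv, NR2 > cv)                  # True => reject homoskedasticity
```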

23 ECON 7710, 2010 10.22 Advantages of the White test: a. It does not assume a specific functional form. b. It is applicable when the variance depends on two or more variables. Limitations of the White test: a. It is a large-sample test. b. It provides no information about the variance structure. c. It loses many degrees of freedom when there are many regressors. d. It cannot handle partitioned data. e. It also captures specification errors.

24 ECON 7710, 2010 10.23 3. Consequences of Heteroskedasticity If heteroskedasticity appears but OLS is used for estimation, how are the OLS estimates affected? Unaffected: OLS estimators are still linear and unbiased because, on average, overestimates are as likely as underestimates.

25 ECON 7710, 2010 10.24 3.1 OLS estimators are inefficient. Some fluctuations of the error term are attributed to the variation in independent variables. There are other linear and unbiased estimators that have smaller variances than the OLS estimator.

26 ECON 7710, 2010 10.25 3.2 Unreliable Hypothesis Testing: biased OLS standard errors lead to unreliable testing conclusions.

27 ECON 7710, 2010 10.26 4. Remedies. 4.1 Heteroskedasticity-Corrected Standard Errors
$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$, heteroskedasticity: $\operatorname{var}(\varepsilon_i) = \sigma_i^2$
OLS estimators are unbiased, but the standard errors of OLS are biased.

28 ECON 7710, 2010 10.27 A heteroskedasticity-consistent (HC) standard error is a standard error of an estimated coefficient that has been adjusted for heteroskedasticity. a. HC standard errors are consistent for any type of heteroskedasticity. b. Hypothesis tests are valid with HC standard errors in large samples. c. Typically, HC se > OLS se.
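
A sketch of obtaining HC standard errors for the food-expenditure model, reusing y and X from the earlier sketch (the choice of the HC1 variant is an assumption; statsmodels also offers HC0, HC2, and HC3):

```python
# Refit the same model with a heteroskedasticity-consistent covariance estimator
hc_res = sm.OLS(y, X).fit(cov_type="HC1")   # HC1 applies a degrees-of-freedom correction to White's estimator
print(hc_res.bse)                           # HC standard errors, typically larger than the OLS ones
```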

29 ECON 7710, 2010 10.28 Example 5: $Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$, $\operatorname{var}(\varepsilon_i \mid X_i) = \sigma_i^2$. [Figure: the incorrect (homoskedastic) variance formula versus the correct variance formula for the slope estimator]

30 ECON 7710, 2010 10.29 HC estimator of the variance of the slope coefficient in a simple regression model (in White's HC0 form): $\widehat{\operatorname{var}}(\hat{\beta}_1) = \dfrac{\sum_i (X_i - \bar{X})^2 e_i^2}{\left[\sum_i (X_i - \bar{X})^2\right]^2}$. Example 6: HC standard errors, US data (UE_Tab0301).

31 ECON 7710, 2010 10.30 4.2 Weighted Least Squares
Model: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$, with $E(\varepsilon_i) = 0$, $\operatorname{var}(\varepsilon_i) = \sigma_i^2$, and $\operatorname{cov}(\varepsilon_t, \varepsilon_s) = 0$ for $t \neq s$.
The variance is assumed to be proportional to $Z_i^2$: $\sigma_i^2 = c\,Z_i^2$.

32 ECON 7710, 2010 10.31 Step 1: Decide which variable $Z$ the error variance is proportional to. Step 2: Divide all terms in the original model by that variable (divide by $Z_i$).

33 ECON 7710, 2010 10.32 Step 3: Run least squares on the transformed model, which has new variables. Note that the transformed model has an intercept only if Z is one of the explanatory variables. For example, if $Z_i = X_{2i}$, then $\dfrac{Y_i}{X_{2i}} = \beta_2 + \beta_0 \dfrac{1}{X_{2i}} + \beta_1 \dfrac{X_{1i}}{X_{2i}} + \dfrac{\varepsilon_i}{X_{2i}}$, so $\beta_2$ becomes the intercept of the transformed model.
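
A sketch of WLS for the food-expenditure model, reusing y, X, and data, and taking Z = income as the proportionality variable (an assumption consistent with the earlier Park test):

```python
# WLS with weights 1/Z^2; this is equivalent to dividing every term of the model by Z = income
wls_res = sm.WLS(y, X, weights=1.0 / data["income"] ** 2).fit()
print(wls_res.params, wls_res.bse)
```

In statsmodels the weights are taken to be proportional to the inverse of the error variance, so a variance proportional to Z² maps to weights of 1/Z².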

34 ECON 7710, 2010 10.33 Example 7: WLS, US data (UE_Tab0301). What are the values of the estimated coefficients of the original model? Has the problem of heteroskedasticity been solved?

35 ECON 7710, 2010 10.34 Comparing different estimates: US data (UE_Tab0301)

               β0       β1
OLS estimate   40.77    0.128***
OLS se         22.14    0.031
HC se          24.32    0.039
WLS estimate   21.28    0.158***
WLS se         14.03    0.023

The WLS estimates have improved upon those of OLS.

36 ECON 7710, 2010 10.35 Other possibilities: $\operatorname{var}(\varepsilon_i) = cZ_i$; $\operatorname{var}(\varepsilon_i) = cZ_i^{\alpha}$ for some power $\alpha$; $\operatorname{var}(\varepsilon_i) = c(a_1 X_{1i} + a_2 X_{2i})$.

37 ECON 7710, 2010 10.36 In large samples, HC standard errors are consistent for any type of heteroskedasticity, so confidence intervals and t-tests are valid.

38 ECON 7710, 2010 10.37 4.3 Re-specifying the Regression Model. 4.3.1 Use another functional form, e.g., double-log (less variation); the heteroskedasticity may be impure. Example 8: US data (UE_Tab0301), where the hypothesis of constant variance can be rejected.
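
A sketch of the double-log re-specification for the food-expenditure model, reusing data from the earlier sketches:

```python
# Double-log re-specification: regress ln(foodexp) on ln(income)
log_X = sm.add_constant(np.log(data["income"]))
loglog_res = sm.OLS(np.log(data["foodexp"]), log_X).fit()
print(loglog_res.summary())   # re-run the Park or White test on these residuals to recheck the variance
```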

39 ECON 7710, 2010 10.38 Example 9: India data (Food_India55). Empirical model: $\text{foodexp}_i = \beta_0 + \beta_1\,\text{totexp}_i + \varepsilon_i$. The hypothesis of homoskedasticity can be rejected by the Park and White tests.

40 ECON 7710, 2010 10.39 [Estimation results for the double-log, HC, and WLS alternatives] Which model is the best?

41 ECON 7710, 2010 10.40 4.3.2 Other reformulations, e.g., taking averages of variables related to the size of the observed units, or adding more variables. Example 10: data set "Concert" (the US concert tour of a singer): $\text{revenue} = \beta_0 + \beta_1\,\text{adv} + \beta_2\,\text{stad} + \beta_3\,\text{cd} + \beta_4\,\text{radio} + \beta_5\,\text{weekend} + \varepsilon$.

42 ECON 7710, 2010 10.41 [Estimation results (1)–(3) for the Concert data]

43 ECON 7710, 2010 10.42 Remarks: The variable Z is difficult to identify, and the functional relationship between the error and Z is not known, so use WLS as a last resort. With the correct WLS, we expect the standard errors of the regression coefficients to be smaller than their OLS counterparts. A log transformation usually reduces the degree of heteroskedasticity. The hypothesis of homoskedasticity should not be rejected in the new model.

44 ECON 7710, 2010 10.43 5. A Complete Example. Sources: Section 8.2.2 (pp. 255–256), Section 10.5 (pp. 369–376).
Empirical regression model: $\text{pcon}_i = \beta_0 + \beta_1\,\text{reg}_i + \beta_2\,\text{tax}_i + \beta_3\,\text{uhm}_i + \varepsilon_i$.
pcon_i: petroleum consumption in the ith state; reg_i: motor vehicle registrations in the ith state ('000); tax_i: the gasoline tax rate in the ith state (cents per gallon); uhm_i: urban highway miles within the ith state.

45 ECON 7710, 2010 10.44 Equation 1: $\widehat{\text{pcon}} = 389.57^{***} - 0.061\,\text{reg} - 36.47^{***}\,\text{tax} + 60.76^{***}\,\text{uhm}$; (se, VIF): (0.04, 24.3), (13.15, 1.1), (10.26, 24.9); Adj. $R^2 = 0.9192$, $N = 50$.
Equation 2: $\widehat{\text{pcon}} = 551.69^{***} + 0.19^{***}\,\text{reg} - 53.59^{***}\,\text{tax}$; se: (0.012), (16.86); Adj. $R^2 = 0.8607$, $N = 50$.
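
A sketch of how Equation 1 and its VIFs could be reproduced (the file name pcon_states.csv and the column names pcon, reg, tax, uhm are assumptions; adjust them to the actual 50-state data set):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

gas = pd.read_csv("pcon_states.csv")                  # hypothetical file with the 50-state data
exog = sm.add_constant(gas[["reg", "tax", "uhm"]])
eq1 = sm.OLS(gas["pcon"], exog).fit()

vifs = {col: variance_inflation_factor(exog.values, i)
        for i, col in enumerate(exog.columns) if col != "const"}
print(eq1.params, vifs)                               # large VIFs for reg and uhm flag multicollinearity
```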

46 ECON 7710, 2010 10.45 Graphical investigation

47 ECON 7710, 2010 10.46 Park test: $\widehat{\ln(e^2)} = 1.65 + 0.95^{***}\,\ln(\text{REG})$, se = (0.3083), $R^2 = 0.1657$, $N = 50$.
White test: $\hat{e}^2 = 11{,}098{,}291 + 140\,\text{REG} - 0.0005\,\text{REG}^2 - 12.84\,\text{REG}\cdot\text{TAX} - 237{,}873\,\text{TAX} + 12{,}347\,\text{TAX}^2$, $R^2 = 0.6645$, $N = 50$, $NR^2 = 33.22$.
Checking for other specifications: double log, quadratic.

48 ECON 7710, 2010 10.47 $\widehat{\text{pcon}} = 551.69^{***} + 0.19^{***}\,\text{reg} - 53.59^{***}\,\text{tax}$; HC se: (0.022), (23.90); $R^2 = 0.8664$, $N = 50$. [Estimation results (4)–(6)]

49 ECON 7710, 2010 10.48 Selected Exercises Ch. 10: Q. 1, 3, 4, 5, 8, 10, 12, 14

50 ECON 7710, 2010 10.49 Regression model: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$
Zero mean: $E(\varepsilon_i \mid X_{1i}, X_{2i}) = 0$
Homoskedasticity: $\operatorname{var}(\varepsilon_i \mid X_{1i}, X_{2i}) = \sigma^2$
No autocorrelation: $\operatorname{cov}(\varepsilon_i, \varepsilon_j \mid X_{1i}, X_{2i}, X_{1j}, X_{2j}) = 0$ for $i \neq j$
Heteroskedasticity: $\operatorname{var}(\varepsilon_i \mid X_{1i}, X_{2i}) = \sigma_i^2$

51 ECON 7710, 2010 10.50 Heteroskedasticity: $Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$ with $\operatorname{var}(\varepsilon_i \mid X_i) = \sigma_i^2$ for all $i$. [Figure: conditional distribution of Y, with spread that changes at X1, X2, X3]

52 ECON 7710, 2010 10.51 Step 3: Test the overall significance of the equation in Step 2 (df = number of regressors). The statistic is $NR^2 \sim \chi^2_{df}$ and the critical value is $cv = \chi^2_{df,\alpha}$; reject the hypothesis of homoskedasticity if $NR^2 > cv$.

53 ECON 7710, 2010 10.52 Step 1: Decide which variable $Z$ the error variance is proportional to. Step 2: Divide all terms in the original model by that variable (divide by $Z_i$).

54 ECON 7710, 2010 10.53 Step 3: Run least squares on the transformed model, which has new variables. Note that the transformed model has an intercept only if Z is one of the explanatory variables. For example, if $Z_i = X_{2i}$, then $\dfrac{Y_i}{X_{2i}} = \beta_2 + \beta_0 \dfrac{1}{X_{2i}} + \beta_1 \dfrac{X_{1i}}{X_{2i}} + \dfrac{\varepsilon_i}{X_{2i}}$, so $\beta_2$ becomes the intercept of the transformed model.

55 ECON 7710, 2010 10.54 In large samples, HC standard errors are consistent for any type of heteroskedasticity, so confidence intervals and t-tests are valid.

