
1 Heteroskedasticity

2 The Nature of Heteroskedasticity

- Heteroskedasticity is a systematic pattern in the errors in which the variances of the errors are not constant.
- Ordinary least squares assumes that all observations are equally reliable (constant variance).
- For efficiency (accurate estimation / prediction), re-weight the observations to ensure equal error variance.

3 Regression Model

y_t = β1 + β2 x_t + ε_t

zero mean: E(ε_t) = 0
homoskedasticity: var(ε_t) = σ²
nonautocorrelation: cov(ε_t, ε_s) = 0, t ≠ s
heteroskedasticity: var(ε_t) = σ_t²
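The two variance assumptions can be contrasted by simulation. A minimal NumPy sketch (the sample size and the choice var(ε_t) = x_t are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
x = np.linspace(1.0, 10.0, T)

# Homoskedastic errors: var(eps_t) = sigma^2 is the same for every t
eps_homo = rng.normal(0.0, 1.0, T)

# Heteroskedastic errors: var(eps_t) = sigma_t^2 grows with x_t
eps_het = rng.normal(0.0, np.sqrt(x))

# Compare the sample variance in the first and last quarters of the data
q = T // 4
print(eps_homo[:q].var(), eps_homo[-q:].var())  # similar magnitudes
print(eps_het[:q].var(), eps_het[-q:].var())    # last quarter much larger
```

With the data sorted by x_t, the heteroskedastic errors show a clearly larger spread at high x_t, while the homoskedastic errors do not.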

4 Homoskedastic pattern of errors

[Scatter plot: consumption (y_t) against income (x_t); the spread of the points around the regression line is the same at every income level.]

5 The Homoskedastic Case

[Figure: conditional densities f(y_t) of consumption at incomes x_1, x_2, x_3, x_4; each density has the same variance around the regression line.]

6 Heteroskedastic pattern of errors

[Scatter plot: consumption (y_t) against income (x_t); the spread of the points widens as income increases.]

7 The Heteroskedastic Case

[Figure: conditional densities f(y_t) of consumption at incomes x_1, x_2, x_3; the densities spread out at higher incomes. Poor people's consumption is tightly clustered; rich people's is widely dispersed.]

8 Properties of Least Squares

1. Least squares is still linear and unbiased.
2. Least squares is NOT efficient.
3. Hence, it is no longer B.L.U.E.
4. The usual formulas give incorrect standard errors for least squares.
5. Confidence intervals and hypothesis tests based on the usual standard errors are wrong.
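Properties 1 and 4 can be checked by Monte Carlo: under heteroskedasticity the average slope estimate still hits the true value, and the correct variance formula (Σ σ_t²(x_t − x̄)² / [Σ(x_t − x̄)²]²) matches the sampling variance. A minimal NumPy sketch (model and parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
T, reps = 50, 2000
x = rng.uniform(1.0, 10.0, T)          # fixed regressors across replications
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)

b2_draws = np.empty(reps)
for i in range(reps):
    eps = rng.normal(0.0, x)           # heteroskedastic: sd of eps_t equals x_t
    y = 2.0 + 0.5 * x + eps
    b2_draws[i] = np.sum((x - xbar) * y) / Sxx

# Property 1: still unbiased -- the Monte Carlo mean is close to the true slope 0.5
print(b2_draws.mean())

# The correct variance formula matches the Monte Carlo sampling variance
true_var = np.sum(x ** 2 * (x - xbar) ** 2) / Sxx ** 2
print(b2_draws.var(), true_var)
```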

9 y_t = β1 + β2 x_t + ε_t

Heteroskedasticity: E(ε_t) = 0, var(ε_t) = σ_t², cov(ε_t, ε_s) = 0, t ≠ s

b2 = β2 + Σ w_t ε_t, where w_t = (x_t − x̄) / Σ(x_t − x̄)²  (Linear)

E(b2) = β2 + Σ w_t E(ε_t) = β2  (Unbiased)

10 y_t = β1 + β2 x_t + ε_t,  heteroskedasticity: var(ε_t) = σ_t²

Incorrect formula for the least squares variance:

var(b2) = σ² / Σ(x_t − x̄)²

Correct formula for the least squares variance:

var(b2) = Σ σ_t² (x_t − x̄)² / [Σ(x_t − x̄)²]²

11 Halbert White Standard Errors

White estimator of the least squares variance:

est.var(b2) = Σ ε̂_t² (x_t − x̄)² / [Σ(x_t − x̄)²]²

where ε̂_t² are the squares of the least squares residuals.

1. In large samples, the White standard error (the square root of the estimated variance) is a consistent measure.
2. Because the squared residuals are used to approximate the variances, White's estimator is strictly appropriate only in large samples.
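Both the usual (incorrect under heteroskedasticity) formula and the White estimator can be computed directly from a fitted regression. A minimal NumPy sketch on simulated data (the data-generating process and coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
x = np.linspace(1.0, 10.0, T)
eps = rng.normal(0.0, np.sqrt(x))      # var(eps_t) proportional to x_t
y = 2.0 + 0.5 * x + eps

# Least squares slope, intercept, and residuals
xbar = x.mean()
b2 = np.sum((x - xbar) * (y - y.mean())) / np.sum((x - xbar) ** 2)
b1 = y.mean() - b2 * xbar
resid = y - b1 - b2 * x

# Usual (homoskedastic) variance formula
var_usual = resid.var(ddof=2) / np.sum((x - xbar) ** 2)

# White estimator: squared residuals stand in for the unknown sigma_t^2
var_white = np.sum(resid ** 2 * (x - xbar) ** 2) / np.sum((x - xbar) ** 2) ** 2

print(np.sqrt(var_usual), np.sqrt(var_white))
```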

12 Two Types of Heteroskedasticity

1. Proportional heteroskedasticity (a continuous function, of x_t for example). For instance, income is less important as an explanatory variable for the food expenditure of high-income families: it is harder to guess their food expenditure.
2. Partitioned heteroskedasticity (discrete categories/groups). For instance, exchange rates were more volatile after the Asian Financial Crisis.

13 Proportional Heteroskedasticity

y_t = β1 + β2 x_t + ε_t

E(ε_t) = 0,  var(ε_t) = σ_t²,  cov(ε_t, ε_s) = 0, t ≠ s

where σ_t² = σ² x_t: the variance is assumed to be proportional to the value of x_t.

14 y_t = β1 + β2 x_t + ε_t,  var(ε_t) = σ_t²

variance: σ_t² = σ² x_t
standard deviation: σ_t = σ √x_t  (std. dev. proportional to √x_t)

To correct for heteroskedasticity, divide the model by √x_t:

y_t/√x_t = β1 (1/√x_t) + β2 (x_t/√x_t) + ε_t/√x_t

It is important to recognize that β1 and β2 are the same in both the transformed model and the untransformed model.

15 y_t/√x_t = β1 (1/√x_t) + β2 (x_t/√x_t) + ε_t/√x_t

y*_t = β1 x*_t1 + β2 x*_t2 + ε*_t

var(ε*_t) = var(ε_t/√x_t) = (1/x_t) var(ε_t) = (1/x_t) σ² x_t = σ²

ε_t is heteroskedastic, but ε*_t is homoskedastic.

16 Generalized Least Squares

These steps describe weighted least squares:

1. Decide which variable is proportional to the heteroskedasticity (x_t in the previous example).
2. Divide all terms in the original model by the square root of that variable (divide by √x_t).
3. Run least squares on the transformed model, which has new y*_t, x*_t1 and x*_t2 variables but no intercept.
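The steps above can be sketched in NumPy. The simulated data-generating process (true coefficients 2.0 and 0.5, var(ε_t) = 0.8 x_t) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
x = rng.uniform(1.0, 10.0, T)
y = 2.0 + 0.5 * x + rng.normal(0.0, np.sqrt(0.8 * x))  # var(eps_t) = 0.8 * x_t

# Step 2: divide every term in the model by sqrt(x_t)
w = 1.0 / np.sqrt(x)
y_star = y * w
X_star = np.column_stack([w, x * w])   # new regressors: 1/sqrt(x_t) and sqrt(x_t)

# Step 3: least squares on the transformed model -- no separate intercept column
b1_hat, b2_hat = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
print(b1_hat, b2_hat)
```

The coefficient on the 1/√x_t column estimates β1 and the coefficient on the √x_t column estimates β2, since β1 and β2 are unchanged by the transformation.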

17 The errors are weighted by the reciprocal of √x_t.

- When x_t is small, the data contain more information about the regression function and the observations are weighted heavily.
- When x_t is large, the data contain less information and the observations are weighted lightly.
- In this way we take advantage of the heteroskedasticity to improve parameter estimation (efficiency).

18 Partitioned Heteroskedasticity

y_t = β1 + β2 x_t + ε_t,  t = 1, ..., 100

y_t = bushels per acre of corn
x_t = gallons of water per acre (rain or other)

Error variance of field corn: var(ε_t) = σ1²,  t = 1, ..., 80
Error variance of sweet corn: var(ε_t) = σ2²,  t = 81, ..., 100

19 Re-weighting Each Group's Observations

Field corn: y_t = β1 + β2 x_t + ε_t,  var(ε_t) = σ1²

y_t/σ1 = β1 (1/σ1) + β2 (x_t/σ1) + ε_t/σ1,  t = 1, ..., 80

Sweet corn: y_t = β1 + β2 x_t + ε_t,  var(ε_t) = σ2²

y_t/σ2 = β1 (1/σ2) + β2 (x_t/σ2) + ε_t/σ2,  t = 81, ..., 100

20 y_t/σ_i = β1 (1/σ_i) + β2 (x_t/σ_i) + ε_t/σ_i,  t = 1, ..., 100

y*_t = β1 x*_t1 + β2 x*_t2 + ε*_t

var(ε*_t) = var(ε_t/σ_i) = (1/σ_i²) var(ε_t) = (1/σ_i²) σ_i² = 1

ε_t is heteroskedastic, but ε*_t is homoskedastic.

21 Apply Generalized Least Squares

Run least squares separately on the data for each group.

σ̂1² (MSE1) provides an estimator of σ1² using the 80 observations on field corn.
σ̂2² (MSE2) provides an estimator of σ2² using the 20 observations on sweet corn.
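The two-step procedure, estimate each group's variance by its MSE and then re-weight and pool, might look like this in NumPy. The corn data are simulated; the coefficients, group sizes, and σ values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T1, T2 = 80, 20                                      # field corn, sweet corn
x = rng.uniform(10.0, 40.0, T1 + T2)                 # water per acre (illustrative)
sigma1, sigma2 = 2.0, 6.0
eps = np.concatenate([rng.normal(0, sigma1, T1), rng.normal(0, sigma2, T2)])
y = 50.0 + 1.5 * x + eps

def group_mse(xg, yg):
    # OLS within one group; MSE estimates that group's error variance
    X = np.column_stack([np.ones_like(xg), xg])
    b, *_ = np.linalg.lstsq(X, yg, rcond=None)
    resid = yg - X @ b
    return resid @ resid / (len(yg) - 2)

mse1 = group_mse(x[:T1], y[:T1])
mse2 = group_mse(x[T1:], y[T1:])

# Divide each group's observations by its estimated std. dev., pool, run OLS
s = np.concatenate([np.full(T1, np.sqrt(mse1)), np.full(T2, np.sqrt(mse2))])
X_star = np.column_stack([1.0 / s, x / s])
b1_hat, b2_hat = np.linalg.lstsq(X_star, y / s, rcond=None)[0]
print(mse1, mse2, b1_hat, b2_hat)
```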

22 Detecting Heteroskedasticity

Determine the existence and nature of heteroskedasticity:

1. Residual plots provide information on the exact nature of the heteroskedasticity (partitioned or proportional) to aid in correcting for it.
2. The Goldfeld-Quandt test checks for the presence of heteroskedasticity.

23 Residual Plots

[Figure: residuals e_t plotted against x_t around a horizontal line at zero; the residuals fan out as x_t increases.]

Plot the residuals against one variable at a time, after sorting the data by that variable, to try to find a heteroskedastic pattern in the data.

24 Goldfeld-Quandt Test

The Goldfeld-Quandt test can be used to detect heteroskedasticity either in the proportional case or when comparing two groups in the discrete case. For proportional heteroskedasticity, it is first necessary to determine which variable, such as x_t, is proportional to the error variance. Then sort the data from the largest to the smallest values of that variable.

25 Goldfeld-Quandt Test Statistic

In the proportional case, drop the middle r observations, where r ≈ T/6, then run separate least squares regressions on the first T1 observations and the last T2 observations. We assume that σ1² > σ2². (If not, then reverse the subscripts.)

H0: σ1² = σ2²
H1: σ1² > σ2²

GQ = σ̂1² / σ̂2²  ~  F[T1 − K1, T2 − K2]

Small values of GQ support H0, while large values support H1.
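The test statistic can be sketched in Python using NumPy and SciPy's F distribution. The simulated data and the choices of T and r are illustrative assumptions:

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(4)
T = 120
x = rng.uniform(1.0, 10.0, T)
y = 2.0 + 0.5 * x + rng.normal(0.0, np.sqrt(x))  # variance proportional to x_t

# Sort from largest to smallest x_t, then drop the middle r ~ T/6 observations
order = np.argsort(-x)
xs, ys = x[order], y[order]
r = T // 6
T1 = (T - r) // 2
x1, y1 = xs[:T1], ys[:T1]            # large-variance subsample
x2, y2 = xs[T1 + r:], ys[T1 + r:]    # small-variance subsample

def mse(xg, yg):
    # MSE from a separate least squares regression on one subsample
    X = np.column_stack([np.ones_like(xg), xg])
    resid = yg - X @ np.linalg.lstsq(X, yg, rcond=None)[0]
    return resid @ resid / (len(yg) - 2)

GQ = mse(x1, y1) / mse(x2, y2)
p_value = f.sf(GQ, len(x1) - 2, len(x2) - 2)  # upper-tail F probability
print(GQ, p_value)
```

A small p-value rejects H0 of equal variances in favor of σ1² > σ2².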

