MF-852 Financial Econometrics, Lecture 10: Serial Correlation and Heteroscedasticity. Roy J. Epstein, Fall 2003.


Slide 2: Topics
- Serial correlation: what is it? Effect on hypothesis tests. Testing and correcting for serial correlation.
- Heteroscedasticity: ditto.
- ARCH (or how to win a Nobel prize).

Slide 3: Serial Correlation
The error terms in the regression should be independent, i.e., E(e_i e_j) = 0 for all i ≠ j. If this assumption fails, the errors are serially correlated. This is only a problem for time-series data.

Slide 4: Serial Correlation: Possible Causes
- Omitted variables.
- Wrong functional form.
- "Inertia" in economic data: the error term is composed of many small effects, each with a similar trend.

Slide 5: Correlated Error Terms
Suppose E(e_t e_{t-1}) ≠ 0. This implies that neighboring observations are correlated, not independent: a 1st-order process, the most common form of serial correlation. Suppose instead E(e_t e_{t-4}) ≠ 0: a 4th-order process, which often occurs with quarterly data.

Slide 6: Graph of Residuals from a Regression (figure)

Slide 7: Importance of Serial Correlation
The regression coefficients (the marginal effects) are unbiased, BUT their standard errors are biased. The bias generally understates the standard errors, so significance tests are biased against H_0: H_0 is rejected too often.

Slide 8: Bias in Standard Errors
Standard errors for the coefficients depend on the estimated variance of the error term, s²_e. The regression program assumes independent errors with mean 0, so it calculates s²_e as the sum of squared residuals divided by the degrees of freedom (observations minus estimated coefficients).

Slide 9: Why the Standard Errors Are Biased
The calculation ignores the covariance between the errors when they are NOT independent. That covariance, when it exists, is usually positive, so s²_e is understated and the standard errors are biased downward.
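A small Monte Carlo sketch of this point, using only numpy (the parameter values, the trending regressor, and the AR(1) error with ρ = 0.8 are my illustrative assumptions, not from the slides): with positively autocorrelated errors, the usual OLS formula understates the true sampling variability of the slope estimate.

```python
import numpy as np

# Monte Carlo: compare the true spread of the OLS slope estimate across
# replications with the standard error the usual OLS formula reports.
rng = np.random.default_rng(6)
n, rho, reps = 100, 0.8, 2000
x = np.linspace(0, 1, n)            # slowly trending regressor (assumption)
X = np.column_stack([np.ones(n), x])

slopes, reported_se = [], []
for _ in range(reps):
    e = np.zeros(n)
    for t in range(1, n):           # AR(1) errors: e_t = rho*e_{t-1} + u_t
        e[t] = rho * e[t - 1] + rng.standard_normal()
    y = 1.0 + 2.0 * x + e
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)    # the variance the program calculates
    cov = s2 * np.linalg.inv(X.T @ X)
    slopes.append(coef[1])
    reported_se.append(np.sqrt(cov[1, 1]))

# The true spread of the slope estimates exceeds the average reported SE.
print(np.std(slopes), np.mean(reported_se))
```

In this setup the true standard deviation of the slope estimates is several times the standard error the OLS formula reports, which is exactly why H_0 gets rejected too often.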

Slide 10: Testing for Serial Correlation
The most common test is the Durbin-Watson statistic, used only for 1st-order serial correlation. It is calculated from the residuals as DW = Σ_{t=2..n} (e_t − e_{t−1})² / Σ_{t=1..n} e_t².

Slide 11: Durbin-Watson Statistic
When the covariance between neighboring observations is zero, DW should be close to 2. High positive covariance drives DW toward 0. H_0 of no 1st-order serial correlation: DW = 2. Look up critical values in the table (RR, p. 592). See the sample regression in the xls file.
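The statistic is easy to compute directly from the residuals. A minimal numpy sketch (the sample sizes and the ρ = 0.9 used for the correlated case are illustrative assumptions):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: sum of squared first differences of the
    residuals divided by their sum of squares."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(0)

# Independent errors: DW should come out near 2.
e_indep = rng.standard_normal(1000)
print(round(durbin_watson(e_indep), 2))

# Strongly positively autocorrelated errors: DW falls toward 0.
e_ar = np.zeros(1000)
for t in range(1, 1000):
    e_ar[t] = 0.9 * e_ar[t - 1] + rng.standard_normal()
print(round(durbin_watson(e_ar), 2))
```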

Slide 12: Model with Serial Correlation
Y_t = β_0 + β_1 X_t + e_t. Suppose e_t = ρe_{t−1} + u_t, where u_t is another error with mean 0 that is serially independent and uncorrelated with e and X. We require −1 < ρ < 1 (otherwise the process is explosive). u_t is called the innovation in e because it is the new component of e each period. The errors are serially correlated: E(e_t e_{t−1}) = ρ var(e_t).
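A quick numpy check of the last relation on the slide (ρ = 0.7 and the sample size are illustrative assumptions): simulate e_t = ρe_{t−1} + u_t and verify that the lag-1 covariance is approximately ρ var(e_t).

```python
import numpy as np

# Simulate an AR(1) error e_t = rho*e_{t-1} + u_t with iid N(0,1) innovations.
rng = np.random.default_rng(1)
rho, n = 0.7, 200_000
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.standard_normal()

# E(e_t e_{t-1}) = rho * var(e_t), so this ratio should be close to rho.
cov_lag1 = np.cov(e[1:], e[:-1])[0, 1]
print(cov_lag1 / np.var(e))
```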

Slide 13: How to Find ρ
Estimate it as ρ̂ = 1 − DW/2. We can do this in Excel. Fancier procedures exist: Cochrane-Orcutt, Hildreth-Lu, and others. A good regression program will calculate ρ automatically.
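The slide's rough estimator is a one-liner once DW is in hand. A small sketch, assuming AR(1) residuals with a true ρ of 0.6 (my illustrative choice):

```python
import numpy as np

def rho_from_dw(resid):
    """Rough estimate of the 1st-order serial correlation: rho ~ 1 - DW/2."""
    resid = np.asarray(resid, dtype=float)
    dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
    return 1.0 - dw / 2.0

# Simulate residuals with true rho = 0.6 and recover it approximately.
rng = np.random.default_rng(5)
e = np.zeros(20_000)
for t in range(1, e.size):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
print(round(rho_from_dw(e), 2))
```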

Slide 14: Fixing Serial Correlation
Suppose ρ is known. Then "difference" the model:
Y_t − ρY_{t−1} = β_0(1−ρ) + β_1(X_t − ρX_{t−1}) + (e_t − ρe_{t−1})
or, since e_t − ρe_{t−1} = u_t,
Y_t − ρY_{t−1} = β_0(1−ρ) + β_1(X_t − ρX_{t−1}) + u_t.
u_t is a "well behaved" error, so the differenced model yields unbiased coefficients and unbiased standard errors. See example.
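A numpy sketch of the differencing fix, assuming ρ is known (the parameter values b0 = 1, b1 = 2, ρ = 0.8 are illustrative): regress Y_t − ρY_{t−1} on X_t − ρX_{t−1} and recover β_1 directly and β_0 from the transformed intercept β_0(1−ρ).

```python
import numpy as np

# Generate data with AR(1) errors, then estimate the quasi-differenced model.
rng = np.random.default_rng(2)
n, b0, b1, rho = 5000, 1.0, 2.0, 0.8
x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.standard_normal()
y = b0 + b1 * x + e

# Difference the model: the transformed error is the innovation u_t.
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]
X = np.column_stack([np.ones(n - 1), x_star])
coef, *_ = np.linalg.lstsq(X, y_star, rcond=None)

# coef[0] estimates b0*(1 - rho); coef[1] estimates b1.
print(coef[0] / (1 - rho), coef[1])
```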

Slide 15: Heteroscedasticity
Strange name! It is Greek for "different variances." It violates the last assumption about the residual: the same variance for each error term. It can occur with any kind of data.

Slide 16: Heteroscedasticity: Possible Causes
- Wrong functional form.
- var(e) correlated with an included X variable on the right side of the regression: cov(var(e), X) ≠ 0, NOT cov(e, X) ≠ 0.

Slide 17: Heteroscedasticity: Importance
The regression coefficients (the marginal effects) are unbiased, BUT their standard errors are biased. The direction of the bias is not usually known, so confidence levels, p-values, and t statistics are not reliable.

Slide 18: Model with Heteroscedasticity
Y_t = β_0 + β_1 X_t + e_t. Suppose var(e_t) = σ²X_t². Then var(e) is different for each observation.

Slide 19: Fixing Heteroscedasticity: Weighted Least Squares
Observations with smaller error variance are "better," so give them more weight when estimating the model. Weighted Least Squares (WLS): multiply the observations by weighting factors that equalize the variance:
(1/X_t)Y_t = (1/X_t)β_0 + (1/X_t)β_1X_t + (1/X_t)e_t
var((1/X_t)e_t) = (1/X_t²)σ²X_t² = σ².

Slide 20: Calculating WLS
Suppose the form of the heteroscedasticity is known, e.g., you need to weight by 1/X_t. You just need to create new variables:
(1/X_t)Y_t = (1/X_t)β_0 + β_1 + u_t
The intercept in WLS is β_1 and the slope on 1/X is β_0. The transformed error term is "well behaved," yielding unbiased coefficients and unbiased standard errors.
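A WLS sketch for the case var(e_t) = σ²X_t² on the slides (the parameter values and the positive, uniform X are my illustrative assumptions, chosen so the weights 1/X_t are well defined): dividing through by X_t equalizes the error variance, and the intercept and slope swap roles exactly as described.

```python
import numpy as np

# Generate heteroscedastic data: var(e_t) = sigma^2 * X_t^2.
rng = np.random.default_rng(3)
n, b0, b1, sigma = 5000, 1.0, 2.0, 0.5
x = rng.uniform(1.0, 5.0, n)                # X_t > 0 so 1/X_t exists
e = sigma * x * rng.standard_normal(n)      # error sd proportional to X_t
y = b0 + b1 * x + e

# Transformed model: Y/X = b0*(1/X) + b1 + u, with constant-variance u.
Z = np.column_stack([np.ones(n), 1.0 / x])  # intercept = b1, slope on 1/X = b0
coef, *_ = np.linalg.lstsq(Z, y / x, rcond=None)
print(coef)  # first entry estimates b1, second estimates b0
```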

Slide 21: ARCH Models
ARCH stands for AutoRegressive Conditionally Heteroscedastic: a regression model with serial correlation ("autoregressive") AND heteroscedasticity. It is used to model the volatility, i.e., variance, of returns.

Slide 22: ARCH Models
Sometimes you want to model volatility itself (e.g., it is an input to an option pricing model). Volatility can change over time, with periods of high and low volatility. ARCH describes this process.

Slide 23: Formulation of the ARCH Model
Y_t = β_0 + β_1 X_t + e_t with var(e_t) = α_0 + α_1 e²_{t−1}: a 1st-order ARCH process. One can estimate the α's and β's and perform WLS.
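A simulation sketch of a 1st-order ARCH error (the values α_0 = α_1 = 0.5 are illustrative assumptions; α_0 > 0 and 0 ≤ α_1 < 1 keep the variance positive and finite). Volatility clusters: large shocks tend to be followed by large shocks of either sign.

```python
import numpy as np

# Simulate e_t with conditional variance var(e_t) = a0 + a1 * e_{t-1}^2.
rng = np.random.default_rng(4)
a0, a1, n = 0.5, 0.5, 100_000
e = np.zeros(n)
for t in range(1, n):
    sigma_t = np.sqrt(a0 + a1 * e[t - 1] ** 2)   # conditional volatility
    e[t] = sigma_t * rng.standard_normal()

# The unconditional variance of a 1st-order ARCH process is a0 / (1 - a1).
print(np.var(e))
```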

