Serial Correlation and Heteroskedasticity in Time Series Regressions

Properties of OLS with serially correlated errors
- OLS remains unbiased and consistent even if the errors are serially correlated
- The interpretation of R-squared is likewise unaffected by serial correlation
- However, the usual OLS standard errors and test statistics are invalid when there is serial correlation
- OLS is also no longer efficient when there is serial correlation

Serial correlation and the presence of lagged dependent variables
- Is OLS inconsistent if there is serial correlation and the regressors include lagged dependent variables? No: including enough lags so that TS.3' (contemporaneous exogeneity) holds guarantees consistency
- Including too few lags causes an omitted variable problem and serial correlation, because the omitted lagged dependent variables end up in the error term

Testing for serial correlation
Testing for AR(1) serial correlation with strictly exogenous regressors
- AR(1) model for the errors (with an i.i.d. series e_t): u_t = ρ·u_{t−1} + e_t
- Replace the true, unobserved errors by the estimated OLS residuals: regress û_t on û_{t−1} for t = 2, …, n
- Use the t-statistic on ρ̂ to test H0: ρ = 0; a significant ρ̂ rejects the null hypothesis of no serial correlation
- Example: in the static Phillips curve (see above), the null hypothesis of no serial correlation is rejected
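The two-step residual regression can be sketched in plain numpy. This is a minimal illustration on simulated data; the data-generating process and all variable names are invented for the example, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulate y_t = 1 + 0.5*x_t + u_t with AR(1) errors u_t = 0.6*u_{t-1} + e_t
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + u

# Step 1: OLS of y on x, save the residuals
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b

# Step 2: regress the residual on its own lag and t-test H0: rho = 0
r, r_lag = resid[1:], resid[:-1]
Z = np.column_stack([np.ones(n - 1), r_lag])
g_hat, ssr = np.linalg.lstsq(Z, r, rcond=None)[:2]
rho_hat = g_hat[1]
sigma2 = ssr[0] / (len(r) - 2)
se_rho = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[1, 1])
t_stat = rho_hat / se_rho   # a large |t| rejects "no serial correlation"
```

With ρ = 0.6 in the simulation, the t-statistic comes out large and the null of no serial correlation is clearly rejected.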

The Durbin-Watson test under classical assumptions
- Under assumptions TS.1–TS.6, the Durbin-Watson test is an exact test (whereas the previous t-test is only valid asymptotically)
- DW = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t² ≈ 2(1 − ρ̂), so the test of H0: ρ = 0 vs. H1: ρ > 0 looks for values well below 2
- Unfortunately, the Durbin-Watson test works with a lower bound d_L and an upper bound d_U for the critical value: reject if DW < d_L, fail to reject if DW > d_U; in the area between the bounds the test result is inconclusive
- Example: for the static Phillips curve (see above), the null hypothesis of no serial correlation is rejected
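The statistic itself is a one-liner; a numpy sketch on an invented AR(1) data-generating process:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150

# Regression with AR(1) errors (rho = 0.5); the DGP is invented for illustration
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 2.0 - 0.4 * x + u

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Durbin-Watson statistic: sum of squared first differences of the
# residuals over the residual sum of squares; DW is roughly 2*(1 - rho_hat)
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
# Values well below 2 point to positive serial correlation; the formal
# decision still needs the tabulated bounds d_L and d_U for the given n and k.
```

With ρ = 0.5 the statistic lands near 1, well below 2.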

Testing for AR(1) serial correlation with general regressors
- The t-test for autocorrelation can easily be generalized to allow for explanatory variables that are not strictly exogenous: regress û_t on x_t1, …, x_tk and û_{t−1}, and use the t-statistic on the lagged residual
- The test may be carried out in a heteroskedasticity-robust way
General Breusch-Godfrey test for AR(q) serial correlation
- Regress û_t on x_t1, …, x_tk and û_{t−1}, …, û_{t−q}, and test H0: ρ_1 = … = ρ_q = 0 with an F-test or the LM statistic (n − q)·R², which is asymptotically χ² with q degrees of freedom
- Because the regressors are kept in the auxiliary regression, the test allows for the possibility that the strict exogeneity assumption is violated
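The Breusch-Godfrey auxiliary regression can be sketched as follows (simulated data; the DGP and names are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 300, 2   # test for AR(q) serial correlation with q = 2

# Simulated model with AR(1) errors
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 1.0 + x + u

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Auxiliary regression: residual on the original regressors AND q lagged
# residuals (keeping the regressors is what allows non-strict exogeneity)
lags = np.column_stack([resid[q - 1 - j:n - 1 - j] for j in range(q)])
aux_y = resid[q:]
aux_X = np.column_stack([X[q:], lags])
coef, ssr = np.linalg.lstsq(aux_X, aux_y, rcond=None)[:2]
r2 = 1.0 - ssr[0] / np.sum((aux_y - aux_y.mean()) ** 2)

# LM statistic: (n - q) * R^2 is asymptotically chi-squared(q) under H0
lm = (n - q) * r2
```

Here the LM statistic far exceeds the χ²(2) critical value of about 5.99, as it should for data with AR(1) errors.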

Correcting for serial correlation with strictly exogenous regressors
- Under the assumption of AR(1) errors, one can transform the model so that it satisfies all Gauss-Markov assumptions; for the transformed model, OLS is BLUE
- Simple case of a regression with only one explanatory variable (the general case works analogously): y_t = β_0 + β_1·x_t + u_t with u_t = ρ·u_{t−1} + e_t
- Lag the equation, multiply by ρ, and subtract: y_t − ρ·y_{t−1} = β_0(1 − ρ) + β_1(x_t − ρ·x_{t−1}) + e_t; the transformed error e_t satisfies the Gauss-Markov assumptions
- Problem: the AR(1) coefficient ρ is not known and has to be estimated

Correcting for serial correlation (cont.)
- Replacing the unknown ρ by its estimate ρ̂ leads to an FGLS estimator. There are two variants: Cochrane-Orcutt estimation omits the first observation, while Prais-Winsten estimation adds a transformed first observation
- In smaller samples, Prais-Winsten estimation should be more efficient
Comparing OLS and FGLS under autocorrelation
- For consistency of FGLS, more than TS.3' is needed (e.g. strict exogeneity TS.3), because the transformed regressors include variables from different periods
- If OLS and FGLS differ dramatically, this might indicate a violation of TS.3
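The Cochrane-Orcutt iteration can be sketched in a few lines of numpy (simulated data; the DGP, the fixed 10 iterations, and all names are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho_true = 400, 0.7

# Simulated model y_t = 1 + 2*x_t + u_t with AR(1) errors
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

def ols(A, z):
    return np.linalg.lstsq(A, z, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
b = ols(X, y)   # initial OLS estimates

# Cochrane-Orcutt: alternate between estimating rho from the residuals
# and re-fitting the quasi-differenced model (first observation dropped)
for _ in range(10):
    resid = y - X @ b
    rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    y_star = y[1:] - rho * y[:-1]
    X_star = X[1:] - rho * X[:-1]   # intercept column becomes (1 - rho)
    b = ols(X_star, y_star)         # b[0] still estimates beta_0 directly
```

Prais-Winsten would instead keep a transformed first observation, scaling (y_1, x_1) by √(1 − ρ̂²), rather than dropping it.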

Serial correlation-robust inference after OLS
- In the presence of serial correlation, the usual OLS standard errors overstate statistical significance because there is less independent variation in the data than they assume
- One can compute serial correlation-robust standard errors after OLS. This is useful because FGLS requires strict exogeneity and assumes a very specific form of serial correlation (AR(1) or, more generally, AR(q))
- Serial correlation-robust standard errors are obtained by normalizing the usual OLS standard errors and then "inflating" them by a correction factor; serial correlation-robust F- and t-tests are also available

Correction factor for serial correlation (Newey-West formula)
- se(β̂_j) = ["se(β̂_j)"/σ̂]²·√ν̂, where "se(β̂_j)" is the usual OLS standard error and
  ν̂ = Σ_{t=1}^{n} â_t² + 2·Σ_{h=1}^{g} [1 − h/(g+1)]·(Σ_{t=h+1}^{n} â_t·â_{t−h})
- Here â_t = r̂_t·û_t is the product of the OLS residuals û_t and the residuals r̂_t of a regression of x_tj on all other explanatory variables
- The integer g controls how much serial correlation is allowed: g = 1 gives the first-order term the weight 1/2; g = 2 gives weights 2/3 and 1/3. The weight of higher-order autocovariances is declining

Discussion of serial correlation-robust standard errors
- The formulas are also robust to heteroskedasticity; they are therefore called "heteroskedasticity and autocorrelation consistent" (HAC) standard errors
- For the integer g, values such as g = 2 or g = 3 are normally sufficient (there are more involved rules of thumb for how to choose g)
- Serial correlation-robust standard errors are only valid asymptotically; they may be severely biased if the sample size is not large enough
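The same correction can be written in the equivalent matrix ("sandwich") form, which produces HAC standard errors for all coefficients at once. A numpy sketch with an invented DGP (g is the lag truncation parameter):

```python
import numpy as np

rng = np.random.default_rng(4)
n, g = 500, 3   # g: how many autocovariance lags enter the correction

x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 1.0 + x + u

X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b

# Newey-West "meat": Gamma_0 plus Bartlett-weighted autocovariance terms
Xu = X * resid[:, None]           # rows are x_t * u_t
S = Xu.T @ Xu                     # Gamma_0
for h in range(1, g + 1):
    w = 1.0 - h / (g + 1.0)       # Bartlett kernel weight, declining in h
    G = Xu[h:].T @ Xu[:-h]
    S += w * (G + G.T)

XtXinv = np.linalg.inv(X.T @ X)
V_hac = XtXinv @ S @ XtXinv       # HAC sandwich covariance matrix
se_hac = np.sqrt(np.diag(V_hac))

# Usual (non-robust) OLS standard errors for comparison
sigma2 = resid @ resid / (n - X.shape[1])
se_ols = np.sqrt(np.diag(sigma2 * XtXinv))
```

With positively autocorrelated errors, the HAC standard error of the intercept is noticeably larger than the naive OLS one, reflecting the "inflation" described above.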

- The bias is larger the more autocorrelation there is; if the series are highly correlated, it might be a good idea to difference them first
- Serial correlation-robust standard errors should be used if there is serial correlation and strict exogeneity fails (e.g. in the presence of lagged dependent variables)

Heteroskedasticity in time series regressions
- Heteroskedasticity usually receives less attention than serial correlation in time series contexts
- Heteroskedasticity-robust standard errors also work for time series data
- Heteroskedasticity is automatically corrected for if one uses the serial correlation-robust formulas for standard errors and test statistics

Testing for heteroskedasticity
- The usual heteroskedasticity tests assume the absence of serial correlation
- Before testing for heteroskedasticity, one should therefore test for serial correlation first, using a heteroskedasticity-robust test if necessary
- After serial correlation has been corrected for, test for heteroskedasticity
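A Breusch-Pagan-style check along these lines can be sketched as follows (serially uncorrelated errors by construction, so the test's assumption holds; the DGP and names are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Errors with no serial correlation but variance depending on x_t
x = rng.normal(size=n)
u = rng.normal(size=n) * np.sqrt(np.exp(x))   # Var(u_t | x_t) = exp(x_t)
y = 1.0 + x + u

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Breusch-Pagan: regress the squared residuals on the regressors;
# LM = n * R^2 is asymptotically chi-squared(k) under homoskedasticity
u2 = resid ** 2
coef, ssr = np.linalg.lstsq(X, u2, rcond=None)[:2]
r2 = 1.0 - ssr[0] / np.sum((u2 - u2.mean()) ** 2)
lm = n * r2
```

Here the LM statistic exceeds the χ²(1) critical value of about 3.84 by a wide margin, flagging the heteroskedasticity built into the simulation.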

Example: serial correlation and homoskedasticity in the efficient markets hypothesis (EMH)
- Test equation for the EMH: return_t = β_0 + β_1·return_{t−1} + u_t
- Test for serial correlation: no evidence of serial correlation
- Test for heteroskedasticity: strong evidence of heteroskedasticity
- Note: volatility is higher when returns are low

Autoregressive conditional heteroskedasticity (ARCH)
- Even if there is no heteroskedasticity in the usual sense (the error variance depending on the explanatory variables), there may be heteroskedasticity in the sense that the error variance depends on how volatile the time series was in previous periods
- ARCH(1) model: E(u_t² | u_{t−1}, u_{t−2}, …) = α_0 + α_1·u_{t−1}²
Consequences of ARCH in static and distributed lag models
- If there are no lagged dependent variables among the regressors, i.e. in static or distributed lag models, OLS remains BLUE under TS.1–TS.5; OLS is also consistent etc. in this case under assumptions TS.1'–TS.5'
- As explained, in this case the homoskedasticity assumption TS.4 still holds under ARCH

Consequences of ARCH in dynamic models
- In dynamic models, i.e. models including lagged dependent variables, the homoskedasticity assumption TS.4 will necessarily be violated under ARCH: the error variance then indirectly depends on the explanatory variables (the lagged dependent variables)
- In this case, heteroskedasticity-robust standard errors and test statistics should be computed, or an FGLS/WLS procedure should be applied
- Using an FGLS/WLS procedure will also increase efficiency, because OLS is no longer efficient when the errors are heteroskedastic

Example: testing for ARCH effects in stock returns
- Are there ARCH effects in the errors of the EMH test equation?
- Estimating equation for the ARCH(1) model: û_t² = α̂_0 + α̂_1·û_{t−1}² + error
- There are statistically significant ARCH effects: if returns were particularly high or low (so that squared returns were high), they tend to be particularly high or low again, i.e. high volatility is followed by high volatility
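The ARCH(1) test regression can be sketched in numpy (simulated ARCH errors rather than real stock returns; parameter values and names are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(6)
n, a0, a1 = 2000, 0.5, 0.3

# Simulate ARCH(1) errors: u_t = sigma_t * z_t, sigma_t^2 = a0 + a1*u_{t-1}^2
u = np.zeros(n)
for t in range(1, n):
    u[t] = np.sqrt(a0 + a1 * u[t - 1] ** 2) * rng.normal()

# ARCH test: regress squared residuals on lagged squared residuals
u2, u2_lag = u[1:] ** 2, u[:-1] ** 2
Z = np.column_stack([np.ones(n - 1), u2_lag])
coef, ssr = np.linalg.lstsq(Z, u2, rcond=None)[:2]
a1_hat = coef[1]                  # estimate of the ARCH(1) coefficient
r2 = 1.0 - ssr[0] / np.sum((u2 - u2.mean()) ** 2)
lm = (n - 1) * r2                 # large values reject "no ARCH effects"
```

A significantly positive coefficient on the lagged squared residual is exactly the "high volatility follows high volatility" pattern described above.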

A FGLS procedure for serial correlation and heteroskedasticity
- Given or estimated model for heteroskedasticity: Var(u_t | x_t) = σ²·h_t, so that dividing the equation by √h_t makes the error homoskedastic
- Model for serial correlation: the rescaled error follows an AR(1) process
- Estimate the transformed model by Cochrane-Orcutt or Prais-Winsten techniques (because of the serial correlation in the transformed error term)