Problems with the Durbin-Watson test

Problems with the Durbin-Watson test
1. It assumes a specific (first-order) form of serial correlation and may fail to detect other forms.
2. It may not give a definite answer if the statistic falls in the inconclusive range.
3. It becomes unreliable when lagged endogenous variables are included as regressors in the model.
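A small simulation can make problem 3 concrete. The sketch below (assumed parameter values, not from the slides) fits two models with the same AR(1) error: one with an exogenous regressor and one with a lagged dependent variable. The Durbin-Watson statistic gives a clear signal in the first case but is pulled towards 2 in the second.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
T = 2000
eps = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + eps[t]            # AR(1) error, rho = 0.6

# Model 1: exogenous regressor with AR(1) error.
x = rng.normal(size=T)
y_static = 1.0 + 0.5 * x + u
res_static = sm.OLS(y_static, sm.add_constant(x)).fit()

# Model 2: lagged dependent variable with the same AR(1) error.
y_dyn = np.zeros(T)
for t in range(1, T):
    y_dyn[t] = 0.5 * y_dyn[t - 1] + u[t]
res_dyn = sm.OLS(y_dyn[1:], sm.add_constant(y_dyn[:-1])).fit()

print("DW, exogenous regressor      :", durbin_watson(res_static.resid))  # roughly 0.8
print("DW, lagged dependent variable:", durbin_watson(res_dyn.resid))     # pulled towards 2
```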

Durbin's h test
If the model contains a lagged endogenous variable then we can use Durbin's h test. The test statistic is defined below. In large samples it is distributed as N(0,1).
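The formula on the original slide did not transcribe; the standard form of the h statistic, with $T$ the sample size, $\hat{\rho}$ the estimated first-order autocorrelation of the OLS residuals (often approximated by $1 - DW/2$), and $\widehat{\operatorname{Var}}(\hat{\gamma})$ the estimated variance of the coefficient on the lagged dependent variable, is

$$h = \hat{\rho}\,\sqrt{\frac{T}{1 - T\,\widehat{\operatorname{Var}}(\hat{\gamma})}}$$

Note that the statistic can only be computed when $T\,\widehat{\operatorname{Var}}(\hat{\gamma}) < 1$.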

The Breusch-Godfrey test
For this test we first estimate the model by OLS and generate a set of residuals. We then estimate an auxiliary model in which we regress the residuals on the original regressors and their own lagged values. Finally, we carry out either an F-test or a Chi-squared test for the joint significance of the lagged residuals in the auxiliary equation.
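The auxiliary regression on the original slide was an image; its standard form, for a model with regressor $x_t$ and OLS residuals $\hat{u}_t$, is

$$\hat{u}_t = \alpha_0 + \alpha_1 x_t + \rho_1 \hat{u}_{t-1} + \dots + \rho_p \hat{u}_{t-p} + v_t$$

with the null hypothesis $\rho_1 = \dots = \rho_p = 0$. A minimal sketch of the test using statsmodels (simulated data and assumed parameter values, for illustration only):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Simulate a regression with AR(1) errors so the test has something to find.
rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)
eps = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + eps[t]
y = 1.0 + 0.5 * x + u

# Step 1: estimate the model by OLS and keep the residuals.
res = sm.OLS(y, sm.add_constant(x)).fit()

# Steps 2-3: the auxiliary regression and the joint test are done internally;
# the function returns both the LM (Chi-squared) and F versions of the test.
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)
print(f"LM statistic = {lm_stat:.2f} (p = {lm_pval:.4f})")
print(f"F  statistic = {f_stat:.2f} (p = {f_pval:.4f})")
```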

The Box-Ljung Test
This is a test for general serial correlation up to order m. The test statistic is given below. Under the null hypothesis that the errors are not serially correlated, it is distributed as Chi-squared with m degrees of freedom. Tests of this type, in which the serial correlation can be of a very general form, are sometimes referred to as portmanteau tests.
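The statistic on the original slide did not transcribe; its standard form, with $T$ the sample size and $\hat{\rho}_k$ the $k$-th order sample autocorrelation of the residuals, is

$$Q = T(T+2)\sum_{k=1}^{m}\frac{\hat{\rho}_k^{\,2}}{T-k}$$

In statsmodels this test is available as acorr_ljungbox in statsmodels.stats.diagnostic, which can be applied directly to the residuals of an OLS fit.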

Implications of serial correlation
The implications of serial correlation depend on what causes it in the first place. Serial correlation can be the result of:
1. Error dynamics – in this case our model has the correct set of RHS variables but the error is serially correlated.
2. Omitted variables – the model specified has left out a RHS variable which is itself serially correlated.

Error dynamics
If serial correlation is due to error dynamics then OLS can be shown to be unbiased. However, OLS will be inefficient, since the Gauss-Markov assumptions no longer hold and it is possible to find an estimator with a lower variance. If the error exhibits positive autocorrelation and the x variable is also positively autocorrelated, then the standard error of the OLS estimator will be biased downwards. This last property means that we may over-estimate the statistical significance of the RHS variable(s).
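A small Monte Carlo sketch (assumed parameter values: both the error and the regressor are AR(1) with coefficient 0.8) illustrates the downward bias: the standard error reported by OLS is compared with the actual sampling spread of the slope estimate across replications.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T, reps, rho_u, rho_x, beta = 100, 2000, 0.8, 0.8, 0.5

def ar1(rho, size):
    # Generate an AR(1) series with standard normal innovations.
    e = rng.normal(size=size)
    out = np.zeros(size)
    for t in range(1, size):
        out[t] = rho * out[t - 1] + e[t]
    return out

slopes, reported_se = [], []
for _ in range(reps):
    x = ar1(rho_x, T)
    u = ar1(rho_u, T)
    y = 1.0 + beta * x + u
    res = sm.OLS(y, sm.add_constant(x)).fit()
    slopes.append(res.params[1])
    reported_se.append(res.bse[1])

print(f"mean OLS-reported SE of slope : {np.mean(reported_se):.3f}")
print(f"actual std. dev. of slope     : {np.std(slopes):.3f}")
# The reported SE is typically well below the true sampling variability,
# so t-statistics overstate the significance of the regressor.
```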

'Correcting' for Serial Correlation
Most regression packages offer mechanical procedures which 'correct' for the presence of serial correlation. For example, if we have the AR(1)-error model shown below, we can use it to estimate the slope coefficient and the autoregressive parameter simultaneously. Estimation in this case is by non-linear least squares.
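The equation on the original slide was an image; the standard AR(1)-error model it refers to is

$$y_t = \beta_0 + \beta_1 x_t + u_t, \qquad u_t = \rho u_{t-1} + \varepsilon_t$$

Substituting out the error term gives a single equation that is non-linear in the parameters,

$$y_t = \rho y_{t-1} + \beta_0(1-\rho) + \beta_1 x_t - \rho\beta_1 x_{t-1} + \varepsilon_t,$$

which non-linear least squares fits by estimating $\beta_0$, $\beta_1$ and $\rho$ jointly.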

'Quasi-Differencing' Methods
We have already seen that the model with an AR(1) error can be written in the form above. Suppose we have an estimate of the AR parameter; then we can rewrite the model in the quasi-differenced form shown below. Methods such as Cochrane-Orcutt and Hildreth-Lu use an iterative process to estimate the unknown parameters.
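The quasi-differenced equation on the original slide did not transcribe; with an estimate $\hat{\rho}$ of the AR parameter it takes the standard form

$$y_t - \hat{\rho} y_{t-1} = \beta_0(1-\hat{\rho}) + \beta_1(x_t - \hat{\rho} x_{t-1}) + \varepsilon_t$$

A minimal sketch of the iterative (Cochrane-Orcutt-style) procedure, using statsmodels' GLSAR class on simulated data with assumed parameter values:

```python
import numpy as np
import statsmodels.api as sm

# Simulate y = 2 + 0.5 x + u with an AR(1) error (rho = 0.7).
rng = np.random.default_rng(1)
T = 200
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 2.0 + 0.5 * x + u

# GLSAR alternates between estimating the AR parameter from the residuals
# and re-estimating the regression on the quasi-differenced data.
model = sm.GLSAR(y, sm.add_constant(x), rho=1)   # AR(1) error
res = model.iterative_fit(maxiter=10)
print("estimated AR(1) parameter:", model.rho)
print("intercept and slope      :", res.params)
```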

Example: regression of log consumption on log GDP
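The regression output shown on the original slide is not reproduced here. As an illustration only, using the US quarterly macro data bundled with statsmodels rather than the slide's original dataset, the same kind of regression and a check of its residuals might look like:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Quarterly US data shipped with statsmodels (real consumption and real GDP).
data = sm.datasets.macrodata.load_pandas().data
log_cons = np.log(data["realcons"])
log_gdp = np.log(data["realgdp"])

res = sm.OLS(log_cons, sm.add_constant(log_gdp)).fit()
print(res.params)
print("Durbin-Watson statistic:", durbin_watson(res.resid))
# A DW statistic far below 2 indicates strong positive serial correlation,
# which is typical for trending macroeconomic series in levels.
```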

Common Factor Restrictions
Some econometricians have argued against the use of mechanical corrections for autocorrelation on the grounds that they impose an untested restriction. Suppose we have the general dynamic model shown below; the correction for autocorrelation can be obtained by imposing the common factor restriction on its coefficients. It is argued that we should test this restriction before we apply mechanical corrections for the presence of autocorrelation.
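The equations on the original slide did not transcribe; the standard way of presenting the argument is as follows. Start from the unrestricted dynamic model

$$y_t = \alpha_0 + \alpha_1 x_t + \alpha_2 x_{t-1} + \alpha_3 y_{t-1} + \varepsilon_t$$

The AR(1)-error 'correction' is the special case obtained by imposing the common factor restriction

$$\alpha_2 = -\alpha_1 \alpha_3,$$

with $\alpha_3$ playing the role of $\rho$. Testing this (non-linear) restriction is the check the slide recommends carrying out before quasi-differencing.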

Omitted Variables
If we omit a variable from a regression equation that should be included, and that variable is itself serially correlated, the result will be a serially correlated error. In this case OLS will be biased and inefficient, and the standard errors of the coefficients will be unreliable. There is little we can do in this case other than go back and respecify the model from the start. Note that if this is the cause of the serial correlation, then inference based on 'corrections' for serial correlation becomes unreliable.