Endogenous Regressors and Instrumental Variables Estimation Adapted from Vera Tabakova, East Carolina University

 10.1 Linear Regression with Random x's
 10.2 Cases in which x and e are Correlated
 10.3 Estimators Based on the Method of Moments
 10.4 Specification Tests
 (Show Kennedy's graphs and explain "regression".)

The assumptions of the simple linear regression model are:
 SR1. y_i = β1 + β2 x_i + e_i
 SR2. E(e_i) = 0
 SR3. var(e_i) = σ²
 SR4. cov(e_i, e_j) = 0 for i ≠ j
 SR5. The variable x_i is not random, and it must take at least two different values.
 SR6. (optional) e_i ~ N(0, σ²)

The purpose of this chapter is to discuss regression models in which x_i is random and correlated with the error term e_i. We will:
 Discuss the conditions under which having a random x is not a problem, and how to test whether our data satisfy these conditions.
 Present cases in which the randomness of x causes the least squares estimator to fail.
 Provide estimators that have good properties even when x_i is random and correlated with the error e_i.

 A10.1 y_i = β1 + β2 x_i + e_i correctly describes the relationship between y_i and x_i in the population, where β1 and β2 are unknown (fixed) parameters and e_i is an unobservable random error term.
 A10.2 The data pairs (x_i, y_i) are obtained by random sampling. That is, the data pairs are collected from the same population, by a process in which each pair is independent of every other pair. Such data are said to be independent and identically distributed.

 A10.3 E(e_i | x_i) = 0. The expected value of the error term e_i, conditional on the value of x_i, is zero. This assumption implies that we have (i) omitted no important variables, (ii) used the correct functional form, and (iii) there exist no factors that cause the error term e_i to be correlated with x_i.
 If E(e_i | x_i) = 0, then we can show that it is also true that x_i and e_i are uncorrelated, and that cov(x_i, e_i) = 0.
 Conversely, if x_i and e_i are correlated, then cov(x_i, e_i) ≠ 0 and we can show that E(e_i | x_i) ≠ 0.

 A10.4 In the sample, x_i must take at least two different values.
 A10.5 The variance of the error term, conditional on x_i, is a constant σ².
 A10.6 The distribution of the error term, conditional on x_i, is normal.

 Under assumptions A10.1-A10.4 the least squares estimator is unbiased.
 Under assumptions A10.1-A10.5 the least squares estimator is the best linear unbiased estimator of the regression parameters, conditional on the x's, and the usual estimator of σ² is unbiased.

 Under assumptions A10.1-A10.6 the distributions of the least squares estimators, conditional upon the x's, are normal, and their variances are estimated in the usual way. Consequently, the usual interval estimation and hypothesis testing procedures are valid.

Figure 10.1 An illustration of consistency

Remark: Consistency is a “large sample” or “asymptotic” property. We have stated another large sample property of the least squares estimators in Chapter 2.6. We found that even when the random errors in a regression model are not normally distributed, the least squares estimators still have approximate normal distributions if the sample size N is large enough. How large must the sample size be for these large sample properties to be valid approximations of reality? In a simple regression 50 observations might be enough. In multiple regression models the number might be much higher, depending on the quality of the data.

 A10.3* E(e_i) = 0 and cov(x_i, e_i) = 0.

 Under assumption A10.3* the least squares estimators are consistent. That is, they converge to the true parameter values as N → ∞.
 Under assumptions A10.1, A10.2, A10.3*, A10.4 and A10.5, the least squares estimators have approximate normal distributions in large samples, whether the errors are normally distributed or not. Furthermore, our usual interval estimators and test statistics are valid if the sample is large.

 If assumption A10.3* is not true, and in particular if cov(x_i, e_i) ≠ 0 so that x_i and e_i are correlated, then the least squares estimators are inconsistent. They do not converge to the true parameter values even in very large samples. Furthermore, none of our usual hypothesis testing or interval estimation procedures are valid.
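To make the inconsistency concrete, here is a minimal simulation sketch in gretl, the software used throughout these slides. The data-generating process, coefficient values, and seed are illustrative assumptions, not part of the original slides: x and e share an unobserved common factor v, so cov(x, e) > 0 and the OLS slope settles above its true value of 1 even with N = 10000.

# simulation sketch: OLS is inconsistent when x and e are correlated
nulldata 10000
set seed 12345
series v = normal()            # unobserved common factor (assumption)
series e = 0.8*v + normal()    # error term, correlated with x through v
series x = 0.8*v + normal()    # regressor, correlated with e through v
series y = 1 + 1*x + e         # true intercept = 1, true slope = 1
ols y const x                  # slope estimate lands well above 1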

Figure 10.2 Plot of correlated x and e

True model: y_i = β1 + β2 x_i + e_i, with cov(x_i, e_i) ≠ 0. But it gets estimated as if x_i were uncorrelated with the error… so there is a substantial bias…

Figure 10.3 Plot of data, true and fitted regressions, in the case of a positive correlation between x and the error. Example: Wages = f(intelligence).

When an explanatory variable and the error term are correlated, the explanatory variable is said to be endogenous, a term meaning "determined within the system."


Omitted factors: experience, ability and motivation. Therefore, we expect that cov(x_i, e_i) > 0. We will focus on this case: an omitted variable W that affects both X and Y.

Other examples:
 Y = crime, X = marriage, W = "marriageability"
 Y = divorce, X = "shacking up", W = "good match"
 Y = crime, X = watching a lot of TV, W = "parental involvement"
 Y = sexual violence, X = watching porn, W = any unobserved factor that would affect both X and Y
 The list is endless! We will focus on this case: an omitted W affecting both X and Y.

There is a feedback relationship between P_i and Q_i. Because of this feedback, which results because price and quantity are jointly, or simultaneously, determined, we can show that cov(P_i, e_i) ≠ 0. The resulting bias (and inconsistency) is called the simultaneous equations bias.

When the errors are serially correlated, the least squares estimator applied to the lagged dependent variable model will be biased and inconsistent.

When all the usual assumptions of the linear model hold, the method of moments leads us to the least squares estimator. If x is random and correlated with the error term, the method of moments leads us to an alternative estimator, called instrumental variables estimation, or two-stage least squares estimation, that will work in large samples.

Suppose that there is another variable, z, such that:
 z does not have a direct effect on y, and thus it does not belong on the right-hand side of the model as an explanatory variable ONCE X IS IN THE MODEL!
 z_i should not be simultaneously affected by y either, of course.
 z_i is not correlated with the regression error term e_i. Variables with this property are said to be exogenous.
 z is strongly [or at least not weakly] correlated with x, the endogenous explanatory variable.
A variable z with these properties is called an instrumental variable.

 An instrument is a variable z correlated with x but not with the error e.
 In addition, the instrument does not directly affect y and thus does not belong in the actual model as a separate regressor (of course it should affect it through the instrumented regressor x).
 It is common to have more than one instrument for x (just not good ones!).
 These instruments, z1, z2, …, z_s, must be correlated with x, but not with e.
 Consistent estimation is obtained through the instrumental variables or two-stage least squares (2SLS) estimator, rather than the usual OLS estimator.

 Using Z, an "instrumental variable" for X, is one solution to the problem of omitted variables bias.
 Z, to be a valid instrument for X, must be:
 – Relevant = correlated with X
 – Exogenous = not correlated with Y except through its correlation with X

Based on the method of moments, the IV estimator uses the moment conditions E(e_i) = 0 and E(z_i e_i) = 0, whose sample analogues are:
(1/N) Σ (y_i − b1 − b2 x_i) = 0
(1/N) Σ z_i (y_i − b1 − b2 x_i) = 0

Solving the previous system, we obtain a new estimator that cleanses X of its endogeneity and exploits only the component of the variation in X that is not correlated with e:
b2 = Σ (z_i − z̄)(y_i − ȳ) / Σ (z_i − z̄)(x_i − x̄),  b1 = ȳ − b2 x̄
We do that by ensuring that we only use the predicted value of X from its regression on Z in the main regression.
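As a quick sanity check of these formulas, here is a minimal gretl sketch that computes the simple IV estimates directly from sample moments. It assumes the mroz.gdt data used later in these slides, with l_wage as y, educ as x, and mothereduc as the single instrument; that one-instrument setup is an illustrative simplification of the wage equation estimated below.

# sketch: simple IV estimates from sample moments (one instrument)
open mroz.gdt
smpl wage>0 --restrict
logs wage
scalar b2_iv = cov(mothereduc, l_wage) / cov(mothereduc, educ)
scalar b1_iv = mean(l_wage) - b2_iv * mean(educ)
printf "simple IV estimates: b1 = %g, b2 = %g\n", b1_iv, b2_iv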

These new estimators have the following properties:
 They are consistent, if E(z_i e_i) = 0 and z is correlated with x.
 In large samples the instrumental variable estimators have approximate normal distributions. In the simple regression model,
b2 ~ N(β2, σ² / (r²_zx Σ (x_i − x̄)²)),
where r_zx is the sample correlation between z and x.

 The error variance is estimated using the estimator
σ̂²_IV = Σ (y_i − b1 − b2 x_i)² / (N − 2).
 Because r²_zx appears in the denominator of the IV variance, the stronger the correlation between the instrument and X, the better!

Instrumental variables estimation is less efficient than OLS (because you must throw away some information on the variation of X), so it leads to wider confidence intervals and less precise inference. The bottom line is that when instruments are weak, instrumental variables estimation is not reliable: you throw away too much information when trying to avoid the endogeneity bias.

 Not all of the available variation in X is used.
 Only that component of X that is "explained" by Z is used to explain Y.
[Venn diagram: X = endogenous variable, Y = response variable, Z = instrumental variable]

[Two Venn diagrams comparing instrument strength:]
Realistic scenario: Very little of X is explained by Z, and/or what is explained does not overlap much with Y.
Best-case scenario: A lot of X is explained by Z, and most of the overlap between X and Y is accounted for.

 The IV estimator is BIASED in finite samples: E(b_IV) ≠ β. But it is consistent: plim(b_IV) = β as N → ∞. So IV studies must often have very large samples.
 But with endogeneity, E(b_LS) ≠ β and plim(b_LS) ≠ β anyway…
 Asymptotic behavior of IV: plim(b_IV) = β + cov(Z, e) / cov(Z, X).
 If Z is truly exogenous, then cov(Z, e) = 0.
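Extending the earlier simulation sketch (again, all numbers are illustrative assumptions), we can watch these asymptotics at work: with a valid instrument zi, cov(Z, e) = 0, so tsls recovers a slope close to the true value of 1 while OLS stays biased upward.

# simulation sketch: IV removes the endogeneity bias
nulldata 10000
set seed 54321
series v = normal()                    # unobserved common factor
series zi = normal()                   # instrument: drives x, unrelated to e
series e = 0.8*v + normal()
series x = 0.8*v + 0.7*zi + normal()
series y = 1 + 1*x + e                 # true slope = 1
ols y const x                          # OLS slope noticeably above 1
tsls y const x ; const zi              # IV slope close to 1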

 Three different models to be familiar with:
 First stage: EDUC = α0 + α1 Z + ω (the "reduced form equation")
 Structural model: WAGES = β0 + β1 EDUC + ε
 Also: WAGES = δ0 + δ1 Z + ξ
 An interesting equality: δ1 = α1 × β1, so β1 = δ1 / α1
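A minimal gretl sketch of this ratio, again assuming mroz.gdt with mothereduc standing in for Z (an illustrative one-instrument choice): the two reduced-form regressions deliver estimates of α1 and δ1, and their ratio is the indirect least squares estimate of β1.

# sketch: indirect least squares, beta1 = delta1 / alpha1
open mroz.gdt
smpl wage>0 --restrict
logs wage
ols educ const mothereduc       # first stage: alpha1 is the slope on mothereduc
scalar alpha1 = $coeff(mothereduc)
ols l_wage const mothereduc     # reduced form for wages: delta1
scalar delta1 = $coeff(mothereduc)
printf "indirect LS estimate of beta1: %g\n", delta1 / alpha1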

Check it out: the IV estimate is, as expected, much lower than the one from OLS!!! We hope for a high t-ratio on the instrument in the first-stage regression.

open mroz.gdt
smpl wage>0 --restrict
logs wage
square exper
list x = const educ exper sq_exper
list z = const exper sq_exper mothereduc
# least squares and IV estimation of wage eq
ols l_wage x
tsls l_wage x ; z
# tsls -- manually
ols educ z
series educ_hat = $yhat
ols l_wage const educ_hat exper sq_exper

But note that the manual second stage gives you the wrong standard errors, so it is not recommended. Include in z all G exogenous variables plus the available instruments: the exogenous variables instrument themselves!

In econometrics, two-stage least squares (TSLS or 2SLS) and instrumental variables (IV) estimation are often used interchangeably. The "two-stage" terminology comes from the time when the easiest way to estimate the model was to actually run two separate least squares regressions. With better software, the computation is done in a single step to ensure the other model statistics are computed correctly. Unfortunately gretl outputs an R² for the IV regression; you should ignore it! It is based on the correlation of y and ŷ, so it does not have the usual interpretation…

A 2-step process:
 Regress x on a constant term, z and all other exogenous variables G, and obtain the predicted values x̂.
 Use x̂ as an instrumental variable for x.

Two-stage least squares (2SLS) estimator:
 Stage 1 is the regression of x on a constant term, z and all other exogenous variables G, to obtain the predicted values x̂. This first stage is called the reduced form model estimation.
 Stage 2 is ordinary least squares estimation of the simple linear regression y_i = β1 + β2 x̂_i + error.

If we regress education on all the exogenous variables and the TWO instruments, the coefficients on the instruments look promising. In fact, the F test says:
Null hypothesis: the regression parameters are zero for the variables mothereduc, fathereduc
Test statistic: F(2, 423) = …, with a p-value of the order of 1e-22
Rule of thumb for strong instruments: F > 10

With the additional instrument, we achieve a significant result in this case

The case of more than one endogenous regressor is a bit more complex, but the idea is that you need at least as many instruments as you have endogenous variables. You cannot use the F test we just saw to test for the weakness of the instruments, but see the Appendix for a test based on the notion of partial correlations. For example, run this experiment with the mroz.gdt data:
# partial correlations -- the FWL result
ols educ const exper sq_exper
series e1 = $uhat
ols mothereduc const exper sq_exper
series e2 = $uhat
ols e1 e2
corr e1 e2

Partial correlations: it is the independent correlation between the instrument and the endogenous regressor that is important. We can partial out the correlation in X that is due to the exogenous regressors. Whatever common variation remains will measure the independent correlation between X and the instrument Z: we ask ourselves how much of the independent variation of educ is captured by the independent variation in mothereduc (see the experiment on the previous slide). Then compare your results with:
ols educ const mothereduc exper sq_exper

When testing the null hypothesis H0: βk = c, use of the test statistic t = (bk − c) / se(bk) is valid in large samples. It is common, but not universal, practice to use critical values, and p-values, based on the Student-t distribution rather than the more strictly appropriate N(0,1) distribution. The reason is that tests based on the t-distribution tend to work better in samples of data that are not large.

When testing a joint hypothesis, such as H0: β2 = c2, β3 = c3, the test may be based on the chi-square distribution with the number of degrees of freedom equal to the number of hypotheses (J) being tested. The test itself may be called a "Wald" test, or a likelihood ratio (LR) test, or a Lagrange multiplier (LM) test. These testing procedures are all asymptotically equivalent. Another complication: you might need to use tests that are robust to heteroskedasticity and/or autocorrelation…
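For instance, a sketch of how robust IV inference and a joint test might look in gretl, reusing the lists x and z defined earlier. The --robust option and the restrict block are standard gretl usage, but the particular restrictions tested here (zero coefficients on the experience terms) are only an illustration, not part of the original slides.

# sketch: robust 2SLS and a joint Wald-type test
tsls l_wage x ; z --robust     # heteroskedasticity-robust standard errors
restrict
  b[exper] = 0
  b[sq_exper] = 0
end restrict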

Unfortunately, R² can be negative when based on IV estimates. Therefore the use of measures like R² outside the context of least squares estimation should be avoided (even if gretl produces one!).

 Can we test whether x is correlated with the error term? This might give us a guide as to when to use least squares and when to use IV estimators. TEST OF EXOGENEITY: (DURBIN-WU-)HAUSMAN TEST
 Can we test whether our instrument is sufficiently strong to avoid the problems associated with "weak" instruments? TESTS FOR WEAK INSTRUMENTS
 Can we test if our instrument is valid, and uncorrelated with the regression error, as required? SOMETIMES ONLY, WITH A SARGAN TEST

 If the null hypothesis (exogeneity) is true, both OLS and the IV estimator are consistent; in that case, use the more efficient estimator, OLS.
 If the null hypothesis is false, OLS is not consistent while the IV estimator is, so use the IV estimator.

Let z1 and z2 be instrumental variables for x.
1. Estimate the reduced form model x = γ1 + θ1 z1 + θ2 z2 + v by least squares, and obtain the residuals v̂. If more than one explanatory variable is being tested for endogeneity, repeat this estimation for each one, using all available instrumental variables in each regression.

2. Include the residuals computed in step 1 as an extra explanatory variable in the original regression,
y = β1 + β2 x + γ v̂ + e.
Estimate this "artificial regression" by least squares, and employ the usual t-test for the hypothesis of significance H0: γ = 0 (no endogeneity).

3. If more than one variable is being tested for endogeneity, the test will be an F-test of joint significance of the coefficients on the included residuals.
 Note: This is a test for the exogeneity of the regressors x_i and not for the exogeneity of the instruments z_i. If the instruments are not valid, the Hausman test is not valid either.

# GRETL Hausman test (manually)
ols educ z --quiet
series ehat = $uhat
ols l_wage x ehat

 Note: whenever you use 2SLS, gretl automatically produces the test statistic for the Hausman test.
 There are several different ways of computing it, so don't worry if it differs from the one you compute manually using the above script.

If we have L > 1 instruments available, then the reduced form equation is
x = γ1 + θ1 z1 + θ2 z2 + … + θL zL + v.

Then test with an F test whether the instruments help to determine the value of the endogenous variable, i.e., test H0: θ1 = θ2 = … = θL = 0. Rule of thumb: F > 10 for one endogenous regressor.

 As we saw before:
open mroz.gdt
smpl wage>0 --restrict
logs wage
square exper
list x = const educ exper sq_exper
list z2 = const exper sq_exper mothereduc fathereduc
ols educ z2
omit mothereduc fathereduc

We want to check that the instrument is itself exogenous… but you can only do it for surplus instruments, if you have them (overidentified equation):
1. Compute the IV estimates using all available instruments, including the G variables x1 = 1, x2, …, xG that are presumed to be exogenous, and the L instruments z1, …, zL.
2. Obtain the residuals ê from this IV estimation.

3. Regress ê on all the available instruments described in step 1.
4. Compute NR² from this regression, where N is the sample size and R² is the usual goodness-of-fit measure.
5. If all of the surplus moment conditions are valid, then NR² has a χ² distribution with degrees of freedom equal to the number of surplus instruments. If the value of the test statistic exceeds the 100(1−α)-percentile from the χ² distribution, then we conclude that at least one of the surplus moment conditions is not valid.

# Sargan test of overidentification
tsls l_wage x ; z2
series uhat2 = $uhat
ols uhat2 z2
scalar test = $trsq
pvalue X 1 test

You should learn about the Cragg-Donald test if you end up having more than one endogenous variable.

Keywords:  asymptotic properties  conditional expectation  endogenous variables  errors-in-variables  exogenous variables  finite sample properties  Hausman test  instrumental variable  instrumental variable estimator  just-identified equations  large sample properties  overidentified equations  population moments  random sampling  reduced form equation  sample moments  simultaneous equations bias  test of surplus moment conditions  two-stage least squares estimation  weak instruments