Specifying an Econometric Equation and Specification Error


Specifying an Econometric Equation and Specification Error

Before any equation can be estimated, it must be completely specified. Specifying an econometric equation consists of three parts, namely choosing the correct:
– independent variables
– functional form
– form of the stochastic error term
Again, this is part of the first classical assumption from Chapter 4. A specification error results when one of these choices is made incorrectly. This chapter deals with the first of these choices (the other two are discussed in subsequent chapters).

© 2011 Pearson Addison-Wesley. All rights reserved.

Omitted Variables

Two reasons why an important explanatory variable might have been left out:
– we forgot
– it is not available in the dataset we are examining
Either way, this may lead to omitted variable bias (or, more generally, specification bias). The reason is that when a variable is not included, it cannot be held constant. Omitting a relevant variable usually is evidence that the entire equation is suspect, because of the likely bias in the coefficients.

The Consequences of an Omitted Variable

Suppose the true regression model is:

Yi = β0 + β1X1i + β2X2i + εi (6.1)

where εi is a classical error term. If X2 is omitted, the equation becomes instead:

Yi = β0* + β1*X1i + εi* (6.2)

where:

εi* = εi + β2X2i (6.3)

Hence, the explanatory variables in the estimated regression (6.2) are not independent of the error term (unless the omitted variable is uncorrelated with all the included variables, something that is very unlikely). But this violates Classical Assumption III!

The Consequences of an Omitted Variable (cont.)

What happens if we estimate Equation 6.2 when Equation 6.1 is the truth? We get bias! What this means is that:

E(β̂1*) ≠ β1 (6.4)

The amount of bias is the impact of the omitted variable on the dependent variable times a function of the correlation between the included and the omitted variable. Or, more formally:

Bias = E(β̂1*) − β1 = β2 · f(rX1,X2) (6.7)

So, the bias exists unless:
– the true coefficient of the omitted variable (β2) equals zero, or
– the included and omitted variables are uncorrelated (rX1,X2 = 0)
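The bias described above can be seen directly in a simulation. The sketch below (a minimal numpy example; the data-generating process and coefficient values are invented for illustration, not taken from the chapter) fits the "long" regression with both X1 and X2 and the "short" regression that omits X2, and checks that the short-regression coefficient shifts by the omitted coefficient times the auxiliary slope of X2 on X1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: Y = 1 + 2*X1 + 3*X2 + eps,
# with X2 correlated with X1, so omitting X2 biases the X1 coefficient.
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # corr(X1, X2) > 0
y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)

def ols(y, X):
    """OLS coefficients for y on X (X already includes a constant column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_full = np.column_stack([np.ones(n), x1, x2])
X_short = np.column_stack([np.ones(n), x1])

b_full = ols(y, X_full)    # coefficient on X1 is near the true value 2
b_short = ols(y, X_short)  # coefficient on X1 is near 2 + 3*0.5 = 3.5: biased

# The in-sample omitted-variable algebra is exact:
# short coef = long coef + (coef on omitted) * (slope of omitted on included)
alpha1 = ols(x2, X_short)[1]
print(b_full[1], b_short[1], b_full[1] + b_full[2] * alpha1)
```

Note that the bias disappears in this setup only if the true coefficient on X2 is zero or X1 and X2 are uncorrelated, exactly as the two conditions on the slide state.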

Correcting for an Omitted Variable

In theory, the solution to a problem of specification bias seems easy: add the omitted variable to the equation! Unfortunately, that's easier said than done, for a couple of reasons:
– Omitted variable bias is hard to detect: the amount of bias introduced can be small and not immediately detectable
– Even if it has been decided that a given equation suffers from omitted variable bias, how do you decide exactly which variable to include?
Note here that dropping a variable is not a viable strategy for curing omitted variable bias: if anything, you will just generate even more omitted variable bias in the remaining coefficients!

Correcting for an Omitted Variable (cont.)

What if:
– You have an unexpected result, which leads you to believe that you have an omitted variable
– You have two or more theoretically sound explanatory variables as potential "candidates" for inclusion in the equation
How do you choose between these variables? One possibility is expected bias analysis:
– Expected bias: the likely bias that omitting a particular variable would have caused in the estimated coefficient of one of the included variables

Correcting for an Omitted Variable (cont.)

Expected bias can be estimated with Equation 6.7:

Expected bias = β2 · f(rX1,X2) (6.7)

When do we have a viable candidate? When the sign of the expected bias is the same as the sign of the unexpected result. Similarly, when these signs differ, the variable is extremely unlikely to have caused the unexpected result.

Irrelevant Variables

This refers to the case of including a variable in an equation when it does not belong there. This is the opposite of the omitted variables case, and so the impact can be illustrated using the same model. Assume that the true regression specification is:

Yi = β0 + β1X1i + εi (6.10)

But the researcher for some reason includes an extra variable:

Yi = β0 + β1X1i + β2X2i + εi** (6.11)

The misspecified equation's error term then becomes:

εi** = εi − β2X2i (6.12)

Irrelevant Variables (cont.)

So, the inclusion of an irrelevant variable will not cause bias (since the true coefficient of the irrelevant variable is zero, the second term drops out of Equation 6.12). However, the inclusion of an irrelevant variable will:
– Increase the variance of the estimated coefficients (unless r12 = 0), and this increased variance will tend to decrease the absolute magnitude of their t-scores
– Decrease the adjusted R² (but not the R²)
Table 6.1 summarizes the consequences of the omitted variable and included irrelevant variable cases.
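Both claims (no bias, but higher variance) can be checked by Monte Carlo. This numpy sketch uses an invented data-generating process in which X3 is irrelevant but correlated with X1; across many replications the overspecified model's X1 estimates stay centered on the truth but spread out more:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 2000

b_correct, b_over = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x3 = 0.7 * x1 + rng.normal(size=n)   # irrelevant, but correlated with X1
    y = 1 + 2 * x1 + rng.normal(size=n)  # true model: X3 plays no role
    X1 = np.column_stack([np.ones(n), x1])
    X13 = np.column_stack([np.ones(n), x1, x3])
    b_correct.append(np.linalg.lstsq(X1, y, rcond=None)[0][1])
    b_over.append(np.linalg.lstsq(X13, y, rcond=None)[0][1])

# Both estimators center on the true beta1 = 2: no bias either way...
print(np.mean(b_correct), np.mean(b_over))
# ...but the overspecified model's estimates are noisier (higher variance),
# which is what pushes the t-scores toward zero.
print(np.std(b_correct), np.std(b_over))
```

If `x3` were generated independently of `x1` (r12 = 0), the two standard deviations would be essentially equal, matching the caveat on the slide.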

Table 6.1 Effect of Omitted Variables and Irrelevant Variables on the Coefficient Estimates

Four Important Specification Criteria

We can summarize the previous discussion into four criteria to help decide whether a given variable belongs in the equation:
1. Theory: Is the variable's place in the equation unambiguous and theoretically sound?
2. t-Test: Is the variable's estimated coefficient significant in the expected direction?
3. Adjusted R²: Does the overall fit of the equation (adjusted for degrees of freedom) improve when the variable is added to the equation?
4. Bias: Do other variables' coefficients change significantly when the variable is added to the equation?
If all these conditions hold, the variable belongs in the equation. If none of them hold, it does not belong. The tricky part is the intermediate cases: use sound judgment!
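Criteria 2 through 4 are mechanical enough to compute. The sketch below is a minimal numpy implementation (the helper names `fit` and `report_addition` are our own, not from the text): it refits the equation with and without a candidate variable and reports the new variable's t-statistic, the change in adjusted R², and how much the other coefficients move. Criterion 1, theory, cannot be automated.

```python
import numpy as np

def fit(y, X):
    """OLS with t-statistics and adjusted R-squared (X includes a constant)."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (n - k)                       # error variance estimate
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))  # coefficient std errors
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1 - (resid @ resid) / tss
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - k)
    return b, b / se, r2_adj

def report_addition(y, X_base, x_new):
    """Evaluate criteria 2-4 for adding x_new to the equation y on X_base."""
    b0, _, r2a0 = fit(y, X_base)
    X_big = np.column_stack([X_base, x_new])
    b1, t1, r2a1 = fit(y, X_big)
    return {
        "t_new": t1[-1],                            # criterion 2: t-test
        "adj_R2_change": r2a1 - r2a0,               # criterion 3: adjusted R2
        "coef_shift": b1[: X_base.shape[1]] - b0,   # criterion 4: bias check
    }
```

A large coefficient shift together with a significant t-statistic is exactly the omitted-variable pattern from earlier in the chapter: the candidate was relevant and correlated with the included variables.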

Specification Searches

Almost any result can be obtained from a given dataset by simply specifying different regressions until estimates with the desired properties are obtained. Hence, the integrity of all empirical work is open to question. To counter this, the following three best practices in specification searches are suggested:
1. Rely on theory rather than statistical fit as much as possible when choosing variables, functional forms, and the like
2. Minimize the number of equations estimated (except for sensitivity analysis, to be discussed later in this section)
3. Reveal, in a footnote or appendix, all alternative specifications estimated

Sequential Specification Searches

The sequential specification search technique allows a researcher to:
– Estimate an undisclosed number of regressions
– Subsequently present a final choice (which is based upon an unspecified set of expectations about the signs and significance of the coefficients) as if it were the only specification
Such a method misstates the statistical validity of the regression results for two reasons:
1. The statistical significance of the results is overstated because the estimations of the previous regressions are ignored
2. The expectations used by the researcher to choose between various regression results rarely, if ever, are disclosed

Bias Caused by Relying on the t-Test to Choose Variables

Dropping variables solely on the basis of low t-statistics may lead to two different types of errors:
1. An irrelevant explanatory variable may sometimes be included in the equation (i.e., when it does not belong there)
2. A relevant explanatory variable may sometimes be dropped from the equation (i.e., when it does belong)
In the first case there is no bias, but in the second case there is. Hence, the estimated coefficients will be biased every time an excluded variable belongs in the equation, and that excluded variable will be left out every time its estimated coefficient is not statistically significantly different from zero. So, we will have systematic bias in our equation!

Sensitivity Analysis

Contrary to the advice of estimating as few equations as possible (and basing them on theory rather than fit!), we sometimes see journal articles listing results from five or more specifications. What's going on here? In almost every case, these authors have employed a technique called sensitivity analysis. This essentially consists of purposely running a number of alternative specifications to determine whether particular results are robust (not statistical flukes) to a change in specification. Why is this useful? Because the true specification isn't known!
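A sensitivity analysis can be as simple as re-estimating the coefficient of interest over every subset of a small set of control variables. This sketch invents a data-generating process (the names `z1`, `z2` and the coefficient values are illustrative assumptions): the point estimate on X1 moves when a correlated control is omitted, but its sign is robust across all four specifications.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
z1 = 0.4 * x1 + rng.normal(size=n)   # relevant control, correlated with x1
z2 = rng.normal(size=n)              # relevant control, uncorrelated with x1
y = 1 + 2 * x1 + 1.5 * z1 + 1.0 * z2 + rng.normal(size=n)

controls = {"z1": z1, "z2": z2}
estimates = {}
for r in range(len(controls) + 1):
    for subset in combinations(controls, r):
        X = np.column_stack([np.ones(n), x1] + [controls[c] for c in subset])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates[subset or ("none",)] = b[1]   # coefficient on x1

# The estimate is biased upward when z1 is omitted (by about 1.5 * 0.4),
# but it stays positive in every specification: the sign is robust.
print(estimates)
```

Reporting all four numbers, rather than only the most favorable one, is what separates sensitivity analysis from the sequential specification searches criticized above.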

Data Mining

Data mining involves exploring a dataset to try to uncover empirical regularities that can inform economic theory. That is, the role of data mining is the opposite of that of traditional econometrics, which instead tests economic theory on a dataset. Be careful, however! A hypothesis developed using data mining techniques must be tested on a different dataset (or in a different context) than the one used to develop the hypothesis. Not doing so would be highly unethical: after all, the researcher already knows ahead of time what the results will be!
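The "test on a different dataset" rule can be built into the workflow from the start. This numpy sketch (the 20-predictor setup is invented for illustration) splits the sample before any mining: the exploration half is searched for the predictor most correlated with y, and the resulting hypothesis is then confirmed on the held-out half, which the search never touched.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
# Hypothetical exploration setting: 20 candidate predictors, only one relevant.
X = rng.normal(size=(n, 20))
y = 1.0 * X[:, 7] + rng.normal(size=n)

# Split BEFORE mining: hypotheses found in the first half must be
# confirmed on the held-out second half.
train, test = slice(0, n // 2), slice(n // 2, n)

# "Mine" the exploration half: pick the predictor most correlated with y.
corrs = [abs(np.corrcoef(X[train, j], y[train])[0, 1]) for j in range(20)]
best = int(np.argmax(corrs))

# Confirm the mined hypothesis on fresh data.
holdout_corr = np.corrcoef(X[test, best], y[test])[0, 1]
print(best, holdout_corr)
```

If y had no true relationship to any column, the mined predictor would typically show a noticeable in-sample correlation (the maximum of 20 noisy estimates) but a correlation near zero on the holdout half, which is precisely why testing on the mining sample itself is uninformative.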

Key Terms from Chapter 6

– Omitted variable
– Irrelevant variable
– Specification bias
– Sequential specification search
– Specification error
– The four specification criteria
– Expected bias
– Sensitivity analysis