Multiple Regression Analysis: Specification And Data Issues


Multiple Regression Analysis: Specification And Data Issues Chapter 9

I. Introduction Failure of the zero conditional mean assumption: correlation between the error, u, and one or more explanatory variables. Why variables can be endogenous, and possible remedies. Functional form misspecification: if an omitted variable is a function of an explanatory variable already in the model, the model suffers from functional form misspecification. Using proxy variables to address omitted variable bias. Measurement error: not all variables are measured accurately.

II. Functional Form A regression model suffers from misspecification when it doesn't properly account for the relationship between the dependent and explanatory variables. wage = b0 + b1educ + b2exper + u. Omitting exper2 or exper*educ when they belong in the model can lead to biased estimates of all coefficients. Likewise, using wage rather than log(wage) when the latter satisfies the Gauss-Markov assumptions, i.e., using the wrong form of a variable to relate the LHS and RHS, can bias all coefficient estimates.

II. Functional Form We can change the linear relationship by: using logs on the RHS, LHS, or both; using quadratic forms of the x's; using interactions of the x's. How do we know whether we've found the right functional form for our model? Use an F-test for joint exclusion restrictions to detect misspecification.
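The F-test for excluding higher-order terms can be sketched as follows. This simulation is purely illustrative (the wage equation, coefficients, and sample size are invented, not from the text's data): the true model contains a squared experience term, so the F statistic for excluding it should be large.

```python
# Illustrative F-test for a quadratic term; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 500
educ = rng.uniform(8, 18, n)
exper = rng.uniform(0, 30, n)
# True model includes exper^2, so the purely linear model is misspecified.
wage = 1.0 + 0.5 * educ + 0.2 * exper - 0.004 * exper**2 + rng.normal(0, 1, n)

def ssr(y, X):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
X_r = np.column_stack([ones, educ, exper])            # restricted (no square)
X_u = np.column_stack([ones, educ, exper, exper**2])  # unrestricted

q = 1                   # number of exclusion restrictions
df = n - X_u.shape[1]   # residual degrees of freedom, unrestricted model
F = ((ssr(wage, X_r) - ssr(wage, X_u)) / q) / (ssr(wage, X_u) / df)
print(round(F, 2))      # an F far above the critical value rejects the linear form
```

Comparing F to the F(q, df) critical value (about 3.84 at the 5% level for q = 1 and large df) flags the omitted quadratic.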

II. Functional Form Ex: Model of Crime. Quadratics or not? Each of the squared terms is individually and jointly significant (F = 31.37, df = 3 and 2,713). Adding squares makes interpretation more difficult: before, the intuitive (–) sign on pcnv suggested the conviction rate has a deterrent effect on crime. Now the level is positive and the quadratic is negative: at low levels the conviction rate has no deterrent effect; it is only effective at high levels. Note: Don't square qemp86, because it's a discrete variable taking on only a few values.

II. Functional Form How do you know what to try? Use economic theory to guide you, and think about the interpretation. Does it make more sense for x to affect y in percentage terms (use logs) or in absolute terms? Does it make more sense for the partial effect of x1 to vary with x1 (quadratic), to vary with x2 (interactions), or to be fixed?

II. Ramsey’s RESET We know how to test joint exclusion restrictions for higher order terms or interactions, but it can be tedious to add and test extra terms, and we may find a squared term matters when really using logs would be even better. A general test of functional form is Ramsey’s regression specification error test (RESET). Intuition: if the specification is okay, no nonlinear functions of the independent variables should be significant when added to the original equation. Cost: degrees of freedom.

II. Ramsey’s RESET RESET relies on a trick similar to the special form of the White test: instead of adding functions of the x’s directly, we add and test functions of ŷ. y = b0 + b1x1 + … + bkxk + d1ŷ2 + d2ŷ3 + error. Don’t use the above for parameter estimates, just to test the inclusion of the extra terms. H0: d1 = 0, d2 = 0, using F ~ F2,n-k-3. A significant F-stat suggests there’s some sort of functional form problem.
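The RESET procedure can be sketched in a few lines. This is a simulation with invented numbers: the true model is in log(x), the fitted model is linear in x, and the F-test on the added ŷ² and ŷ³ terms should detect the problem.

```python
# RESET sketch on fabricated data: fit the (wrong) linear model, then add
# yhat^2 and yhat^3 and F-test their joint significance.
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(1, 10, n)
y = 1.0 + 2.0 * np.log(x) + rng.normal(0, 0.2, n)  # true model uses log(x)

def ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

ones = np.ones(n)
X = np.column_stack([ones, x])          # misspecified: linear in x
beta, resid = ols(y, X)
yhat = X @ beta
ssr_r = resid @ resid

X_aug = np.column_stack([ones, x, yhat**2, yhat**3])  # add powers of yhat
_, resid_u = ols(y, X_aug)
ssr_u = resid_u @ resid_u

q, df = 2, n - X_aug.shape[1]
F = ((ssr_r - ssr_u) / q) / (ssr_u / df)
print(round(F, 2))  # a significant F ~ F(2, n-k-3) flags misspecification
```

As the slides note, the added ŷ terms are only there to be tested; their coefficients have no structural interpretation.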

II. Ramsey’s RESET Ex: Housing Price Equation (n=88). price = b0 + b1lotsize + b2sqrft + b3bdrms + u. RESET statistic (up to yhat3) = 4.67 ~ F2,82, p-value = .012: evidence of functional form misspecification. lprice = b0 + b1llotsize + b2lsqrft + b3bdrms + u. RESET statistic (up to yhat3) = 2.56 ~ F2,82, p-value = .084: no evidence of functional form misspecification. On the basis of RESET, the log equation is preferred. But just because the log equation “passed” RESET, does that mean it’s the right specification? We should still use economic theory to determine whether the functional form makes sense.

III. Proxy Variables Previously, we assumed we could resolve functional form misspecification because we had the relevant data. What if the model is misspecified because no data are available on an important x variable? log(wage) = b0 + b1educ + b2exper + b3abil + u. We would like to hold ability fixed, but we have no measure of it, and its exclusion causes the parameter estimates to be biased. Potential solution: obtain a proxy variable for the omitted variable.

III. Proxy Variables A proxy variable is something that is related to the unobserved variable that we’d like to control for in our analysis but cannot observe. Ex: IQ as a proxy for ability. x3* = d0 + d3x3 + v3, where * denotes unobserved. v3 is an error reflecting that x3 and x3* are not perfectly related, and d0 allows variables on different scales to be compared (i.e., the IQ scale may not be how ability is measured). We then just substitute x3 for x3* in y = b0 + b1x1 + b2x2 + b3x3* + u.

III. Proxy Variables What do we need for this solution to give us unbiased estimates of b1 and b2? We need assumptions on u and v3. 1.) u is uncorrelated with x1, x2, x3* (standard). This also requires u to be uncorrelated with x3: once x1, x2, x3* are included, x3 is irrelevant (i.e., x3 doesn’t directly affect y other than through x3*). 2.) v3 is uncorrelated with x1, x2, x3. For v3 to be uncorrelated with x1 and x2, x3 must be a good proxy for x3*. Formally, this means E(x3* | x1, x2, x3) = E(x3* | x3) = d0 + d3x3: once x3 is controlled for, the expected value of x3* does not depend on x1, x2.

III. Proxy Variables So we are really assuming E(abil|educ,exper,IQ) = E(abil|IQ) = d0 + d3IQ, which implies ability only changes with IQ, and not with educ and exper (once IQ is included). Substituting, we are really running: y = (b0 + b3d0) + b1x1 + b2x2 + b3d3x3 + (u + b3v3), with a redefined intercept, error term, and x3 coefficient. We can rewrite this as y = a0 + a1x1 + a2x2 + a3x3 + e and get unbiased estimates of a0, b1 = a1, b2 = a2, and a3. We won’t recover the original b0 or b3.
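The proxy logic can be checked by simulation. All coefficients and variances below are invented to satisfy the two proxy assumptions (abil = d0 + d3·IQ + v3 with v3 unrelated to educ): omitting ability biases the educ coefficient upward, while including IQ as a proxy recovers it.

```python
# Proxy-variable simulation with fabricated numbers: true return to
# education is 0.06; ability enters with coefficient 0.10.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
IQ = rng.normal(100, 15, n)
abil = -6.0 + (1 / 15) * IQ + rng.normal(0, 0.5, n)  # abil = d0 + d3*IQ + v3
educ = 5.0 + 0.05 * IQ + rng.normal(0, 1, n)         # educ tied to IQ, not to v3
lwage = 1.0 + 0.06 * educ + 0.10 * abil + rng.normal(0, 0.3, n)

def coef_on_educ(X):
    beta, *_ = np.linalg.lstsq(X, lwage, rcond=None)
    return beta[1]  # educ is the second column

ones = np.ones(n)
b_short = coef_on_educ(np.column_stack([ones, educ]))      # ability omitted
b_proxy = coef_on_educ(np.column_stack([ones, educ, IQ]))  # IQ as proxy

print(round(b_short, 3), round(b_proxy, 3))  # b_proxy should sit near 0.06
```

The proxy regression's error is u + b3·v3, which is uncorrelated with educ and IQ by construction here, so the educ coefficient is estimated without bias even though ability itself is never observed.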

III. Proxy Variables IQ as a proxy for ability: we want to estimate the return to education. It is 6.5% when we run the regression without the ability proxy, and 5.4% when we include IQ. Interacting educ*IQ allows for the possibility that returns to education differ across ability levels, but the interaction turns out not to be significant.

III. Proxy Variables A proxy variable can still lead to bias if the assumptions are not satisfied. Say x3* = d0 + d1x1 + d2x2 + d3x3 + v3 (a violation). Then we are running: y = (b0 + b3d0) + (b1 + b3d1)x1 + (b2 + b3d2)x2 + b3d3x3 + (u + b3v3). The bias will depend on the signs of b3 and the dj. We can plausibly assume d1 > 0 and b3 > 0, so that the return to education is biased upward even when using the proxy variable. This bias may still be smaller than the omitted variable bias, though (if x3* and x1 are correlated less than x3 and x1 are).

III. Lagged Dependent Variables What if there are unobserved variables, and you can’t find reasonable proxy variables? We can include a lagged dependent variable to account for omitted factors that contribute to both past and current levels of y. We must think past and current y are related for this to make sense; it allows us to account for historical factors that cause current differences in the dependent variable.

III. Lagged Dependent Variables Ex: Model of Crime: the effect of expenditure on crime. crime = b0 + b1unem + b2expend + u. We are concerned that cities with lots of crime react by spending more on crime prevention, which biases the estimates; the coefficients on unem and expend are not intuitive. crime = b0 + b1unem + b2expend + b3crime(-1) + u. The lagged value controls for the fact that cities with high historical crime rates may spend more on crime prevention, and the coefficient estimates are now more intuitive.
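A simulation makes the mechanism concrete. The data-generating process below is entirely fabricated: an unobserved city factor drives both past and current crime, spending responds to past crime, and the true effect of spending is negative. Omitting the lag pushes the spending coefficient toward the wrong sign; conditioning on lagged crime recovers it.

```python
# Lagged-dependent-variable sketch with invented numbers; the true
# effect of expend on crime is -0.3.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
a = rng.normal(0, 1, n)                              # unobserved city factor
crime_last = 10 + a + rng.normal(0, 1, n)            # historical crime
expend = 2 + 0.8 * crime_last + rng.normal(0, 1, n)  # high-crime cities spend more
crime = 10 + a - 0.3 * expend + rng.normal(0, 1, n)

def coef_on_expend(X):
    beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
    return beta[1]

ones = np.ones(n)
b_naive = coef_on_expend(np.column_stack([ones, expend]))
b_lag = coef_on_expend(np.column_stack([ones, expend, crime_last]))

print(round(b_naive, 2), round(b_lag, 2))  # naive estimate is biased upward;
                                           # with the lag it sits near -0.3
```

Here lagged crime soaks up the unobserved city factor, so the remaining variation in spending is as good as random with respect to the error.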

IV. Properties of OLS under Measurement Error Sometimes we have the variable we want, but we think it is measured with error (e.g., how many hours did you work last year, or how many weeks you used child care when your child was young). When we use an imprecise measure of a variable in our regression, the model contains measurement error. Consequences of m.e.: the model is similar to the omitted variable bias case, and often the variable with measurement error is the one we’re most interested in. There are some conditions under which we still get unbiased results, and measurement error in y is different from measurement error in x.

IV. Measurement Error in a Dependent Variable Let y* denote the variable we’d like to explain, like annual savings. Model: y* = b0 + b1x1 + … + bkxk + u. Most often, respondents are not perfect in their reporting, so reported savings is denoted y. Define the measurement error as observed minus actual: e0 = y – y*. Thus, we are really estimating: y = b0 + b1x1 + … + bkxk + u + e0.

IV. Measurement Error in a Dependent Variable When will OLS produce unbiased results? We have assumed u has zero mean and that xj and u are uncorrelated. We need to assume e0 also has zero mean (otherwise it just biases b0) and, more importantly, that e0 and xj are uncorrelated; that is, the measurement error in y is statistically independent of each explanatory variable. As a result, the estimates are unbiased. We generally find Var(u + e0) = su2 + se02 > su2: when we have m.e. in the LHS variable, we get larger variances for the OLS estimators.
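Both claims on this slide (unbiased slope, larger error variance) can be verified in a toy simulation; all numbers are illustrative.

```python
# Measurement error in y, independent of x: the slope stays unbiased
# but the residual variance grows from Var(u) to Var(u) + Var(e0).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(0, 1, n)
u = rng.normal(0, 1, n)
y_true = 3 + 2 * x + u                # y*, e.g. actual savings
y_obs = y_true + rng.normal(0, 2, n)  # reported y; e0 independent of x

def fit(y):
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta[1], resid.var()

b_true, v_true = fit(y_true)
b_obs, v_obs = fit(y_obs)
print(round(b_true, 2), round(b_obs, 2))  # both slopes land near the true 2
print(v_obs > v_true)                     # but the mismeasured fit is noisier
```

The cost of m.e. in y is therefore precision, not bias, as long as e0 is unrelated to the regressors.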

IV. Measurement Error in a Dependent Variable Savings function: sav* = b0 + b1inc + b2size + b3educ + b4age + u, with e0 = sav – sav*. Is the m.e. correlated with the RHS variables? We may think families with higher incomes or more education are more likely to report savings accurately. We never know whether that’s true, so we assume there is no systematic relationship: i.e., wealthy or more educated families are just as likely to mis-report as non-wealthy, less educated ones. Scrap rates: log(scrap*) = b0 + b1grant + u. The error is assumed to be multiplicative, y = (y*)·a0 where e0 = log(a0), so log(scrap) = log(scrap*) + e0 and log(scrap) = b0 + b1grant + u + e0. It’s possible that measurement error is more likely at firms that receive a grant: they may underreport the scrap rate to make the grant look more effective, so as to get more in the future. We can’t verify whether that’s true, so we assume no relationship: i.e., the measurement error is not correlated with grant.

IV. Measurement Error in an Explanatory Variable Things are more complicated when measurement error occurs in the explanatory variable(s). Model: y = b0 + b1x1* + u. x1* is not observed; instead we only observe x1. Define the m.e. as observed minus actual: e1 = x1 – x1*. Assume E(e1) = 0 (not a strong assumption) and E(y | x1*, x1) = E(y | x1*), which means x1 doesn’t affect y after we control for x1*, i.e., u is uncorrelated with x1 and x1* (similar to the proxy variable assumption). Now we are estimating y = b0 + b1x1 + (u – b1e1).

IV. Measurement Error in an Explanatory Variable What kind of results will OLS give us? It depends on our assumption about the correlation between e1 and x1. Suppose Cov(x1, e1) = 0: then OLS remains unbiased, though the variances are larger (since Var(u – b1e1) = su2 + b12se12). The assumption that Cov(x1, e1) = 0 is analogous to the proxy variable assumption.

IV. Measurement Error in an Explanatory Variable What if that’s not the case? Suppose only that Cov(x1*, e1) = 0, called the classical errors-in-variables (CEV) assumption. This is more realistic than assuming Cov(x1, e1) = 0, but it implies: Cov(x1, e1) = E(x1e1) – E(x1)E(e1) = E[(x1* + e1)e1] = E(x1*e1) + E(e12) = 0 + se12 ≠ 0. So x1 is correlated with the error, and the estimate is biased and inconsistent.

IV. Measurement Error in an Explanatory Variable Notice that the multiplicative portion Var(x1*)/Var(x1) < 1, which means the estimate is biased toward zero; this is called attenuation bias, and it holds regardless of whether b1 is (+) or (–). A larger Var(x1*)/Var(x1) means the inconsistency of OLS is small, because the variation in the “noise” (the m.e.) is small relative to the variation in the true value. It’s more complicated with a multiple regression, but we can still expect attenuation bias under the classical errors-in-variables assumption. Economics 20 - Prof. Anderson
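Attenuation bias shows up clearly in a simulation; the true slope and variances below are invented so that Var(x1*)/Var(x1) is about one half.

```python
# Classical errors-in-variables: the OLS slope shrinks by roughly the
# factor Var(x*)/Var(x). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x_star = rng.normal(0, 1, n)  # true regressor
e1 = rng.normal(0, 1, n)      # measurement error, uncorrelated with x_star
x = x_star + e1               # what we actually observe
y = 2 + 1.0 * x_star + rng.normal(0, 1, n)  # true slope is 1.0

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

atten = x_star.var() / x.var()             # attenuation factor, about 0.5 here
print(round(beta[1], 2), round(atten, 2))  # slope is pulled toward zero
```

Notice the bias does not shrink with the sample size: the estimator converges to the attenuated value, not to the true slope, which is what "inconsistent" means here.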

IV. Measurement Error in an Explanatory Variable y = b0 + b1x1* + b2x2 + b3x3 + u. Assume u is uncorrelated with x1*, x1, x2, x3. If we assume e1 is uncorrelated with x1, x2, x3, then we get y = b0 + b1x1 + b2x2 + b3x3 + u – b1e1 and consistent estimates. But if e1 is uncorrelated with x2 and x3 but not necessarily with x1 (the CEV case), the estimate of b1 is attenuated. If x1* is uncorrelated with x2 and x3, we still get consistent estimates of b2 and b3; if this doesn’t hold, the other estimates will also be inconsistent (with size and direction indeterminate).

IV. Measurement Error in an Explanatory Variable Ex: GPA with measurement error. colGPA = b0 + b1faminc* + b2hsGPA + b3SAT + b4smoke + u, where faminc* is actual annual family income and faminc = faminc* + e1. Assuming CEV holds, the OLS estimator of b1 is attenuated (biased toward zero). colGPA = b0 + b1faminc + b2hsGPA + b3SAT + b4smoke* + u, with smoke = smoke* + e1. Here CEV is unlikely to hold, because those who don’t smoke are very unlikely to mis-report, while those who do smoke may mis-report, so the error and the actual number of times smoked (smoke*) are correlated. Deriving the implications of measurement error when CEV doesn’t hold is difficult and beyond the scope of the text.

V. Missing Data, Nonrandom Samples, Outlying Observations An introduction to data problems that can violate assumption MLR.2 (random sampling) of the Gauss-Markov assumptions: cases where data problems have no effect on the OLS estimates, and other cases where they produce biased estimates. Missing data: we generally collect data from a random sample of observations (people, schools, firms), then discover that information on key variables is missing for some of these observations.

V. Missing Data – Is it a Problem? Consequences: if an observation is missing data on any of the variables in the model, it can’t be used. Data missing at random: if the data are missing at random, using a sample restricted to observations with no missing values is fine; it simply reduces the sample size, and thus the precision of the estimates.

V. Missing Data – Is it a Problem? Data not missing at random: a problem can arise if the data are missing systematically; e.g., high income individuals refuse to provide income data, low education people often don’t report education, people with high IQ are more likely to report IQ. When missing data does not lead to bias: when the sample is chosen on the basis of the independent variables. Ex: savings regressed on income, age, and family size for the population of people 35 years and older. There is no bias because E(savings|income, age, size) is the same for any subset of the population described by income, age, and size.

V. Nonrandom Samples When missing data leads to bias: if the sample is chosen on the basis of the y variable, we have sample selection bias. Ex: estimating wealth based on education, experience, and age, where only those with wealth below $250k are included. OLS gives biased estimates because E(wealth|educ, exper, age) is not the same as the expected value conditional on wealth being less than $250k.
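The contrast between selecting on x (harmless) and selecting on y (sample selection bias) can be simulated directly; the wealth equation and cutoffs below are invented for illustration.

```python
# Selection on an explanatory variable vs. selection on the dependent
# variable. True slope on educ is 25; all numbers are fabricated.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
educ = rng.uniform(8, 20, n)
wealth = -200 + 25 * educ + rng.normal(0, 50, n)

def slope(keep):
    X = np.column_stack([np.ones(keep.sum()), educ[keep]])
    beta, *_ = np.linalg.lstsq(X, wealth[keep], rcond=None)
    return beta[1]

b_select_x = slope(educ <= 16)    # restrict on an x variable: still fine
b_select_y = slope(wealth < 200)  # restrict on y: truncation from above
print(round(b_select_x, 1), round(b_select_y, 1))  # only the second is badly biased
```

Truncating on y mechanically discards high-educ observations with large positive errors, flattening the estimated slope, while truncating on educ leaves E(wealth|educ) intact within the retained range.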

V. Outliers / Influential Observations Sometimes an individual observation can be very different from the others. It is “influential” if dropping it from the analysis changes the key OLS estimates by a lot; this is particularly important with small data sets. OLS is susceptible to outliers because, by definition, it minimizes the sum of squared residuals, and an outlier will have a “large” residual. Causes of outliers: errors in data entry (one reason why looking at summary statistics is important), or sometimes the observation is just truly very different from the others.

V. Outliers / Influential Observations Example: R&D intensity and firm size. With the largest firm dropped from the sample, the coefficient on sales more than triples and is now statistically significant.

V. Outliers It is not unreasonable to fix observations where it’s clear there was just an extra zero entered or left off, etc. It is also not unreasonable to drop observations that appear to be extreme outliers, although readers may prefer to see estimates both with and without the outliers. We can use Stata to investigate outliers graphically.
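How much one bad point can move the estimates is easy to demonstrate in a small sample; the data below are simulated and the "extra zero" error is fabricated.

```python
# One influential observation in a small sample: a value of about 21
# entered as 210 (an extra-zero data-entry error) shifts the slope a lot.
import numpy as np

rng = np.random.default_rng(7)
n = 30
x = rng.uniform(0, 10, n)
y = 1 + 2 * x + rng.normal(0, 1, n)  # true slope is 2

def slope(xv, yv):
    X = np.column_stack([np.ones(len(xv)), xv])
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return beta[1]

b_clean = slope(x, y)
# Append one corrupted high-leverage observation at x = 10.
x_bad = np.append(x, 10.0)
y_bad = np.append(y, 210.0)
b_dirty = slope(x_bad, y_bad)
print(round(b_clean, 2), round(b_dirty, 2))  # the single outlier dominates
```

This is why comparing summary statistics, and estimates with and without suspect observations, is a standard check before trusting OLS results from a small data set.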