# Multiple Regression Analysis: Specification And Data Issues

Chapter 9

I. Introduction
Failure of the zero conditional mean assumption: correlation between the error, u, and one or more explanatory variables.
- Why variables can be endogenous, and possible remedies
- Functional form misspecification: if an omitted variable is a function of an explanatory variable already in the model, the model suffers from functional form misspecification
- Using proxy variables to address omitted variable bias
- Measurement error: not all variables are measured accurately

II. Functional Form
A regression model suffers from misspecification when it does not properly account for the relationship between the dependent and explanatory variables.
wage = b0 + b1educ + b2exper + u
- Omitting exper² or exper*educ when they belong in the model can lead to biased estimates of all regressors
- Using wage rather than log(wage) when the latter satisfies the Gauss-Markov assumptions: using the wrong form of a variable to relate the LHS and RHS can likewise bias the estimates of all regressors

II. Functional Form
We can change the assumed relationship by:
- using logs on the RHS, LHS, or both
- using quadratic forms of the x's
- using interactions of the x's
How do we know if we've gotten the right functional form for our model? Use an F-test for joint exclusion restrictions on the extra terms to detect misspecification.
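The joint exclusion F-test can be sketched with simulated data. Everything below is hypothetical (variable names, coefficients, and noise levels are invented for illustration): the true model contains a quadratic in exper, so testing the squared and interaction terms jointly should reject the purely linear specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
educ = rng.uniform(8, 18, n)
exper = rng.uniform(0, 30, n)
# Hypothetical DGP: the true model contains exper^2, so the restricted
# (levels-only) model is misspecified.
lwage = 0.5 + 0.08 * educ + 0.04 * exper - 0.0015 * exper**2 + rng.normal(0, 0.2, n)

def ssr(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    return resid @ resid

X_r = np.column_stack([np.ones(n), educ, exper])      # restricted model
X_u = np.column_stack([X_r, exper**2, educ * exper])  # adds exper^2, educ*exper
q = 2                                # number of exclusion restrictions
df = n - X_u.shape[1]                # unrestricted df: n - k - 1
F = ((ssr(X_r, lwage) - ssr(X_u, lwage)) / q) / (ssr(X_u, lwage) / df)
# A large F rejects the restricted, purely linear specification
```

Comparing F to the F(q, n-k-1) critical value gives the test decision; the same SSR-based formula works for any set of higher-order or interaction terms.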

II. Functional Form
Ex: Model of Crime. Quadratics or not?
- Each of the squared terms is individually and jointly significant (F = 31.37, df = 3; 2,713)
- Adding squares makes interpretation more difficult: before, the intuitive (–) sign on pcnv suggested the conviction rate deters crime. Now the level term is positive and the quadratic is negative: at low levels convictions have no deterrent effect; they are only effective at high levels.
- Note: Don't square qemp86, because it's a discrete variable taking on only a few values.

II. Functional Form
How do you know what to try?
- Use economic theory to guide you
- Think about the interpretation: does it make more sense for x to affect y in percentage terms (use logs) or in absolute terms?
- Does it make more sense for the derivative with respect to x1 to vary with x1 (quadratic), to vary with x2 (interaction), or to be fixed?

II. Ramsey's RESET
We know how to test joint exclusion restrictions for higher-order terms or interactions, but:
- it can be tedious to add and test extra terms
- we may find a squared term matters when really using logs would be even better
A general test of functional form is Ramsey's regression specification error test (RESET).
- Intuition: if the specification is okay, no nonlinear functions of the independent variables should be significant when added to the original equation.
- Cost: degrees of freedom

II. Ramsey's RESET
RESET relies on a trick similar to the special form of the White test: instead of adding functions of the x's directly, we add and test functions of ŷ.
y = b0 + b1x1 + … + bkxk + d1ŷ² + d2ŷ³ + error
- Don't interpret the parameter estimates from this equation; it is used only to test the inclusion of the extra terms.
- H0: d1 = 0, d2 = 0, using F ~ F(2, n-k-3)
- A significant F-statistic suggests some sort of functional form problem.
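A minimal RESET implementation makes the trick concrete. The data-generating process here is hypothetical, chosen so the level-on-level model is misspecified while the log-log model is correct:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(1, 10, n)
# Hypothetical DGP: the true relationship is log-log, so regressing
# y on x in levels is misspecified.
y = np.exp(0.5 + 0.3 * np.log(x) + rng.normal(0, 0.1, n))

def reset_F(X, yvec):
    b = np.linalg.lstsq(X, yvec, rcond=None)[0]
    yhat = X @ b
    ssr_r = np.sum((yvec - yhat) ** 2)
    X_aug = np.column_stack([X, yhat**2, yhat**3])   # add yhat^2 and yhat^3
    b_aug = np.linalg.lstsq(X_aug, yvec, rcond=None)[0]
    ssr_u = np.sum((yvec - X_aug @ b_aug) ** 2)
    df = len(yvec) - X_aug.shape[1]                  # n - k - 3
    return ((ssr_r - ssr_u) / 2) / (ssr_u / df)

F_level = reset_F(np.column_stack([np.ones(n), x]), y)                # misspecified
F_log = reset_F(np.column_stack([np.ones(n), np.log(x)]), np.log(y))  # correct form
```

F_level should be large (rejecting the level specification) while F_log should be an unremarkable draw from a central F distribution, mirroring the housing-price example below.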

II. Ramsey's RESET
Ex: Housing Price Equation (n = 88)
price = b0 + b1lotsize + b2sqrft + b3bdrms + u
- RESET statistic (up to ŷ³) = 4.67 ~ F(2,82), p-value = .012: evidence of functional form misspecification
lprice = b0 + b1llotsize + b2lsqrft + b3bdrms + u
- RESET statistic (up to ŷ³) = 2.56 ~ F(2,82), p-value = .084: no evidence of functional form misspecification
On the basis of RESET, the log equation is preferred. But just because the log equation "passed" RESET, does that mean it's the right specification? We should still use economic theory to determine whether the functional form makes sense.

III. Proxy Variables
Previously, we assumed functional form misspecification could be resolved because the relevant data were available. What if the model is misspecified because no data are available on an important x variable?
log(wage) = b0 + b1educ + b2exper + b3abil + u
- We would like to hold ability fixed, but we have no measure of it; excluding it causes the parameter estimates to be biased.
- Potential solution: obtain a proxy variable for the omitted variable.

III. Proxy Variables
A proxy variable is something that is related to the unobserved variable that we'd like to control for in our analysis but can't. Ex: IQ as a proxy for ability.
x3* = d0 + d3x3 + v3, where * denotes the unobserved variable
- v3 is an error reflecting that x3* and x3 are not exactly related
- d0 allows the variables to be measured on different scales (e.g., the IQ scale need not be the scale on which ability is measured)
- We then simply substitute x3 for x3* in y = b0 + b1x1 + b2x2 + b3x3* + u

III. Proxy Variables
What do we need for this solution to give us unbiased estimates of b1 and b2? Assumptions on u and v3:
1. u is uncorrelated with x1, x2, and x3* (standard), and also with x3: once x1, x2, and x3* are included, x3 is irrelevant (x3 doesn't directly affect y other than through x3*)
2. v3 is uncorrelated with x1, x2, and x3. For v3 to be uncorrelated with x1 and x2, x3 must be a good proxy for x3*. Formally, E(x3* | x1, x2, x3) = E(x3* | x3) = d0 + d3x3: once x3 is controlled for, the expected value of x3* does not depend on x1 or x2.

III. Proxy Variables
E(abil | educ, exper, IQ) = E(abil | IQ) = d0 + d3IQ
- Implies ability only changes with IQ, not with educ and exper (once IQ is included).
So we are really running:
y = (b0 + b3d0) + b1x1 + b2x2 + b3d3x3 + (u + b3v3)
- redefined intercept, error term, and x3 coefficient
Can rewrite as: y = a0 + a1x1 + a2x2 + a3x3 + e
- We get unbiased estimates of a0, b1 = a1, b2 = a2, and a3, but we won't recover the original b0 or b3.
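A simulation can illustrate why the proxy works. All coefficients and the data-generating process below are hypothetical: ability is built from IQ plus noise v3 that is unrelated to educ, so including IQ recovers the return to education while omitting ability does not.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
IQ = rng.normal(100, 15, n)
v3 = rng.normal(0, 1, n)                        # proxy error, unrelated to educ
abil = -5 + 0.05 * IQ + v3                      # abil = d0 + d3*IQ + v3
educ = 6 + 0.08 * IQ + rng.normal(0, 1.5, n)    # educ correlated with abil via IQ only
lwage = 1 + 0.08 * educ + 0.10 * abil + rng.normal(0, 0.3, n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Omitting ability: educ coefficient absorbs omitted-ability bias
b_short = ols(np.column_stack([np.ones(n), educ]), lwage)[1]
# Including the proxy IQ: educ coefficient is (approximately) unbiased
b_proxy = ols(np.column_stack([np.ones(n), educ, IQ]), lwage)[1]
```

Here b_short overstates the true return of 0.08 while b_proxy is close to it, matching the pattern in the IQ example on the next slide.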

III. Proxy Variables
IQ as a proxy for ability: we want to estimate the return to education.
- 6.5% when the regression is run without an ability proxy
- 5.4% when IQ is included
- Interacting educ*IQ allows the return to education to differ across ability levels; the interaction turns out not to be significant, though.

III. Proxy Variables
A proxy variable can still lead to bias if the assumptions are not satisfied. Say x3* = d0 + d1x1 + d2x2 + d3x3 + v3 (a violation). Then we are running:
y = (b0 + b3d0) + (b1 + b3d1)x1 + (b2 + b3d2)x2 + b3d3x3 + (u + b3v3)
- The bias depends on the signs of b3 and the dj. We can plausibly assume d1 > 0 and b3 > 0, so the return to education is biased upward even when using the proxy variable.
- This bias may still be smaller than the omitted variable bias, though (if x3* is less correlated with x1 than x3 is).

III. Lagged Dependent Variables
What if there are unobserved variables and you can't find reasonable proxy variables? You can include a lagged dependent variable to account for omitted factors that contribute to both past and current levels of y.
- You must believe past and current y are related for this to make sense.
- This allows you to account for historical factors that cause current differences in the dependent variable.

III. Lagged Dependent Variables
Ex: Model of Crime: the effect of expenditure on crime
crime = b0 + b1unem + b2expend + u
- Concern: cities with lots of crime react by spending more on crime prevention, producing biased estimates; the coefficients on unem and expend are not intuitive.
crime = b0 + b1unem + b2expend + b3crime₋₁ + u
- The lagged value controls for the fact that cities with high historical crime rates may spend more on crime prevention; the coefficient estimates are now more intuitive.
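The logic can be mimicked in a simulation with entirely hypothetical numbers: an unobserved city effect drives both historical crime and current spending, so the spending coefficient has the wrong sign until lagged crime is included as a control.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
c = rng.normal(0, 1, n)                              # unobserved city effect
unem = rng.normal(8, 2, n)
crime_prev = 10 + 2 * c + rng.normal(0, 1, n)        # historical crime, driven by c
expend = 5 + 0.5 * crime_prev + rng.normal(0, 1, n)  # spending reacts to past crime
crime = 20 + 1.0 * unem - 0.5 * expend + 2 * c + rng.normal(0, 1, n)

def coef(X, y, j):
    return np.linalg.lstsq(X, y, rcond=None)[0][j]

# Without the lag: expend is correlated with c, so its coefficient is biased upward
b_nolag = coef(np.column_stack([np.ones(n), unem, expend]), crime, 2)
# With lagged crime: crime_prev absorbs c, and the expend coefficient
# recovers the true deterrent effect of -0.5
b_lag = coef(np.column_stack([np.ones(n), unem, expend, crime_prev]), crime, 2)
```

In this setup b_nolag is actually positive (spending appears to raise crime), while b_lag is close to the true -0.5, which is the intuition behind adding crime₋₁.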

IV. Properties of OLS under Measurement Error
Sometimes we have the variable we want, but we think it is measured with error.
- Ex: how many hours did you work last year? how many weeks did you use child care when your child was young?
When we use an imprecise measure of a variable in our regression, the model contains measurement error.
Consequences of measurement error:
- The problem is similar to omitted variable bias
- Often the variable measured with error is the one we're most interested in
- There are some conditions under which we still get unbiased results
- Measurement error in y is different from measurement error in x

IV. Measurement Error in a Dependent Variable
Let y* denote the variable we'd like to explain, like annual savings.
Model: y* = b0 + b1x1 + … + bkxk + u
Most often, respondents are not perfect in their reporting, so reported savings is denoted y. Define the measurement error as observed minus actual: e0 = y – y*.
Thus, we are really estimating: y = b0 + b1x1 + … + bkxk + u + e0

IV. Measurement Error in a Dependent Variable
When will OLS produce unbiased results?
- We have assumed u has zero mean and that each xj is uncorrelated with u.
- We also need e0 to have zero mean (otherwise only b0 is biased) and, more importantly, e0 to be uncorrelated with each xj. That is, the measurement error in y is statistically independent of each explanatory variable. Then the estimates are unbiased.
- Generally, Var(u + e0) = σu² + σe0² > σu², so measurement error in the LHS variable yields larger variances for the OLS estimators.
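A small Monte Carlo with hypothetical numbers shows both claims at once: adding independent noise to y leaves the slope estimate unbiased but inflates its sampling variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 1000
slopes = np.empty((reps, 2))
for r in range(reps):
    x = rng.normal(0, 1, n)
    ystar = 1 + 2 * x + rng.normal(0, 1, n)   # true dependent variable, slope = 2
    y = ystar + rng.normal(0, 2, n)           # reported y = y* + e0, e0 indep. of x
    X = np.column_stack([np.ones(n), x])
    slopes[r, 0] = np.linalg.lstsq(X, ystar, rcond=None)[0][1]
    slopes[r, 1] = np.linalg.lstsq(X, y, rcond=None)[0][1]

mean_clean, mean_noisy = slopes.mean(axis=0)
sd_clean, sd_noisy = slopes.std(axis=0)
# Both means are near the true slope 2; the noisy version has a larger sd
```

The averages of both columns sit near 2, but the spread of the estimates is visibly larger when y is measured with error, matching Var(u + e0) > Var(u).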

IV. Measurement Error in a Dependent Variable
Savings Function
sav* = b0 + b1inc + b2size + b3educ + b4age + u, with e0 = sav – sav*
- Is the measurement error correlated with the RHS variables? One might think families with higher incomes or more education report savings more accurately. We can never know whether that's true, so we assume there is no systematic relationship: the wealthy and more educated are just as likely to misreport as anyone else.
Scrap Rates
log(scrap*) = b0 + b1grant + u
- The error is assumed to be multiplicative: scrap = scrap*·a0, where e0 = log(a0), so log(scrap) = log(scrap*) + e0 and log(scrap) = b0 + b1grant + u + e0
- It's possible that firms receiving a grant are more likely to underreport their scrap rate, to make the grant look effective and secure more funding in the future. We can't verify whether that's true, so we assume the measurement error is uncorrelated with grant.

IV. Measurement Error in an Explanatory Variable
Things are more complicated when measurement error occurs in an explanatory variable.
Model: y = b0 + b1x1* + u
- x1* is not observed; instead we only observe x1. Define the measurement error as observed minus actual: e1 = x1 – x1*.
- Assume E(e1) = 0 (not a strong assumption) and E(y | x1*, x1) = E(y | x1*): x1 doesn't affect y after controlling for x1*, so u is uncorrelated with x1 and x1* (similar to the proxy variable assumption).
- We are now estimating y = b0 + b1x1 + (u – b1e1)

IV. Measurement Error in an Explanatory Variable
What kind of results will OLS give us? It depends on our assumption about the correlation between e1 and x1.
Suppose Cov(x1, e1) = 0:
- OLS remains unbiased
- Variances are larger, since Var(u – b1e1) = σu² + b1²σe1²
- The assumption Cov(x1, e1) = 0 is analogous to the proxy variable assumption.

IV. Measurement Error in an Explanatory Variable
What if that's not the case? Suppose instead that Cov(x1*, e1) = 0, called the classical errors-in-variables (CEV) assumption, which is often more realistic than assuming Cov(x1, e1) = 0. This means:
Cov(x1, e1) = E(x1e1) – E(x1)E(e1) = E[(x1* + e1)e1] = E(x1*e1) + E(e1²) = 0 + σe1² ≠ 0
So x1 is correlated with the error term, and the estimate is biased and inconsistent.

IV. Measurement Error in an Explanatory Variable
Under CEV, plim b̂1 = b1[Var(x1*)/(Var(x1*) + Var(e1))] = b1[Var(x1*)/Var(x1)]
- Notice that the multiplicative term Var(x1*)/Var(x1) < 1, so the estimate is biased toward zero; this is called attenuation bias.
- This is true regardless of whether b1 is (+) or (–).
- A larger Var(x1*)/Var(x1) means the inconsistency of OLS is small, because the variation in the "noise" (the measurement error) is small relative to the variation in the true value.
- Things are more complicated in multiple regression, but we can still expect attenuation bias under classical errors in variables.
Economics 20 - Prof. Anderson
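The attenuation factor can be checked directly by simulation. The variances below are hypothetical: with Var(x1*) = 4 and Var(e1) = 1, the slope should shrink by the factor 4/5.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
xstar = rng.normal(0, 2, n)           # true regressor, Var(x*) = 4
e1 = rng.normal(0, 1, n)              # CEV: e1 uncorrelated with x*, Var(e1) = 1
x = xstar + e1                        # observed, error-ridden regressor
y = 1 + 0.5 * xstar + rng.normal(0, 1, n)   # true slope = 0.5

X = np.column_stack([np.ones(n), x])
b1_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
attenuation = 4 / (4 + 1)             # Var(x*)/(Var(x*) + Var(e1)) = 0.8
# b1_hat converges to 0.5 * 0.8 = 0.4 rather than the true 0.5
```

Shrinking Var(e1) toward zero pushes the attenuation factor toward 1 and b1_hat back toward 0.5, which is the "noise small relative to signal" point above.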

IV. Measurement Error in an Explanatory Variable
y = b0 + b1x1* + b2x2 + b3x3 + u
- Assume u is uncorrelated with x1*, x1, x2, and x3.
- If e1 is uncorrelated with x1, x2, and x3, then we are estimating y = b0 + b1x1 + b2x2 + b3x3 + u – b1e1 and get consistent estimates.
- But under CEV, where e1 is uncorrelated with x2 and x3 but not with x1, b1 is attenuated. If x1* is uncorrelated with x2 and x3, we still get consistent estimates of b2 and b3; if this doesn't hold, the other estimates are also inconsistent (with size and direction indeterminate).

IV. Measurement Error in an Explanatory Variable
Ex: GPA with measurement error
colGPA = b0 + b1faminc* + b2hsGPA + b3SAT + b4smoked + u
- faminc* is actual annual family income; we observe faminc = faminc* + e1. Assuming CEV holds, the OLS estimator of b1 is attenuated (biased toward zero).
colGPA = b0 + b1faminc + b2hsGPA + b3SAT + b4smoked* + u
- smoked* is the actual number of times smoked; we observe self-reported smoked = smoked* + e1. CEV is unlikely to hold here, because those who don't smoke are very unlikely to misreport, while those who do may misreport in a way that makes the error correlated with the actual number of times smoked (smoked*).
- Deriving the implications of measurement error when CEV doesn't hold is difficult and beyond the scope of the text.

V. Missing Data, Nonrandom Samples, Outlying Observations
An introduction to data problems that can violate the random sampling assumption (MLR.2) of the Gauss-Markov assumptions:
- cases where data problems have no effect on the OLS estimates
- other cases where they produce biased estimates
Missing Data
- We generally collect data from a random sample of observations (people, schools, firms), then discover that information on key variables is missing for some observations.

V. Missing Data – Is it a Problem?
Consequences: if an observation is missing data on any of the variables in the model, it can't be used.
Data missing at random
- If data are missing at random, using the sample restricted to observations with no missing values is fine.
- It simply reduces the sample size, and thus the precision of the estimates.

V. Missing Data – Is it a Problem?
Data not missing at random
- A problem can arise if the data are missing systematically:
- high-income individuals refuse to provide income data
- low-education people often don't report education
- people with high IQ are more likely to report IQ
When missing data do not lead to bias: the sample is chosen on the basis of the independent variables
- Ex: savings regressed on income, age, and family size for the population of people 35 years and older
- No bias, because E(savings | income, age, size) is the same for any subset of the population described by income, age, and size.

V. Nonrandom Samples
When missing data lead to bias: if the sample is chosen on the basis of the y variable, we have sample selection bias.
- Ex: estimating wealth based on education, experience, and age, where only those with wealth below $250,000 are included.
- OLS gives biased estimates because E(wealth | educ, exper, age) is not the same as the expected value conditional on wealth being less than $250,000.
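A simulation makes the contrast concrete. All numbers are hypothetical (wealth measured in $1000s): selecting on an explanatory variable leaves the slope intact, while truncating on wealth below $250k flattens it.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
educ = rng.uniform(8, 18, n)
wealth = -200 + 25 * educ + rng.normal(0, 80, n)   # hypothetical DGP, $1000s

def slope(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_full = slope(educ, wealth)                 # full random sample: slope ~ 25
keep_x = educ >= 12                          # selection on an explanatory variable
b_selx = slope(educ[keep_x], wealth[keep_x]) # still ~ 25: no bias
keep_y = wealth < 250                        # selection on the dependent variable
b_sely = slope(educ[keep_y], wealth[keep_y]) # attenuated: sample selection bias
```

Truncation bites hardest where predicted wealth is highest (high educ), so the observed relationship between educ and wealth is flattened, exactly the bias described above.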

V. Outliers / Influential Observations
Sometimes an individual observation is very different from the others.
- An observation is "influential" if dropping it changes the key OLS estimates by a lot; this is particularly important with small data sets.
- OLS is susceptible to outliers because, by definition, it minimizes the sum of squared residuals, and an outlier has a "large" residual.
Causes of outliers:
- errors in data entry (one reason why looking at summary statistics is important)
- sometimes the observation really is just very different from the others
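A tiny example with hypothetical data shows how a single high-leverage entry error can swamp the fit: one mistyped x value drags the slope toward zero.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
x = rng.uniform(0, 10, n)
y = 2 + 0.5 * x + rng.normal(0, 1, n)   # true slope = 0.5

# Hypothetical data-entry error: the true x was 9.0, entered as 90
x_out = np.append(x, 90.0)
y_out = np.append(y, 6.5)

def fit(xv, yv):
    X = np.column_stack([np.ones(len(xv)), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0]

b_clean = fit(x, y)          # slope near 0.5
b_all = fit(x_out, y_out)    # slope pulled toward zero by the leverage point
```

Because the mistyped point sits far from the mass of the x values, it dominates the sum of squared residuals, so the fitted line pivots toward it; comparing estimates with and without the point is exactly the "influential observation" check described above.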

V. Outliers / Influential Observations
Example: R&D Intensity and Firm Size
- When the outlying firm is dropped, the coefficient on sales more than triples and is now statistically significant.

V. Outliers
- It is not unreasonable to fix observations where it's clear an extra zero was entered or left off, etc.
- It is not unreasonable to drop observations that appear to be extreme outliers, although readers may prefer to see estimates with and without the outliers.
- Stata can be used to investigate outliers graphically.