III. Model Building
Model building: writing a model that will provide a good fit to a set of data & that will give good estimates of the mean value of y and good predictions of y for given values of the explanatory variables.
Why is model building important, both in statistical analysis & in analysis in general? Theory & empirical research
“A social science theory is a reasoned and precise speculation about the answer to a research question, including a statement about why the proposed answer is correct.” “Theories usually imply several more specific descriptive or causal hypotheses” (King et al., page 19).
A model is “a simplification of, and approximation to, some aspect of the world.” “Models are never literally ‘true’ or ‘false,’ although good models abstract only the ‘right’ features of the reality they represent” (King et al., page 49).
Remember: social construction of reality (including alleged causal relations); skepticism; rival hypotheses; & contradictory evidence. What kinds of evidence (or perspectives) would force me to revise or jettison the model?
Three approaches to model building Begin with a linear model for its simplicity & as a rough approximation of the y/x relationships. Begin with a curvilinear model to capture the complexities of the y/x relationships. Begin with a model that incorporates linearity &/or curvilinearity in y/x relationships according to theory & observation.
The predominant approach used to be to start with a simple model & to test it against progressively more complex models. This approach suffers, however, from the problem of omitted variables in the simpler models. Increasingly common, then, is the approach of starting with a more complex model & testing it against simpler models (Greene, Econometric Analysis).
The point of departure for model-building is trying to grasp how the outcome variable y varies as the levels of an explanatory variable change. We have to know how to write a mathematical equation to model this relationship.
In what follows, let’s pretend that we’ve already done careful univariate & bivariate exploratory data analysis via graphs & numerical summaries (although, on the other hand, the exercise requires here & there that we didn’t do such careful groundwork…).
Suppose we want to model a person’s performance on an exam, y, as a function of a single explanatory variable, x, the person’s amount of study time. It may be that the person’s score, y, increases in a straight line as the amount of study time increases from 1 to 6 hours.
If this were the entire range of x values used to fit the equation, a linear model would be appropriate: E(y) = β0 + β1x
What, though, if the range of sample hours increased to 10 hours or more: would a straight-line model continue to be satisfactory? Quite possibly, the increase in exam score for a unit increase in study time would decrease, causing some amount of curvature in the y/x relationship.
What kind of model would be appropriate? A second-order polynomial, called a quadratic: E(y) = β0 + β1x + β2x²
β0: the value of y when x = 0; shifts the parabola up or down (the y-intercept).
β1: the slope of y on x when x = 0 (which we don’t really care about & won’t interpret).
β2: if negative, the parabola opens downward; if positive, upward; its magnitude indicates the degree of curvature.
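The roles of the three coefficients can be sketched in a few lines of Python; the coefficient values below are invented for illustration, not estimates from the study-time example:

```python
# Fitted mean & turning point of a quadratic E(y) = b0 + b1*x + b2*x**2.
# All coefficient values here are hypothetical.

def quadratic(x, b0, b1, b2):
    """Predicted mean of y at a given x."""
    return b0 + b1 * x + b2 * x ** 2

def turning_point(b1, b2):
    """x where the parabola peaks (b2 < 0) or bottoms out (b2 > 0)."""
    return -b1 / (2 * b2)

# A downward-opening parabola: score rises with study time, then flattens.
b0, b1, b2 = 40.0, 10.0, -0.5

peak_x = turning_point(b1, b2)
peak_y = quadratic(peak_x, b0, b1, b2)
print(peak_x, peak_y)   # 10.0 90.0
```

Flipping the sign of b2 to +0.5 would make the same code describe an upward-opening parabola with a minimum rather than a maximum.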
Recall that the model is valid only for the range of x values used to estimate the model. What does this imply about predictions for values that exceed this range?
Testing a second-order equation
If the second-order term tests significant, do not interpret the first-order term (which represents y’s slope on x1 when x1 = 0).
Let’s continue with this model-building strategy but change the substantive topic. We’ll focus on the relationship of average hourly wage to a series of explanatory variables (e.g., education & job tenure with the same employer). Let’s explore the relationship.
. gen educ2=educ^2
. su educ educ2
. reg wage educ educ2
[regression output omitted in transcript: Number of obs = 526; F(2, 523); coefficients for educ, educ2, & _cons]
If the second-order term tests significant, we don’t interpret the first-order term. Why not?
Let’s figure out what the second-order term means in this model. What do the following graphs say about the relationship of wage to years of education?
. twoway qfitci wage educ
. scatter wage educ || mband wage educ, ba(8)
. lowess wage educ, bwidth(.2)
Median Band Regression & Lowess Smoothing
Median band regression (scatter y x || mband y x) & lowess smoothing (lowess y x) are two very helpful tools for detecting (1) how a model fits or doesn’t fit particular segments of the x-values (e.g., poorer to richer persons) & thus (2) non-linearity. Hence they’re really useful at all stages of exploratory data analysis. Another option: locpoly y x1
What did the graphs say about the relationship of wage to years of education? Let’s answer this question more precisely by predicting the direction & magnitude of the wage/education relationship at specific levels of education, identified via ‘su x, detail’ and/or our knowledge of the issue. For a quadratic, the predicted slope at a given level of x is β1 + 2β2x.
Don’t get hung up with every segment of the curve. The curve is only an approximation. Thus it may not fit the data well within any particular range (especially where there are few observations).
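The slope of a fitted quadratic at a chosen level of x is its derivative, β1 + 2β2x, which is the quantity Stata's lincom estimates. A minimal Python sketch; the coefficients below are hypothetical, not the slides' actual estimates:

```python
# Marginal slope of a quadratic fit: d E(y)/dx = b1 + 2*b2*x.
# Coefficient values are invented for illustration.

def marginal_slope(b1, b2, x):
    """Change in predicted y per one-unit change in x, evaluated at x."""
    return b1 + 2 * b2 * x

b1, b2 = -0.60, 0.05   # hypothetical wage-on-education quadratic

# Evaluate the slope at low, middle, & high education levels.
for educ in (8, 12, 16):
    print(educ, marginal_slope(b1, b2, educ))
```

Because b2 is positive in this invented example, the slope grows with education: each additional year matters more at higher education levels.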
Remember, moreover, that Adj R² was just .198 for this model. Obviously there are other relevant explanatory variables. Not only do we need to identify them, but we also need to ask: are they independent & linear? independent & curvilinear? or are they interactional?
Interaction: the effect of a 1-unit change in one explanatory variable depends on the level of another explanatory variable. With interaction, both the y-intercept & the regression slope change; i.e. the regression lines are not parallel. Interaction Effects
E.g., how do education & job tenure interact with regard to predicted wage?
. gen educXtenure=educ*tenure
. reg wage educ tenure educXtenure
[regression output omitted in transcript: Number of obs = 526; F(3, 522); coefficients for educ, tenure, educXtenure, & _cons]
Let’s interpret the model. If the interaction term tests significant, we don’t interpret its base variables. Why not? Each base variable represents its y/x slope when the other x = 0. We don’t care about this.
To interpret the interaction term, we use ‘su x1, d’ & our knowledge of the subject to identify key levels of educ & tenure (or use one SD above mean, mean, & one SD below mean): Then we predict the slope-effect of educXtenure on wage at the specified levels, as follows:
How the interaction of mean education with varying levels of tenure relates to average hourly wage:
. lincom educ + (educXtenure*2)
. lincom educ + (educXtenure*10)
. lincom educ + (educXtenure*18)
[lincom estimates, standard errors, t statistics, & confidence intervals omitted in transcript]
How the interaction of mean tenure with varying levels of education relates to average hourly wage:
. lincom tenure + (8*educXtenure)
. lincom tenure + (12*educXtenure)
. lincom tenure + (20*educXtenure)
[lincom estimates, standard errors, t statistics, & confidence intervals omitted in transcript]
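Each lincom call above estimates a conditional slope: with an interaction term, the slope of wage on educ is b_educ + b_int × tenure (and symmetrically for tenure). A Python sketch of that arithmetic, with invented coefficients:

```python
# Conditional slope under an interaction model
#   E(wage) = b0 + b_educ*educ + b_ten*tenure + b_int*educ*tenure,
# so d E(wage)/d educ = b_educ + b_int*tenure.
# Coefficient values are hypothetical placeholders.

def conditional_slope(b_main, b_int, other_level):
    """Slope on one variable, evaluated at a level of the other."""
    return b_main + b_int * other_level

b_educ, b_int = 0.30, 0.015   # invented estimates

# The tenure levels used in the lincom calls above.
for tenure in (2, 10, 18):
    print(tenure, conditional_slope(b_educ, b_int, tenure))
```

With a positive interaction coefficient, an extra year of education is worth more to workers with longer tenure, which is exactly the non-parallel-slopes pattern the slides describe.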
With significant interaction, to repeat, both the regression coefficient & the y-intercept change as the levels of the second interacting variable change. That is, the regression slopes are unequal. What does this mean in the model for average hourly wage?
Our interaction model yielded an Adj R² of .317. Given the non-linearity we’ve uncovered, could we increase the explanatory power by combining quadratic & interaction terms?
. reg wage educ tenure educXtenure educ2 tenure2
[regression output omitted in transcript: Number of obs = 526; F(5, 520); coefficients for educ, tenure, educXtenure, educ2, tenure2, & _cons]
Let’s assess the model’s fit. Let’s conduct a test of nested models, comparing this new, ‘full’ model to each of the previous, ‘reduced’ models.
Did adding educXtenure, educ2 & tenure2 boost the model’s variance-explaining power by a statistically significant margin?
. test educXtenure educ2 tenure2
( 1) educXtenure = 0
( 2) educ2 = 0
( 3) tenure2 = 0
[F(3, 520) & Prob > F omitted in transcript]
Did adding educ2 & tenure2 boost the model’s variance-explaining power by a statistically significant margin over the interaction model?
. test educ2 tenure2
( 1) educ2 = 0
( 2) tenure2 = 0
[F(2, 520) & Prob > F omitted in transcript]
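The nested-model ("partial F") test behind Stata's test command can be computed by hand from the two models' residual sums of squares. A sketch; the RSS values below are hypothetical, not the slides' actual fit statistics:

```python
# Partial F test for nested OLS models:
#   F = ((RSS_reduced - RSS_full) / q) / (RSS_full / df_full)
# q = number of restrictions (terms dropped from the full model),
# df_full = residual degrees of freedom of the full model.

def partial_f(rss_reduced, rss_full, q, df_full):
    """F statistic comparing a reduced model to the full model."""
    return ((rss_reduced - rss_full) / q) / (rss_full / df_full)

# Hypothetical residual sums of squares for the two models:
f = partial_f(5000.0, 4500.0, q=3, df_full=520)
print(round(f, 2))   # 19.26
```

A large F (relative to the F(q, df_full) critical value) means the dropped terms jointly explain a significant share of the variance, so the full model is preferred.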
To conduct a valid test of nested models: the number of observations for both the complete & reduced models must be equal; the functional form of y must be the same (e.g., we can’t compare outcome variable ‘wage’ to outcome variable ‘log-wage’). Valid testing of nested models
How do we compare non-nested models (i.e. models with the same number of explanatory variables), or nested models that don’t meet the criteria for comparative testing? Use either the AIC or BIC test statistics: the smaller the score, the better the model fits. Download the ‘fitstat’ command (see Long/Freese, Regression Models for Categorical Dependent Variables). Comparing non-nested models
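For reference, AIC & BIC for an OLS fit can be computed directly from the residual sum of squares; this is one common likelihood-based form (Stata's fitstat reports a comparable version). The n, k, & RSS values below are illustrative, not the slides':

```python
import math

# Information criteria from an OLS fit's residual sum of squares (rss),
# sample size n, & number of estimated parameters k. Smaller is better.

def aic(n, k, rss):
    """Akaike information criterion: fit term plus 2 per parameter."""
    return n * math.log(rss / n) + 2 * k

def bic(n, k, rss):
    """Bayesian information criterion: heavier ln(n)-per-parameter penalty."""
    return n * math.log(rss / n) + k * math.log(n)

# Comparing two models on the same data (hypothetical values):
print(aic(526, 7, 4500.0), bic(526, 7, 4500.0))
```

Because ln(526) is well above 2, BIC penalizes extra parameters more heavily than AIC, so BIC tends to favor the smaller of two candidate models.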
. reg science read write math female
. fitstat, saving(model1) bic
. reg science read write
. fitstat, using(model1) bic
The output tells whether or not the ‘current’ model is supported &, if it is supported, to what degree.
And we can display AIC &/or BIC in ‘estimates table’:
. reg science read write math
. estimates store model1
. estimates table model1, stats(N df_m adj_r2 aic bic)
‘ereturn list’ provides the codes.
For BIC or AIC, “The upshot is that ex post, neither model is discarded; we have merely revised our assessment of the comparative likelihood of the two in the face of the sample data” (Greene, Econometric Analysis, page 153). That is, the Bayesian approach compares “the two hypotheses rather than testing for the validity of one over the other” (Greene, Econometric Analysis, page 153).
Graphing multiple variables from regression models requires 3-D graphing capabilities (see, e.g., Systat, SAS). Here’s Stata’s crude version: Graphing the model
. gr3 wage educ educ2
. gr3 wage tenure tenure2
. twoway qfitci wage educ, bc(yellow)
. twoway qfitci wage tenure, bc(red)
What do the graphs tell us about the relationship of wage to years of education & to years of job tenure? Using lincom to predict the slope for wage at specific values of the interaction variables is important, too.
Adj R² was .360. Can we improve the model by adding dummy variables? Let’s explore the possibility for females versus males; nonwhites versus whites; & urban (smsa) versus rural.
But first, a quick detour. We find evidence that 52% of the population of interest is female & 16% is nonwhite: are the sample percentages significantly different from these population benchmarks? How do we statistically assess these possibilities?
. ci female nonwhite, binomial
[binomial exact confidence intervals omitted in transcript]
We’ll first check the confidence intervals. Next we’ll try prtest to test the proportions.
. prtest female=.52
One-sample test of proportion female: Number of obs = 526
Ho: proportion(female) = .52
[z statistics & p-values for the two-sided & one-sided alternatives omitted in transcript]
. prtest nonwhite=.16
One-sample test of proportion nonwhite: Number of obs = 526
Ho: proportion(nonwhite) = .16
[z statistics & p-values for the two-sided & one-sided alternatives omitted in transcript]
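The arithmetic behind prtest is a one-sample z test using the normal approximation. A Python sketch; the counts below are hypothetical, not the slides' actual sample:

```python
import math

# One-sample test of a proportion against a benchmark p0, mirroring the
# logic of Stata's 'prtest' (normal approximation to the binomial).

def prop_ztest(successes, n, p0):
    """Return the z statistic & two-sided p-value for H0: proportion = p0."""
    phat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)          # SE under the null
    z = (phat - p0) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# e.g., a hypothetical 274 of 526 respondents female, benchmark .52:
z, p = prop_ztest(274, 526, 0.52)
print(round(z, 3), round(p, 3))
```

A z near zero and a large p-value would mean the sample proportion is consistent with the population benchmark.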
Let’s get back to our regression model, first by exploring the proposed new variables.
. grmeanby female, su(wage)
. grmeanby nonwhite, su(wage)
. grmeanby smsa, su(wage)
. tab female, su(wage)
[summary of average hourly earnings (mean, std. dev., & frequency) by female omitted in transcript]
The sample’s average wage disparities are pronounced for females versus males, & notable but less pronounced for nonwhites versus whites & for urban (smsa) versus rural. We should examine the wage distribution for each of these categorical, binary variables. Let’s illustrate this for females versus males:
. bys female: su wage
[summaries of wage (Obs, Mean, Std. Dev., Min, Max) for female = 0 & female = 1 omitted in transcript]
. gr box wage, over(female, total) marker(1, mlabel(id))
. table female, contents(mean wage med wage sd wage min wage max wage)
[table output omitted in transcript]
. ttest wage, by(female) unequal
Two-sample t test with unequal variances, using Satterthwaite’s degrees of freedom.
Ho: mean(0) - mean(1) = diff = 0
[t statistics & p-values for the two-sided & one-sided alternatives omitted in transcript]
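Welch's unequal-variance t statistic and the Satterthwaite degrees of freedom can be computed directly from group summaries. A sketch; the wage summaries below are invented for illustration:

```python
import math

# Two-sample t test with unequal variances (Stata: 'ttest ..., unequal'),
# computed from group means, variances, & sizes. Inputs are hypothetical.

def welch_t(mean0, var0, n0, mean1, var1, n1):
    """Return Welch's t statistic & Satterthwaite degrees of freedom."""
    a, b = var0 / n0, var1 / n1                # per-group squared SEs
    t = (mean0 - mean1) / math.sqrt(a + b)
    df = (a + b) ** 2 / (a ** 2 / (n0 - 1) + b ** 2 / (n1 - 1))
    return t, df

# Invented summaries: males (group 0) vs. females (group 1)
t, df = welch_t(7.10, 16.0, 274, 4.59, 7.0, 252)
print(round(t, 2), round(df, 1))
```

The Satterthwaite df falls between min(n0, n1) - 1 and n0 + n1 - 2, adjusting the reference t distribution for the unequal group variances.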
Let’s re-estimate the model, without nonwhite.
. reg wage educ tenure educXtenure educ2 tenure2 female smsa
[regression output omitted in transcript: Number of obs = 526; F(7, 518); coefficients for educ, tenure, educXtenure, educ2, tenure2, female, smsa, & _cons]
There are no notable changes in the coefficients. Let’s conduct a nested model test:
Let’s say we find evidence that the regression slope for wage on female versus male may not be the same in urban versus rural areas. That is, there may be a statistically significant femaleXsmsa interaction. Let’s find out:
Normally, we create the interaction variable femaleXsmsa & write the model as follows:
. reg wage educ tenure educXtenure educ2 tenure2 female smsa femaleXsmsa
Stata, however, lets us do the work on the fly:
. xi:reg wage educ tenure educXtenure educ2 tenure2 i.female*i.smsa
i.female _Ifemale_0-1 (naturally coded; _Ifemale_0 omitted)
i.smsa _Ismsa_0-1 (naturally coded; _Ismsa_0 omitted)
i.fem~e*i.smsa _IfemXsms_#_# (coded as above)
[regression output omitted in transcript: Number of obs = 526; F(8, 517); coefficients for educ, tenure, educXtenure, educ2, tenure2, _Ifemale_1, _Ismsa_1, _IfemXsms_1_1, & _cons]
We fail to reject the null hypothesis for femaleXsmsa.
Next we hypothesize that average hourly wage varies by economic sector. So let’s add to the model a series of dummy variables for the economic sectors, the comparison sector being manufacturing. We hypothesize that the regression slope is the same for each sector but the y-intercept varies:
. reg wage educ tenure educXtenure educ2 tenure2 female smsa construc ndurman trcommpu trade services profserv
[regression output omitted in transcript: Number of obs = 526; F(13, 512)]
Economic sectors are compared to manufacturing.
Testing the nested model: we test not the individual significance but the joint significance of the dummy-variable series:
. testparm construc-profserv
( 1) construc = 0
( 2) ndurman = 0
( 3) trcommpu = 0
( 4) trade = 0
( 5) services = 0
( 6) profserv = 0
F( 6, 512) = 5.31
[Prob > F omitted in transcript]
‘testparm’ (test parameters) allows us to enter the first dummy variable in the series, a dash, & the last dummy variable in the series. ‘test’ requires that each dummy variable in the series be entered.
The model, then, has greatly improved: Adj R² has reached .455, & the other, more important fit indicators look fine. But is the slope coefficient for wage really the same for females & males? Let’s test the assumption of equal slopes.
We have to estimate a new model, this time interacting the dummy variable female with each of the other explanatory variables. Why choose ‘female’ as the variable to interact with the others?
We could create an interaction variable corresponding to female’s interaction with each explanatory variable: femaleXeduc, femaleXtenure, femaleXprofserv, etc. Better: create an interaction variable only for those main-effect variables that you have good reason to expect vary by gender. Either way, Stata again allows us to do the work on the fly, but be sure you know how to write such a model formally.
Note: we could have used this approach to create educXtenure, but for pedagogical reasons we created that variable the formal way. For the sake of it, we’ll do a ‘full interaction’ model:
. xi:reg wage i.female*educ i.female*tenure i.female*educXtenure i.female*educ2 i.female*tenure2 i.female*smsa i.female*construc i.female*ndurman i.female*trcommpu i.female*trade i.female*services i.female*profserv
Our conclusion? We reject the null hypothesis: there indeed is statistically significant evidence of unequal wage slopes for females vs. males with regard to average hourly wage. Substantive meaning?
Key lesson: we should test the baseline notions of linearity & uniform slopes, & when necessary revise the model accordingly. But we don’t have to do a ‘full interaction’ model. Don’t take linearity & uniform slopes for granted
Note: the econometric literature discusses the detection of significantly different slopes in terms of the Chow test. Stata’s joint ‘test’ procedure is equivalent to the Chow test (type ‘findit Chow Test’, which will lead you to Stata FAQs on the subject). See Wooldridge, Introductory Econometrics, & Stata’s online FAQs.
A colleague of yours inspects your statistical work & says “Nice try, but you goofed with regard to the outcome variable.” Where did we go wrong? Let’s take a look & see.
. histogram wage, norm plotr(c(navy))
Average hourly wage is highly right skewed. How should we address this problem? Let’s begin by using some helpful Stata tools— qladder & ladder.
For qladder & ladder, the null hypothesis is that each displayed, transformed distribution is normal. The alternative hypothesis is that it isn’t normal. So, in ladder & qladder, we want to fail to reject the null hypothesis to obtain an effective normalizing transformation.
We’ll opt for a log transformation, which is the most basic way of linearizing a highly right-skewed distribution:
. gen lwage=ln(wage)
. su wage lwage
Note: log(wage) & ln(wage) are equivalent.
. histogram wage, norm plotr(c(navy))
. histogram lwage, norm plotr(c(navy))
Recall that log transformations require quantitative, ratio variables with positive values— plus not ‘too many’ zero values (see Wooldridge) & ideally a ratio between lowest & highest values of at least 10. Are the results of the log transformation satisfactory? Why, or why not?
How do the models fit (again recalling our discussion of comparing non-nested models)? Interpretation? Quantitative explanatory variables: every per unit change in x multiplies average hourly wage by …, on average, holding the other variables constant. Categorical explanatory variables: e.g., having a job in services multiplies average hourly wage by …, on average, holding the other variables constant.
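The multiplicative interpretation comes from exponentiating a log-wage coefficient: a one-unit change in x multiplies expected wage by exp(b). A one-step sketch with a hypothetical coefficient:

```python
import math

# Interpreting a coefficient from a log(wage) regression. The value of
# b_educ below is invented for illustration, not the slides' estimate.

b_educ = 0.08                        # hypothetical log-wage slope for education
multiplier = math.exp(b_educ)        # wage multiplier per extra year of educ
pct_change = 100 * (multiplier - 1)  # approximate percent change in wage

print(round(multiplier, 4), round(pct_change, 2))
```

Note that for small coefficients the multiplier is close to 1 + b (here about 1.083 versus 1.08), which is why log-model slopes are often read directly as approximate percentage effects.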
Now we need to use lincom to predict wage, or the direction & magnitude of its slope, at specific levels of key explanatory variables (or one SD above mean, mean, & one SD below mean). We’ll leave that for you to do.
Remember that: (1) log transformations require quantitative, ratio variables with positive values—plus not ‘too’ many zero values & ideally a ratio of at least 10 between the lowest & highest values; (2) quadratic (& similar) transformations require quantitative, interval or ratio variables, & not necessarily positive values; & Some summary points
(3) in any instance, matters of theory, interpretability, & common sense may lead us not to transform a variable, even though doing so may make sense on purely statistical grounds.
Furthermore, the assumptions of linearity & uniform slopes must be tested. Compare nested models, using AIC or BIC when the two models’ numbers of observations are unequal or when the numbers of explanatory variables are equal. Don’t make predictions beyond the range of the model’s x-values.
Don’t overfit a model to a data sample: most samples have their quirks, & overfitting a model to such quirks comes at the expense of the model’s generality. Thus, don’t go overboard with transforming variables & with trying to boost R².
And don’t forget median band regression—scatter mband y x, bands(#); & lowess smoothing—lowess y x, bandwidth(.#) (see also locpoly): these are helpful tools at all stages of y/x data analysis.
We’ll be doing more transformations as part of regression diagnostics (i.e. assessing & correcting violations of regression’s statistical assumptions & dealing with outliers that distort the results).
One final question: can we validly compare the magnitude of slope coefficients within a regression model? Usually not, because their metrics are typically different (e.g., years of education, score on a mental health scale, & quantitative versus categorical variables in regard to average hourly wage).
Standardized regression coefficients
We can, however, validly compare the magnitude of the slope coefficients if we standardize them, which expresses each slope in standard-deviation units. This is easy to do in Stata:
. reg y x1 x2 x3, beta
. reg wage educ exper tenure, beta
[regression output omitted in transcript: Number of obs = 526; F(3, 522); Adj R-squared = .30; Beta coefficients for educ, exper, & tenure]
For every standard deviation increase in education, wage increases by .45 standard deviations on average, holding the other variables constant. For every standard deviation increase in experience, wage increases by .08 standard deviations on average, holding the other variables constant. For every standard deviation increase in tenure, wage increases by .33 standard deviations on average, holding the other variables constant.
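The beta coefficients Stata reports are simply the raw slopes rescaled by sample standard deviations. A sketch of the arithmetic; the slope & standard deviations below are invented summary statistics, not the slides' values:

```python
# Standardized ('beta') coefficient: beta = b * sd_x / sd_y,
# i.e., the raw OLS slope re-expressed in standard-deviation units.
# All numbers here are hypothetical.

def standardized(b, sd_x, sd_y):
    """Raw slope b rescaled by the standard deviations of x & y."""
    return b * sd_x / sd_y

b_educ, sd_educ, sd_wage = 0.60, 2.77, 3.69   # invented summaries
beta_educ = standardized(b_educ, sd_educ, sd_wage)
print(round(beta_educ, 2))   # 0.45
```

Because the rescaling depends on each sample's standard deviations, the same raw slope yields different betas in different samples, which is exactly the comparability limitation discussed next.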
Standardizing regression coefficients can be quite useful & is commonly done, but it does have serious limitations: The standardized values depend on the particular sample: comparisons can’t be made across samples. The standardized values depend on which other variables are included in the equation: change one or more of the variables & the standardized values change.
Comparisons of standardized coefficients, then, can’t be made across regression equations. Standardization also makes no sense for interpreting individual categorical explanatory variables: there’s no standard-deviation change in, e.g., gender, ethnicity, or religion. So don’t try to interpret a categorical variable’s standardized coefficient substantively; rather, use standardization to gauge the relative effects of categorical as well as quantitative explanatory variables on the outcome variable.
And the interpretation of standardized interaction terms can be deceptive.
Here’s a convenient (downloadable) command to obtain the standardized coefficients. After estimating the model:
. listcoef, std
See Long/Freese for details.
Combining graph curves in Stata
Sometimes it may be helpful to overlay straight-line & curved fits on twoway scatterplots. Here are a couple of examples.
. scatter write math || lfit science math || fpfit science math
. scatter write math || lfit science math || mband science math
Summary What’s a theory? What does it involve? What’s a model? Interplay of theory & empirical research? Approaches to model building? Fundamental principles of model building?