Multilevel Models 2 Sociology 229A, Class 18


1 Multilevel Models 2 Sociology 229A, Class 18
Copyright © 2008 by Evan Schofer Do not copy or distribute without permission

2 Multilevel Data
Simple example: 2-level data, which can be shown as a diagram: Level 2 = classrooms (Class 1, Class 2, Class 3); Level 1 = students (S1, S2, S3, …) nested within each class.

3 Multilevel Data: Problems
When is multilevel data NOT a problem? Answer: If you can successfully control for potential sources of correlated error. Add controls to the OLS model for classroom, school, and state characteristics that would be sources of correlated error in each group. Ex: Teacher quality, class size, budget, etc. But: We often can't identify or measure all relevant sources of correlated error. Thus, we need to abandon simple OLS regression and try other approaches.
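A minimal sketch of the "add controls" strategy in Stata, using hypothetical variable names (achievement as the student-level outcome, plus classroom-level controls such as teacher quality, class size, and budget):

reg achievement ses teacherqual classsize budget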

4 Review: Multilevel Strategies
Problems of multilevel data: Non-independence; correlated error; standard errors are underestimated. Solutions (each has benefits and disadvantages): 1. OLS regression 2. Aggregation (between effects model) 3. Robust standard errors 4. Robust cluster standard errors 5. Dummy variables (fixed effects model) 6. Random effects models

5 Robust Standard Errors
Strategy #1: Improve our estimates of the standard errors Option 1: Robust Standard Errors reg y x1 x2 x3, robust The Huber / White / “Sandwich” estimator An alternative method of computing standard errors that is robust to a variety of assumption violations Provides accurate estimates in presence of heteroskedasticity Also, robust to model misspecification Note: Freedman’s criticism: What good are accurate SEs if coefficients are biased due to poor specification?

6 Robust Cluster Standard Errors
Option 2: Robust cluster standard errors. A modification of robust SEs to address clustering: reg y x1 x2 x3, cluster(groupid). Note: cluster implies robust (vs. regular SEs). It is easy to adapt robust standard errors to address clustering in data. Result: SE estimates typically increase, which is appropriate because non-independent cases aren't providing as much information as would a sample of independent cases.

7 Dummy Variables Another solution to correlated error within groups/clusters: Add dummy variables Include a dummy variable for each Level-2 group, to explicitly model variance in means A simple version of a “fixed effects” model (see below) Ex: Student achievement; data from 3 classes Level 1: students; Level 2: classroom Create dummy variables for each class Include all but one dummy variable in the model Or include all dummies and suppress the intercept

8 Dummy Variables What is the consequence of adding group dummy variables? A separate intercept is estimated for each group Correlated error is absorbed into intercept Groups won’t systematically fall above or below the regression line In fact, all “between group” variation (not just error) is absorbed into the intercept Thus, other variables are really just looking at within group effects This can be good or bad, depending on your goals.

9 Dummy Variables Note: You can create a set of dummy variables in Stata as follows: xi i.classid – creates dummy variables for each unique value of the variable "classid". Creates variables named _Iclassid_1, _Iclassid_2, etc. These dummies can be added to the analysis by specifying the wildcard: _Iclassid*. Ex: reg y x1 x2 x3 _Iclassid*, nocons. "nocons" removes the constant, allowing you to use a full set of dummies. Alternately, you could drop one dummy and keep the constant.

10 Example: Pro-environmental values
Dummy variable model:
. reg supportenv age male dmar demp educ incomerel ses _Icountry*
[Stata output not reproduced in the transcript: F(32, 27774), R-squared, and the coefficient table for age, male, dmar, demp, educ, incomerel, ses, the country dummies (_Icountry_32, _Icountry_50, _Icountry_70, …, _Icountry_891), and the constant.]

11 Dummy Variables
Benefits of the dummy variable approach: It is simple. Just estimate a different intercept for each group. Sometimes the dummy interpretations can be of interest.
Weaknesses: Cumbersome if you have many groups. Uses up lots of degrees of freedom (not parsimonious). Makes it hard to look at other kinds of group-level variables: non-varying group variables are collinear with the dummies. Can be problematic if your main interest is to study effects of variables across groups, because the dummies purge that variation and the focus shifts to within-group variation. If you don't have much within-group variation, there isn't much left to analyze.

12 Dummy Variables Note: Dummy variables are a simple example of a “fixed effects” model (FEM) Effect of each group is modeled as a “fixed effect” rather than a random variable Also can be thought of as the “within-group” estimator Looks purely at variation within groups Stata can do a Fixed Effects Model without the effort of using all the dummy variables Simply request the “fixed effects” estimator in xtreg.

13 Fixed Effects Model (FEM)
For i cases within j groups, aj is a separate intercept estimated for each group. The model is equivalent to looking solely at within-group variation: X-bar-sub-j is the mean of X for group j, etc. The model is "within group" because all variables are centered around the mean of each group.
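The equations on this slide are not included in the transcript; a standard reconstruction of the fixed effects model and its within-group (demeaned) form is:

$$Y_{ij} = a_j + \beta X_{ij} + e_{ij}$$
$$(Y_{ij} - \bar{Y}_j) = \beta (X_{ij} - \bar{X}_j) + (e_{ij} - \bar{e}_j)$$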

14 Fixed Effects Model (FEM)
. xtreg supportenv age male dmar demp educ incomerel ses, i(country) fe
[Stata output not reproduced: "Fixed-effects (within) regression"; group variable: country; R-sq within/between/overall; F(7, 27774); corr(u_i, Xb); the coefficient table for age, male, dmar, demp, educ, incomerel, ses, and the constant; sigma_u, sigma_e, and rho (fraction of variance due to u_i); and the F test that all u_i = 0, F(25, 27774).]
Identical to dummy variable model!

15 ANOVA: A Digression Suppose you wish to model variable Y for j groups (clusters). Ex: Wages for different racial groups. Definitions: The grand mean (Y-bar) is the mean of Y across all cases. The group mean (Y-bar-sub-j) is the mean of Y within a particular sub-group of the population.
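In symbols (a reconstruction; the slide's own formulas are not in the transcript):

$$\bar{Y} = \frac{1}{N}\sum_{j}\sum_{i} Y_{ij}, \qquad \bar{Y}_j = \frac{1}{N_j}\sum_{i} Y_{ij}$$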

16 ANOVA: Concepts & Definitions
Y is the dependent variable. We are looking to see if Y depends upon the particular group a person is in. The effect of a group is the difference between a group's mean and the grand mean. The effect is denoted by alpha (a). If Y-bar = $8.75 and Y-bar for Group 1 = $8.90, then a for Group 1 = $0.15. The effect of being in group j is like a deviation, but for a group.
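A standard reconstruction of the group-effect formula shown on the slide:

$$\alpha_j = \bar{Y}_j - \bar{Y}$$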

17 ANOVA: Concepts & Definitions
ANOVA is based on partitioning deviation. We initially calculated deviation as the distance of a point from the grand mean. But you can also think of deviation from a group mean (called "e"), for any case i in group j.
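The two deviations the slide refers to, reconstructed in standard notation:

$$\text{deviation from the grand mean: } Y_{ij} - \bar{Y}$$
$$\text{deviation from the group mean: } e_{ij} = Y_{ij} - \bar{Y}_j$$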

18 ANOVA: Concepts & Definitions
The location of any case is determined by: The grand mean, m, common to all cases; the group "effect" a, common to members of the group, which is the distance between the group mean and the grand mean ("between group" variation); and the within-group deviation (e), called "error", which is the distance from the group mean to a case's value.

19 The ANOVA Model This is the basis for a formal model:
For any population with mean m, comprised of J subgroups with Nj cases in each group, each with a group effect a, the location of any individual can be expressed as shown below. Yij refers to the value of case i in group j; eij refers to the "error" (i.e., deviation from the group mean) for case i in group j.
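A reconstruction of the ANOVA model equation referenced on the slide:

$$Y_{ij} = \mu + \alpha_j + e_{ij}$$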

20 Sum of Squared Deviation
We are most interested in two parts of the model: the group effects, aj (deviation of the group from the grand mean), and the individual case error, eij (deviation of the individual from the group mean). Each is a deviation that can be summed across cases. Remember, we square deviations when summing; otherwise, they add up to zero. Remember, variance is just averaged squared deviation.

21 Sum of Squared Deviation
The total deviation can be partitioned into aj and eij components. That is, aj + eij = total deviation.
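A reconstruction of the partition shown on the slide:

$$Y_{ij} - \bar{Y} = (\bar{Y}_j - \bar{Y}) + (Y_{ij} - \bar{Y}_j) = \alpha_j + e_{ij}$$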

22 Sum of Squared Deviation
The total deviation can be partitioned into aj and eij components. The total variance (SStotal) is made up of: aj, the between group variance (SSbetween), and eij, the within group variance (SSwithin). SStotal = SSbetween + SSwithin.
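In sum-of-squares form (standard reconstruction):

$$\sum_j \sum_i (Y_{ij} - \bar{Y})^2 = \sum_j N_j(\bar{Y}_j - \bar{Y})^2 + \sum_j \sum_i (Y_{ij} - \bar{Y}_j)^2$$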

23 ANOVA & Fixed Effects Note that the ANOVA model is similar to the fixed effects model But FEM also includes a bX term to model linear trend ANOVA Fixed Effects Model In fact, if you don’t specify any X variables, they are pretty much the same

24 Within Group & Between Group Models
Group-effect dummy variables in a regression model create a specific estimate of the group effect for all cases. The slopes (Bs) and error are then based on the remaining "within group" variation. We could do the opposite: ignore within-group variation and just look at differences between groups. Stata's xtreg command can do this, too. This is essentially just modeling group means!

25 Between Group Model
. xtreg supportenv age male dmar demp educ incomerel ses, i(country) be
[Stata output not reproduced: "Between regression (regression on group means)"; group variable: country; R-sq within/between/overall; F(7, 19); sd(u_i + avg(e_i.)); and the coefficient table for age, male, dmar, demp, educ, incomerel, ses, and the constant.]
Note: Results are identical to the aggregated analysis… Note that N is reduced to 27.

26 Fixed vs. Random Effects
Dummy variables produce a “fixed” estimate of the intercept for each group But, models don’t need to be based on fixed effects Example: The error term (ei) We could estimate a fixed value for all cases This would use up lots of degrees of freedom – even more than using group dummies In fact, we would use up ALL degrees of freedom Stata output would simply report back the raw data (expressed as deviations from the constant) Instead, we model e as a random variable We assume it is normal, with standard deviation sigma.

27 Random Effects A simple random intercept model
Notation from Rabe-Hesketh & Skrondal 2005, p. 4-5. Random Intercept Model (see the equation below), where b is the main intercept; zeta (z) is a random effect for each group, allowing each of j groups to have its own intercept, assumed to be independent and normally distributed; and error (e) is the error term for each case, also assumed to be independent and normally distributed. Note: Other texts refer to random intercepts as uj or nj.
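A reconstruction of the random intercept model in this notation:

$$y_{ij} = \beta + \zeta_j + \epsilon_{ij}$$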

28 Random Effects Issue: The dummy variable approach (ANOVA, FEM) treats group differences as a fixed effect Alternatively, we can treat it as a random effect Don’t estimate values for each case, but model it This requires making assumptions e.g., that group differences are normally distributed with a standard deviation that can be estimated from data.

29 Linear Random Intercepts Model
The random intercept idea can be applied to linear regression, often called a "random effects" model. The result is similar to the FEM, BUT: the FEM looks only at within-group effects; aggregate models ("between effects") look only across groups; random effects models yield a weighted average of the between and within group effects. The RE model exploits both between and within information, and thus can be more efficient than FEM and aggregate models, IF the distributional assumptions are correct.

30 Linear Random Intercepts Model
. xtreg supportenv age male dmar demp educ incomerel ses, i(country) re
[Stata output not reproduced: "Random-effects GLS regression"; group variable: country; R-sq within/between/overall; Wald chi2(7); corr(u_i, X) = 0 (assumed); the coefficient table for age, male, dmar, demp, educ, incomerel, ses, and the constant; plus sigma_u, sigma_e, and rho (fraction of variance due to u_i).]
Assumes normal uj, uncorrelated with X vars. sigma_u = SD of u (intercepts); sigma_e = SD of e; rho = intra-class correlation.

31 Linear Random Intercepts Model
Notes: The model can also be estimated with maximum likelihood estimation (MLE). Stata: xtreg y x1 x2 x3, i(groupid) mle. Versus "re", which specifies the generalized least squares (GLS) estimator. Results tend to be similar, but the MLE results include a formal test of whether intercepts really vary across groups; a significant p-value indicates that intercepts vary.
. xtreg supportenv age male dmar demp educ incomerel ses, i(country) mle
[Stata output not reproduced: "Random-effects ML regression"; group variable: country; model results omitted; /sigma_u, /sigma_e, and rho.]
Likelihood-ratio test of sigma_u = 0: Prob >= chibar2 = 0.000

32 Choosing Models Which model is best?
There is much discussion (e.g., Halaby 2004). Fixed effects are consistent under a wide range of circumstances (consistent: estimates approach the true parameter values as N grows very large), but they are less efficient than random effects; in cases with low within-group variation (big between-group variation) and small sample size, results can be very poor. Random effects are more efficient, but run into problems if the specification is poor, especially if X variables correlate with the random group effects, usually due to omitted variables.

33 Hausman Specification Test
Hausman Specification Test: A tool to help evaluate fit of fixed vs. random effects Logic: Both fixed & random effects models are consistent if models are properly specified However, some model violations cause random effects models to be inconsistent Ex: if X variables are correlated to random error In short: Models should give the same results… If not, random effects may be biased If results are similar, use the most efficient model: random effects If results diverge, odds are that the random effects model is biased. In that case use fixed effects…

34 Hausman Specification Test
Strategy: Estimate both fixed & random effects models, saving the estimates each time, then invoke the Hausman test. Ex:
xtreg var1 var2 var3, i(groupid) fe
estimates store fixed
xtreg var1 var2 var3, i(groupid) re
estimates store random
hausman fixed random

35 Hausman Specification Test
Example: Environmental attitudes, fe vs. re
. hausman fixed random
[Stata output not reproduced: the coefficient comparison table showing (b) fixed, (B) random, the difference (b-B), and sqrt(diag(V_b-V_B)) for age, male, dmar, demp, educ, incomerel, ses.]
b = consistent under Ho and Ha; obtained from xtreg. B = inconsistent under Ha, efficient under Ho; obtained from xtreg.
Test: Ho: difference in coefficients not systematic; chi2(7) = (b-B)'[(V_b-V_B)^(-1)](b-B)
This is a direct comparison of coefficients. A non-significant p-value indicates that the models yield similar results.

36 Within & Between Effects
What is the relationship between within-group effects (FEM) and between-effects (BEM)? Usually they are similar Ex: Student skills & test performance Within any classroom, skilled students do best on tests Between classrooms, classes with more skilled students have higher mean test scores.

37 Within & Between Effects
Issue: Between and within effects can differ! Ex: Effects of wealth on attitudes toward welfare At the individual level (within group) Wealthier people are conservative, don’t support welfare At the country level (between groups): Wealthier countries (high aggregate mean) tend to have pro-welfare attitudes (ex: Scandinavia) Result: Wealth has opposite between vs within effects! Issue: Such dynamics often result from omitted level-1 variables (omitted variable bias) Ex: If we control for individual “political conservatism”, effects may be consistent at both levels…

38 Within & Between Effects
You can estimate BOTH within- and between-group effects in a single model. Strategy: Split a variable (e.g., SES) into two new variables: 1. the group mean of SES; 2. the within-group deviation from the group mean of SES (often called "group mean centering"). Then put both variables into a random effects model; the model will estimate separate coefficients for between vs. within effects. Ex:
egen meanvar1 = mean(var1), by(groupid)
gen withinvar1 = var1 - meanvar1
Include the mean (aggregate) and within variable in the model.
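For example, a hedged sketch of the final step, assuming a hypothetical outcome variable y: the coefficient on meanvar1 is the between effect and the coefficient on withinvar1 is the within effect.

xtreg y meanvar1 withinvar1, i(groupid) re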

39 Within & Between Effects
Example: Pro-environmental attitudes
. xtreg supportenv meanage withinage male dmar demp educ incomerel ses, i(country) mle
[Stata output not reproduced: "Random-effects ML regression"; group variable: country; LR chi2(8); log likelihood; and the coefficient table for meanage, withinage, male, dmar, demp, educ, incomerel, ses, and the constant.]
Between & within effects are opposite: older countries are MORE environmental, but older people are LESS. Omitted variables? Wealthy European countries with strong green parties have older populations!

40 Within & Between Effects / Centering
Multilevel models & “centering” variables Grand mean centering: computing variables as deviations from overall mean Often done to X variables Has effect that baseline constant in model reflects mean of all cases Useful for interpretation Group mean centering: computing variables as deviation from group mean Useful for decomposing within vs. between effects Often in conjunction with aggregate group mean vars.

41 Generalizing: Random Coefficients
The linear random intercept model allows random variation in the intercept (mean) across groups. But the same idea can be applied to other coefficients; that is, slope coefficients can ALSO be random. This is the Random Coefficient Model (see the equation below), where zeta-1 is a random intercept component and zeta-2 is a random slope component.
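A reconstruction of the random coefficient model in the Rabe-Hesketh & Skrondal notation used above:

$$y_{ij} = \beta_1 + \beta_2 x_{ij} + \zeta_{1j} + \zeta_{2j} x_{ij} + \epsilon_{ij} = (\beta_1 + \zeta_{1j}) + (\beta_2 + \zeta_{2j}) x_{ij} + \epsilon_{ij}$$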

42 Linear Random Coefficient Model
Rabe-Hesketh & Skrondal 2004, p. 63 Both intercepts and slopes vary randomly across j groups

43 Random Coefficients Summary
Some things to remember: Dummy variables allow fixed estimates of intercepts across groups. Interactions allow fixed estimates of slopes across groups. Random coefficients allow intercepts and/or slopes to vary randomly across groups. The model does not directly estimate those group-specific effects, just as a model does not estimate a coefficient for each case's residual. BUT, random components can be predicted after the fact (just as you can compute residuals for the random error).

44 STATA Notes: xtreg, xtmixed
xtreg – allows estimation of between, within (fixed), and random intercept models:
xtreg y x1 x2 x3, i(groupid) fe – fixed (within) model
xtreg y x1 x2 x3, i(groupid) be – between model
xtreg y x1 x2 x3, i(groupid) re – random intercept (GLS)
xtreg y x1 x2 x3, i(groupid) mle – random intercept (MLE)
xtmixed – allows random slopes & coefficients. "Mixed" models refer to models that have both fixed and random components.
Syntax: xtmixed [depvar] [fixed equation] || [random eq], options
Ex: xtmixed y x1 x2 x3 || groupid: x2
A random intercept is assumed; a random coefficient for x2 is specified.

45 STATA Notes: xtreg, xtmixed
Random intercepts: xtreg y x1 x2 x3, i(groupid) mle is equivalent to xtmixed y x1 x2 x3 || groupid: , mle. xtmixed assumes a random intercept, even if no other random effects are specified after "groupid:". But we can add random coefficients for all Xs: xtmixed y x1 x2 x3 || groupid: x1 x2 x3, mle. Note: xtmixed can do a lot… but GLLAMM can do even more! GLLAMM ("generalized linear latent and mixed models") must be downloaded into Stata. Type "search gllamm" and follow the instructions to install.

46 Random intercepts: xtmixed
Example: Pro-environmental attitudes
. xtmixed supportenv age male dmar demp educ incomerel ses || country: , mle
[Stata output not reproduced: "Mixed-effects ML regression"; group variable: country; Wald chi2(7); log likelihood; the coefficient table for age, male, dmar, demp, educ, incomerel, ses, and the constant; remainder of output cut off.]
Note: xtmixed yields identical results to xtreg, mle.

47 Random intercepts: xtmixed
Ex: Pro-environmental attitudes (cont'd)
[Fixed-effects coefficient table not reproduced: age, male, dmar, demp, educ, incomerel, ses, constant.]
[Random-effects parameters table not reproduced: country: Identity, sd(_cons); sd(Residual); LR test vs. linear regression, chibar2(01).]
xtmixed output puts all random effects below the main coefficients. Here, they are "_cons" (constant) for groups defined by "country", plus the residual (e). A non-zero SD indicates that intercepts vary.

48 Random Coefficients: xtmixed
Ex: Pro-environmental attitudes (cont'd)
. xtmixed supportenv age male dmar demp educ incomerel ses || country: educ, mle
[Stata output not reproduced: the coefficient table for age, male, dmar, demp, educ, incomerel, ses, and the constant; random-effects parameters, country: Independent, sd(educ), sd(_cons); sd(Residual); LR test vs. linear regression, chi2(2).]
Here, we have allowed the slope of educ to vary randomly across countries. Educ (slope) varies, too!

49 Random Coefficients: xtmixed
What are random coefficients doing? Let's look at results from a simplified model with only a random slope and intercept for education. The model fits a different slope and intercept for each group!

50 Random Coefficients Why bother with random coefficients?
1. A solution for clustering (non-independence) Usually people just use random intercepts, but slopes may be an issue also 2. You can create a better-fitting model If slopes & intercepts vary, a random coefficient model may fit better Assuming distributional assumptions are met Model fit compared to OLS can be tested…. 3. Better predictions Attention to group-specific random effects can yield better predictions (e.g., slopes) for each group Rather than just looking at “average” slope for all groups 4. Helps us think about multilevel data Ex: cross-level interactions (we’ll discuss soon!)

51 Multilevel Model Notation
So far, we have expressed random effects in a single equation: the Random Coefficient Model. However, it is common to separate the fixed and random parts into multiple equations: a basic OLS-style level-1 model, with the intercept and slope each specified separately as having a random component, in an intercept equation and a slope equation (see the reconstruction below).
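A standard reconstruction of the separate-equation (HLM-style) notation the slide describes:

$$\text{Level 1: } Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + e_{ij}$$
$$\text{Intercept equation: } \beta_{0j} = \gamma_{00} + u_{0j}$$
$$\text{Slope equation: } \beta_{1j} = \gamma_{10} + u_{1j}$$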

52 Multilevel Model Notation
The “separate equation” formulation is no different from what we did before… But it is a vivid & clear way to present your models All random components are obvious because they are stated in separate equations NOTE: Some software (e.g., HLM) requires this Rules: 1. Specify an OLS model, just like normal 2. Consider which OLS coefficients should have a random component These could be the intercept or any X variable (slope) 3. Specify an additional formula for each random coefficient.

