
1 Hierarchical Linear Modeling and Related Methods
David A. Hofmann, Department of Management, Michigan State University
Expanded Tutorial, SIOP Annual Meeting, April 16, 2000

2 Hierarchical Data Structures
• Hierarchical nature of organizational data
– Individuals nested in work groups
– Work groups in departments
– Departments in organizations
– Organizations in environments
• Consequently, we have constructs that describe:
– Individuals
– Work groups
– Departments
– Organizations
– Environments

3 Hierarchical Data Structures
• Hierarchical nature of longitudinal data
– Time series nested within individuals
– Individuals
– Individuals nested in groups
• Consequently, we have constructs that describe:
– Individuals over time
– Individuals
– Work groups

4 Theoretical Paradigms
• Meso Paradigm (House et al., 1995; Tosi, 1992):
– Micro OB
– Macro OB
– Call for shifting focus:
» Contextual variables into Micro theories
» Behavioral variables into Macro theories
• Longitudinal Paradigm (Nesselroade, 1991):
– Intraindividual change
– Interindividual differences in individual change

5 Some Substantive Questions
• Kidwell et al. (1997), Journal of Management
– Dependent: Organizational citizenship behavior
– Individual: Job satisfaction and organizational commitment
– Group: Work group cohesion
• Deadrick et al. (1997), Journal of Management
– Dependent: Employee performance
– Within individual: Performance over time (24 weeks)
– Between individual: Cognitive & psychomotor ability
• Question: Given variables at different levels of analysis, how do we go about investigating them?

6 Statistical & Methodological Options
• Aggregate level
– Discards potentially meaningful variance
– Ecological fallacies, aggregation bias, etc.
• Individual level
– Violates the independence assumption
– Complex error term not dealt with
– Tests of higher-unit effects are based on the number of lower-level units
• Hierarchical linear models
– Model variance at multiple levels
– Address independence issues
– Straightforward conceptualization of multilevel data

7 HLM Overview
• Two-stage approach to multilevel modeling
– Level 1: within-unit relationships for each unit
– Level 2: models variance in level-1 parameters (intercepts & slopes) with between-unit variables
Level 1: Y_ij = ß_0j + ß_1j X_ij + r_ij
Level 2: ß_0j = γ_00 + γ_01 (Group_j) + U_0j
         ß_1j = γ_10 + γ_11 (Group_j) + U_1j
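For reference, substituting the two level-2 equations into the level-1 equation yields the combined single-equation (mixed model) form shown below; the same substitution is worked out for the simpler helping example later in the deck, and it makes the complex error term explicit (LaTeX notation mirrors the slide's symbols):

Y_{ij} = \gamma_{00} + \gamma_{10} X_{ij} + \gamma_{01}\,\mathrm{Group}_j + \gamma_{11}\,\mathrm{Group}_j X_{ij} + \underbrace{U_{0j} + U_{1j} X_{ij} + r_{ij}}_{\text{complex error term}}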

8 [Diagram: Y_ij plotted against X_ij with a regression line estimated separately for each unit (Level 1); at Level 2, the variance in intercepts and the variance in slopes are modeled with between-unit variables.]

9 Some Substantive Questions: Applications of HLM
• Kidwell et al. (1997), Journal of Management
– Individual-level job satisfaction and organizational commitment are positively related to OCB
– Cohesion will be positively related to the OCB exhibited by employees, beyond that accounted for by satisfaction and commitment
– The relationships between commitment/satisfaction and OCB will be stronger in more cohesive groups
• Deadrick et al. (1997), Journal of Management
– Are there inter-individual differences in performance over time?
– Do individual differences in ability account for these inter-individual differences?

10 HLM Overview
Some preliminary definitions:
– Random coefficients/effects
» Coefficients/effects that are assumed to vary across units
– Common random coefficients/effects:
» Within-unit intercepts
» Within-unit slopes
» Level-2 residuals
– Fixed effects
» Effects that do not vary across units
– Common fixed effects:
» Level-2 intercept
» Level-2 slope

11 HLM Overview
• Estimates provided:
– Level-1 parameters (intercepts, slopes)
– Level-2 parameters (intercepts, slopes)**
– Variance of Level-1 residuals
– Variance of Level-2 residuals***
– Covariance of Level-2 residuals
• Statistical tests:
– t-tests for parameter estimates (Level-2, fixed effects)**
– Chi-square tests for variance components (Level-2, random effects)***

12 A set of example hypotheses: Answering them using HLM

13 HLM: A Simple Example
• Individual-level variables
– Helping behavior (DV)
– Individual mood (IV)
• Group-level variable
– Proximity of group members

14 HLM: A Simple Example
• Hypotheses
1. Mood is positively related to helping
2. Proximity is positively related to helping after controlling for mood
» On average, individuals who work in closer proximity are more likely to help; a group-level main effect for proximity after controlling for mood
3. Proximity moderates the mood-helping relationship
» The relationship between mood and helping behavior is stronger where group members are in closer proximity to one another

15 HLM: A Simple Example
• Necessary conditions
– Systematic within- and between-group variance in helping behavior
– Mean level-1 slope significantly different from zero (Hypothesis 1)
– Significant variance in level-1 intercepts (Hypothesis 2)
– Significant variance in level-1 slopes (Hypothesis 3)
– Variance in intercepts significantly related to proximity (Hypothesis 2)
– Variance in slopes significantly related to proximity (Hypothesis 3)

16 HLM: Hypothesis Testing
• One-way ANOVA: no Level-1 or Level-2 predictors (null model)
Level 1: Helping_ij = ß_0j + r_ij
Level 2: ß_0j = γ_00 + U_0j
• where:
ß_0j = mean helping for group j
γ_00 = grand mean helping
Var(r_ij) = σ² = within-group variance in helping
Var(U_0j) = τ_00 = between-group variance in helping
Var(Helping_ij) = Var(U_0j + r_ij) = τ_00 + σ²
ICC = τ_00 / (τ_00 + σ²)
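A minimal sketch of this null model in SAS PROC MIXED, assuming a hypothetical data set HELP with the variables HELPING and GROUPID (note that COVTEST gives Wald Z tests of the variance components rather than HLM's chi-square tests):

proc mixed data=help covtest noclprint method=reml;
   class groupid;
   model helping = / solution;       /* intercept-only level-1 model; prints gamma_00 */
   random intercept / sub=groupid;   /* between-group intercept variance tau_00 */
run;

/* ICC = tau_00 / (tau_00 + sigma^2), computed from the Intercept and
   Residual rows of the Covariance Parameter Estimates table. */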

17 HLM: Hypothesis Testing
• Random coefficient regression model
– Add mood to the Level-1 model (no Level-2 predictors)
Level 1: Helping_ij = ß_0j + ß_1j (Mood) + r_ij
Level 2: ß_0j = γ_00 + U_0j
         ß_1j = γ_10 + U_1j
• where:
γ_00 = mean (pooled) intercept (t-test)
γ_10 = mean (pooled) slope (t-test; Hypothesis 1)
Var(r_ij) = Level-1 residual variance (R², Hyp. 1)
Var(U_0j) = variance in intercepts (related to Hyp. 2)
Var(U_1j) = variance in slopes (related to Hyp. 3)
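Continuing the PROC MIXED sketch (same hypothetical HELP data set, now with the level-1 predictor MOOD), the random coefficient regression model could be specified as:

proc mixed data=help covtest noclprint method=reml;
   class groupid;
   model helping = mood / solution ddfm=bw;       /* gamma_00 and gamma_10 with t-tests */
   random intercept mood / sub=groupid type=un;   /* Var(U_0j), Var(U_1j), and their covariance */
run;

Whether MOOD enters raw, grand-mean centered, or group-mean centered is the centering decision discussed later in the deck.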

18 HLM: Hypothesis Testing
• Intercepts-as-outcomes: model the Level-2 intercept (Hyp. 2)
– Add proximity to the intercept model
Level 1: Helping_ij = ß_0j + ß_1j (Mood) + r_ij
Level 2: ß_0j = γ_00 + γ_01 (Proximity_j) + U_0j
         ß_1j = γ_10 + U_1j
• where:
γ_00 = Level-2 intercept (t-test)
γ_01 = Level-2 slope (t-test; Hypothesis 2)
γ_10 = mean (pooled) slope (t-test; Hypothesis 1)
Var(r_ij) = Level-1 residual variance
Var(U_0j) = residual intercept variance (R², Hyp. 2)
Var(U_1j) = variance in slopes (related to Hyp. 3)

19 HLM: Hypothesis Testing
• Slopes-as-outcomes: model the Level-2 slope (Hyp. 3)
– Add proximity to the slope model
Level 1: Helping_ij = ß_0j + ß_1j (Mood) + r_ij
Level 2: ß_0j = γ_00 + γ_01 (Proximity_j) + U_0j
         ß_1j = γ_10 + γ_11 (Proximity_j) + U_1j
• where:
γ_00 = Level-2 intercept (t-test)
γ_01 = Level-2 slope (t-test; Hypothesis 2)
γ_10 = Level-2 intercept (t-test)
γ_11 = Level-2 slope (t-test; Hypothesis 3)
Var(r_ij) = Level-1 residual variance
Var(U_0j) = residual intercept variance
Var(U_1j) = residual slope variance (R², Hyp. 3)
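A sketch of these last two steps in PROC MIXED: the group-level predictor enters the MODEL statement as a main effect (intercepts-as-outcomes) and as a cross-level product term (slopes-as-outcomes). It assumes the group-level PROXIMITY score is stored on every individual record of the hypothetical HELP data set:

proc mixed data=help covtest noclprint method=reml;
   class groupid;
   model helping = mood proximity mood*proximity / solution ddfm=bw;
      /* proximity      -> gamma_01 (Hypothesis 2)                      */
      /* mood*proximity -> gamma_11 (Hypothesis 3); drop this term for  */
      /*                   the intercepts-as-outcomes step              */
   random intercept mood / sub=groupid type=un;   /* residual intercept and slope variance */
run;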

20 Statistical Assumptions
• Linear models
• Level-1 predictors are independent of the level-1 residuals
• Level-2 random elements are multivariate normal, each with mean zero, variance τ_qq, and covariance τ_qq'
• Level-2 predictors are independent of the level-2 residuals
• Level-1 and level-2 errors are independent
• Each r_ij is independent and normally distributed with a mean of zero and variance σ² for every level-1 unit i within each level-2 unit j (i.e., constant variance in level-1 residuals across units)

21 Statistical Power
• Kreft (1996) summarized several studies
– .90 power to detect cross-level interactions with 30 groups of 30 individuals
– Trade-off:
» Large number of groups, fewer individuals within each
» Small number of groups, more individuals per group
• My experience
– Cross-level main effects: pretty robust
– Cross-level interactions: more difficult to detect
– Power is related to the within-unit standard errors and the between-group variance

22 Centering Decisions: Scaling of Level-1 Predictors (It’s important and confusing)

23 Centering Decisions
• Level-1 parameters are used as outcome variables at level 2
• Thus, one needs to understand the meaning of these parameters
• Intercept term: expected value of Y when X is zero
• Slope term: expected increase in Y for a unit increase in X
• Raw metric form: X equal to zero might not be meaningful

24 Centering Decisions
• 3 options
– Raw metric
– Grand mean
– Group mean
• Kreft et al. (1995): raw-metric and grand-mean centering are equivalent; group-mean centering is not equivalent
• Raw metric / grand mean centering
– intercept variance = adjusted between-group variance in Y
• Group mean centering
– intercept variance = between-group variance in Y
[Kreft, I.G.G., de Leeuw, J., & Aiken, L.S. (1995). The effect of different forms of centering in hierarchical linear models. Multivariate Behavioral Research, 30, 1-21.]
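A sketch of how the three scalings of a level-1 predictor could be constructed in SAS before fitting the models (hypothetical HELP data set with level-1 predictor MOOD; all variable names are illustrative):

proc sql noprint;
   /* overall (grand) mean of mood */
   select mean(mood) into :grandmean from help;
   /* group means of mood, remerged onto each individual record */
   create table help_c as
   select *, mean(mood) as mood_grpmean
   from help
   group by groupid;
quit;

data help_c;
   set help_c;
   mood_grand = mood - &grandmean;     /* grand-mean centered */
   mood_group = mood - mood_grpmean;   /* group-mean centered */
   /* raw metric: use mood as-is. With group-mean centering, adding    */
   /* mood_grpmean to the MODEL statement returns the between-group    */
   /* portion of mood to the level-2 intercept model.                  */
run;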

25 Centering Decisions
• An illustration:
– 15 groups / 10 observations per group
– Within-group variance: f(A, B, C, D)
– Between-group variable: G_j
» G_j = f(A_j, B_j)
» Thus, if the between-group variance in A & B (i.e., A_j & B_j) is accounted for, G_j should not significantly predict the outcome
– Run the model:
» Grand-mean centered
» Group-mean centered
» Group-mean centered with the group means added at level 2 ("Group +")

26 Centering Decisions: Grand Mean Centering

27 Centering Decisions: Group Mean Centering

28 Centering Decisions: Group Mean Centering with A, B, C, D Means in the Level-2 Model

29 Centering Decisions
• Centering decisions are also important when investigating cross-level interactions
• Consider the following model:
Level 1: Y_ij = ß_0j + ß_1j (X_grand) + r_ij
Level 2: ß_0j = γ_00 + U_0j
         ß_1j = γ_10
• Bryk & Raudenbush (1992) point out that γ_10 does not provide an unbiased estimate of the pooled within-group slope
– It actually represents a mixture of both the within- and between-group slopes
– Thus, you might not get an accurate picture of cross-level interactions

30 Centering Decisions
• Bryk & Raudenbush make the distinction between cross-level interactions and between-group interactions
– Cross-level: a group-level predictor of the level-1 slopes
– Between-group: two group-level predictors interacting to predict the level-2 intercept
• Only group-mean centering enables the investigation of both types of interaction
• Illustration
– Created two data sets
» Cross-level interaction, no between-group interaction
» Between-group interaction, no cross-level interaction

31 Centering Decisions

32 Centering Decisions: Theoretical Paradigms
• Incremental
– group adds incremental prediction over and above individual variables
– grand mean centering
– group mean centering with the means added to the level-2 intercept model
• Mediational
– individual perceptions mediate the relationship between contextual factors and individual outcomes
– grand mean centering
– group mean centering with the means added to the level-2 intercept model

33 Centering Decisions: Theoretical Paradigms
• Moderational
– a group-level variable moderates a level-1 relationship
– group mean centering provides a clean estimate of the within-group slope
– it separates the between-group interaction from the cross-level interaction
– Practical: if running grand-mean centered, check the final model group-mean centered (see the sketch below)
• Separate
– group mean centering produces separate within- and between-group structural models
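A hedged sketch of that group-mean-centered check in PROC MIXED, reusing the HELP_C data set constructed in the centering sketch above and reintroducing the group mean of mood at level 2 (all names hypothetical):

proc mixed data=help_c covtest noclprint method=reml;
   class groupid;
   model helping = mood_group mood_grpmean proximity
                   mood_group*proximity / solution ddfm=bw;
      /* mood_group*proximity isolates the cross-level interaction;   */
      /* mood_grpmean keeps the between-group mood effect in the      */
      /* level-2 intercept model                                      */
   random intercept mood_group / sub=groupid type=un;
run;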

34 Hierarchical Linear Models: Let’s take a look at the software

35 HLM versus OLS regression

36 HLM versus OLS
• Investigate the following model using OLS:
Helping_ij = ß_0 + ß_1 (Mood) + ß_2 (Prox.) + r_ij
• The HLM equivalent model (ß_1j is fixed across groups):
Level 1: Helping_ij = ß_0j + ß_1j (Mood) + r_ij
Level 2: ß_0j = γ_00 + γ_01 (Prox.) + U_0j
         ß_1j = γ_10
• Form a single equation from the two HLM equations:
Help_ij = [γ_00 + γ_01 (Prox.) + U_0j] + [γ_10] (Mood) + r_ij
        = γ_00 + γ_10 (Mood) + γ_01 (Prox.) + U_0j + r_ij
        = γ_00 + ß_1j (Mood) + γ_01 (Prox.) + [U_0j + r_ij]
• The compound error term [U_0j + r_ij] is what violates the OLS independence assumption
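To make the contrast concrete, a sketch of the two analyses side by side (hypothetical HELP data set as before). The OLS run treats [U_0j + r_ij] as one independent error term; the mixed-model run separates the two components while holding the mood slope fixed across groups:

/* OLS: single-level regression, independence assumed */
proc reg data=help;
   model helping = mood proximity;
run;

/* Mixed-model counterpart: random intercept only, mood slope fixed */
proc mixed data=help covtest noclprint method=reml;
   class groupid;
   model helping = mood proximity / solution ddfm=bw;
   random intercept / sub=groupid;   /* U_0j absorbs the group-level dependence */
run;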

37 HLM Estimation: A brief overview

38 HLM Estimation
• Types of effects estimated
– Level-2 fixed effects / Level-1 random effects
– Variance / covariance components
» Estimated via maximum likelihood using the EM algorithm
• Purposes of the HLM model
– Inferences about level-2 effects
– Estimating level-1 relationships for a particular unit
– Each purpose requires efficient estimates:
» level-2 effects => efficient estimates of the level-2 regression coefficients
» particular level-1 units => the most efficient estimate of the level-1 regression coefficients

39 HLM Estimation (fixed effects)
• General level-1 model (matrix form):
Y_j = X_j ß_j + r_j,   r_j ~ N(0, σ²I)
• The OLS estimator of ß_j is given by:
ß^_j = (X_j'X_j)^-1 X_j'Y_j
• The dispersion, or variance, of ß^_j is given by:
Var(ß^_j) = V_j = σ² (X_j'X_j)^-1
• which means:
ß^_j = ß_j + e_j,   e_j ~ N(0, V_j)

40 HLM Estimation (fixed effects)
• General model at level 2:
ß_j = W_j γ + u_j,   u_j ~ N(0, T)
• Substituting the equations yields a single combined model:
ß^_j = W_j γ + u_j + e_j
• where the dispersion of ß^_j given W_j is
Var(ß^_j) = Var(u_j + e_j) = T + V_j = Δ_j
• which equals parameter dispersion + error dispersion

41 HLM Estimation (fixed effects)
• The Generalized Least Squares (GLS) estimator for γ is:
γ^ = (Σ_j W_j' Δ_j^-1 W_j)^-1 Σ_j W_j' Δ_j^-1 ß^_j
• which is a standard OLS regression estimate except that each group's data are weighted by its precision matrix (Δ_j^-1).
• The dispersion of γ^ follows:
Var(γ^) = (Σ_j W_j' Δ_j^-1 W_j)^-1

42 The Reported “Reliability”
• The diagonal elements of T (e.g., τ_qq) and V_j (e.g., v_qqj) can be used to form a “reliability” index for each OLS level-1 coefficient:
reliability(ß^_qj) = τ_qq / (τ_qq + v_qqj)
• Because the sampling variance (v_qqj) of ß^_j differs across the j units, each level-2 unit has a unique reliability index. The overall reliability can be summarized by computing the average reliability across the J units:
reliability(ß^_q) = (1/J) Σ_j [ τ_qq / (τ_qq + v_qqj) ]

43 The Reported “Reliability”
[Diagram: the distribution of the OLS estimates ß^_qj, showing the between-group parameter variance τ_qq and the sampling variance v_qqj around each estimate.]
Between-group variance in the parameters is considered systematic, whereas the variance around each estimate is considered error. Thus, the reliability equals the ratio: true variance / (true + error).

44 HLM Estimation (random level-1 coefficients)
• Purpose: to obtain the most efficient estimates of the parameters for a particular level-1 unit
– Two estimates are available:
» the OLS estimate, ß^_j
» the predicted value from level 2, ß^^_j = W_j γ^, where γ^ is the GLS estimate described previously
– Obviously, if two estimates are available, the best estimate is likely to be some combination of the two.

45 HLM Estimation (random level-1 coefficients)
• A composite level-1 estimate:
ß*_j = Λ_j ß^_j + (I - Λ_j) W_j γ^
• where Λ_j = T (T + V_j)^-1
• which is the ratio of the parameter dispersion of ß_j relative to the dispersion of ß^_j (i.e., the ratio of “true” parameter variance to “observed” parameter variance).
• Thus, the composite level-1 estimate is a weighted combination of the level-1 and level-2 estimates, where each estimate is weighted proportionally to its reliability. This is the most efficient estimate of the level-1 coefficient for any given unit (lowest mean square error; Raudenbush, 1988).
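In practice these composite (empirical Bayes, or “shrunken”) level-1 estimates are requested from the software rather than computed by hand. For example, in SAS PROC MIXED the SOLUTION option on the RANDOM statement prints them (a sketch using the same hypothetical HELP data set):

proc mixed data=help covtest noclprint method=reml;
   class groupid;
   model helping = mood / solution ddfm=bw;
   random intercept mood / sub=groupid type=un solution;  /* EB estimates of U_0j and U_1j */
   ods output SolutionR=eb;   /* save the empirical Bayes deviations to a data set */
run;

/* The composite coefficient for group j is the fixed-effect estimate
   plus that group's empirical Bayes deviation. */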

46 Do You Really Need HLM? Alternatives for Estimating Hierarchical Models, Part I: Cross-Level Models

47 SAS: Proc Mixed
• SAS Proc Mixed will estimate these models
• Key components of the Proc Mixed command language
– Proc Mixed
» Class: the group identifier
» Model: the regression equation, including individual, group, and interaction terms (if applicable)
» Random: the specification of random effects (those allowed to vary across groups)

48 SAS: Proc Mixed
• Key components of the Proc Mixed command language
– Some options you might want to select
» noitprint (suppresses the iteration history; this option goes on the PROC MIXED statement itself, not on CLASS)
» Model:
– solution (prints the fixed-effect estimates; a separate solution option on the RANDOM statement prints the random-effect estimates)
– ddfm=bw (specifies the “between/within” method for computing denominator degrees of freedom for tests of fixed effects)
» Random:
– sub=id (how level-1 units are divided into level-2 units)
– type=un (specifies an unstructured variance-covariance matrix of the intercepts and slopes, i.e., allows these parameters to be determined by the data)
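Assembled into one program for the helping example (data set and variable names are hypothetical; option placement follows current PROC MIXED syntax):

proc mixed data=help noclprint noitprint covtest;   /* noitprint: suppress iteration history */
   class groupid;                                   /* group identifier */
   model helping = mood proximity mood*proximity
         / solution ddfm=bw;      /* solution: print fixed effects; ddfm=bw: between/within df */
   random intercept mood
         / sub=groupid            /* how level-1 units are divided into level-2 units */
           type=un                /* unstructured intercept/slope covariance matrix */
           solution;              /* print empirical Bayes estimates of the random effects */
run;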

49 SAS: Proc Mixed

50 SAS: Proc Mixed
• Key reference
– Singer, J. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics, 23, 323-355.
• Available on her homepage
– http://hugse1.harvard.edu/~faculty/singer/

51 Do You Really Need HLM? Alternatives for Estimating Hierarchical Models, Part II: Longitudinal Models

52 Latent Growth Curve Models
• Structural equation programs can be used to model
– Interindividual differences in intraindividual change
– Predictors of these change patterns
• How does it work?
– Analyze the covariance matrix of interrelationships among repeated measures of the outcome
– “Flip” the logic of factor analysis
» Typically the program estimates the factor loadings, and the factors are interpreted in relation to those loadings
» In these models, you fix all of the factor loadings and interpret the variance in the factors according to the loadings you specify

53 Latent Growth Curve Models
[Path diagram: latent Intercept and Slope factors with paths to Y1 through Y5; the Intercept loadings are fixed at 1 and the Slope loadings are fixed at 0, 1, 2, 3, 4.]
So what is this doing?

54 Latent Growth Curve Models
• Estimating a set of regression equations:
Y_t = (1)·Intercept + (λ_t)·Slope + ε_t
• which translates into
[Y1]   [1 0]                 [ε1]
[Y2]   [1 1]   [Intercept]   [ε2]
[Y3] = [1 2] × [Slope    ] + [ε3]
[Y4]   [1 3]                 [ε4]
[Y5]   [1 4]                 [ε5]

55 Latent Growth Curve Models
• Variance in the factors
– Individual differences in intercepts and slopes
– Why?
» In factor analysis, variance in a factor equals variability across persons on the latent construct
» You could create a factor score for each individual; variability in those factor scores conceptually represents the variance in the factor across persons
» The same applies here
– Fixing the factor loadings defines the factors as an intercept and a linear trend
– Variance in the factors therefore represents variance across persons in intercepts and slopes
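One way to see why the fixed loadings turn factor variances into intercept and slope variances is to write out the moments the model implies for the repeated measures (the ψ and θ notation below is assumed, not taken from the slides):

E[Y_t] = \mu_I + \lambda_t \,\mu_S
\operatorname{Var}(Y_t) = \psi_{II} + \lambda_t^{2}\,\psi_{SS} + 2\lambda_t\,\psi_{IS} + \theta_t
\operatorname{Cov}(Y_t, Y_{t'}) = \psi_{II} + \lambda_t \lambda_{t'}\,\psi_{SS} + (\lambda_t + \lambda_{t'})\,\psi_{IS}, \quad t \neq t'

with \lambda_t = 0, 1, 2, 3, 4, where \mu_I and \mu_S are the factor means, \psi_{II} and \psi_{SS} the intercept and slope variances, \psi_{IS} their covariance, and \theta_t the residual variance at occasion t.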

56 Latent Growth Curve Models
[Path diagram: the same Intercept and Slope factors for Y1 through Y5, now with an individual-level predictor of both factors added.]

57 Latent Growth Curve Models
• Key references
– McArdle, J.J., & Epstein, D. (1987). Latent growth curves within developmental structural equation models. Child Development, 58, 110-133.
– Muthen, B.O. (1991). Analysis of longitudinal data using latent variable models with varying parameters. In L.M. Collins & J.L. Horn (Eds.), Best Methods for the Analysis of Change (pp. 37-54).
– Ployhart, R.E., & Hakel, M.D. (1998). The substantive nature of performance variability: Predicting interindividual differences in intraindividual change. Personnel Psychology, 51, 859-901.
– Chan, D. (1998). The conceptualization and analysis of change over time: An integrative approach to incorporating longitudinal mean and covariance structures analysis (LMACS) and multiple indicator latent growth modeling (MLGM). Organizational Research Methods, 1, 421-483.

58 Questions: About Today or About Your Own Research

