
1
E. Kevin Kelloway, Ph.D. Canada Research Chair in Occupational Health Psychology

2
Day 1: Familiarization with the Mplus environment – varieties of regression
Day 2: Introduction to SEM – path modeling, CFA, and latent variable analysis
Day 3: Advanced techniques – longitudinal data, multilevel SEM, etc.

3
0900 – 1000  Introduction: The Mplus Environment
1000 – 1015  Break
1015 – 1100  Using Mplus: Regression
1100 – 1200  Variations on a theme: Categorical, Censored and Count Outcomes
1200 – 1300  Break
1300 – 1400  Multilevel models: Some theory
1400 – 1415  Break
1415 – 1530  Estimating multilevel models in Mplus

4
A statistical modeling program that allows for a wide variety of models and estimation techniques:
- Explicitly designed to do everything
- Techniques for handling all kinds of data (continuous, categorical, zero-inflated, etc.)
- Allows for multilevel and complex data
- Allows the integration of all of these techniques

5
Observed variables:
- x: background variables (no model structure)
- y: continuous and censored outcome variables
- u: categorical (dichotomous, ordinal, nominal) and count outcome variables
Latent variables:
- f: continuous variables – interactions among fs
- c: categorical variables – multiple cs

6
BASE MODEL – does regression and most versions of SEM
Mixture add-on – adds mixture analysis (using categorical latent variables)
Multilevel add-on – adds the potential for multilevel analysis
Recommendation: the combo platter.

7
Batch processor: text commands (no graphical interface) and keywords. Commands can come in any order in the file.
Three main tasks:
1. GET THE DATA into Mplus and DESCRIBE IT
2. ESTIMATE THE MODEL of interest
3. REQUEST THE DESIRED OUTPUT

8
10 commands:
TITLE – provides a title
DATA (required) – describes the dataset
VARIABLE (required) – names/identifies variables
DEFINE – computes/transforms variables
ANALYSIS – technical details of the analysis
MODEL – the model to be estimated
OUTPUT – specifies the output
SAVEDATA – saves the data
PLOT – graphical output
MONTECARLO – Monte Carlo analysis
Comments are denoted by ! and can appear anywhere in the file.
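A minimal input file using a few of these commands might look like the following sketch (the file and variable names are hypothetical):

```
TITLE:    A minimal Mplus input file        ! optional
DATA:     FILE IS example.dat;              ! required: where the data live
VARIABLE: NAMES ARE x1 x2 y;                ! required: names the columns
OUTPUT:   SAMPSTAT;                         ! request sample statistics
```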

9
"is", "are", and "=" can generally be used interchangeably:
Variable: Names is Bob;
Variable: Names = Bob;
Variable: Names are Bob;
A dash denotes a range:
Variable: Names = Bob1 – Bob5;
A colon (:) follows each command name; a semicolon (;) ends each line.

10
Step 1: Move your data into a .dat file (ASCII) – SPSS or Excel will do this.
Step 2: Create the command file with DATA and VARIABLE statements.
Step 3 (optional): I always ask for the sample statistics so that I can check the accuracy of data reading.
OPEN and RUN Day1 Example 1.inp

11
TITLE: This is an example of how to read data into Mplus from an ASCII file
DATA: file is workshop1.dat;
VARIABLE: NAMES are sex age hours location TL PL GHQ Injury;
USEVARIABLES = tl – injury;
OUTPUT: Sampstat;
Exercise: include the demographic variables in the analysis.

12
The output repeats the input instructions – check for the proper N, number of variables (k), and number of groups.
It describes the analysis – check for accuracy.
It then reports the results:
- Fit statistics
- Parameter estimates
- Requested information (sample statistics, standardized parameters, etc.)
NOTE: Not all output is relevant to your analysis.

13
N2Mplus – a freeware program that will read SPSS or Excel files:
- Creates the data file
- Writes the Mplus syntax, which can be pasted into Mplus
- Limit of 300 variables
- Watch variable name lengths (SPSS allows more characters than Mplus does)

14
General goal: to predict one variable (the DV or criterion) from a set of other variables (IVs or predictors). IVs may be (and usually are) intercorrelated. Minimize the sum of squared prediction errors (least squares) – maximize R.

15
The correlation is r = Σ(ZxZy)/N.
The line of best fit (OLS regression line) is found by y = mx + b, where
b = Y intercept = Ȳ – mX̄
m = slope = r(SDy/SDx)
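In display notation, the formulas on this slide are:

```latex
r = \frac{\sum z_x z_y}{N}, \qquad \hat{y} = m x + b,
\qquad m = r\,\frac{SD_y}{SD_x}, \qquad b = \bar{Y} - m\bar{X}
```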

16
An extension of bivariate regression to the case of multiple predictors. Predictors may be (and usually are) intercorrelated, so we need to partial variance to determine the UNIQUE effect of each X on Y.

17
To specify a simple linear regression you simply add a MODEL command to the file:
Model: DV on IV1 IV2 IV3 … IVX;
You also want to request some specific forms of output to get the usual regression information. Useful options are:
SAMPSTAT – sample statistics for the variables
STANDARDIZED – standardized parameters
Savedata: Save = Cooks Mahalanobis;
Exercise: what predicts GHQ?
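Putting the pieces together, a complete regression input file for the workshop data might look like the following sketch (variable names follow the earlier reading-data example; the output file name for the diagnostics is an assumption):

```
TITLE:    Regression predicting GHQ;
DATA:     FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location tl pl ghq injury;
          USEVARIABLES = tl pl ghq;
MODEL:    ghq ON tl pl;                 ! regress GHQ on the two leadership scales
OUTPUT:   SAMPSTAT STANDARDIZED;        ! sample stats + standardized estimates
SAVEDATA: FILE IS diagnostics.dat;      ! hypothetical output file name
          SAVE = COOKS MAHALANOBIS;     ! casewise influence diagnostics
```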

19
Typically used with a dichotomous outcome (also ordered logistic and probit models).
Similar to regression – generates an overall test of goodness of fit, parameters and tests of parameters, and odds ratios.
When the split is 50/50, discriminant and logistic analyses should give the same result; when the split varies, logistic is preferred.

20
Likelihood chi-square – baseline-to-model comparisons
Parameter test (B/SE)
Odds ratio – the increase/decrease in the odds of being in one outcome category when the predictor increases by 1 unit (exp(B))

21
Specify the outcome as categorical (can be either binary or ordered).
The default estimator for categorical outcomes gives you a probit analysis; changing to ML gives you a logistic regression.
RUN Day1Example3.inp
To dichotomize the outcome (from a multi-category or continuous measure):
define: cut injury (1);
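As a sketch, an input file along the lines of this example might look like the following (file and variable names follow the earlier slides; the choice of predictors is an assumption):

```
TITLE:    Logistic regression predicting injury;
DATA:     FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location tl pl ghq injury;
          USEVARIABLES = tl pl injury;
          CATEGORICAL = injury;          ! declare the outcome as categorical
DEFINE:   CUT injury (1);                ! dichotomize at 1
ANALYSIS: ESTIMATOR = ML;                ! ML gives a logistic regression
MODEL:    injury ON tl pl;
OUTPUT:   SAMPSTAT STANDARDIZED;
```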

24
An Example
Data from a study of metro transit bus drivers (n = 174).
Data on workplace violence (extent to which one has been hit/kicked, attacked with a weapon, or had something thrown at you); 1 = not at all, 4 = 3 or more times.
Data cleaning suggests a highly skewed and kurtotic distribution:
violence (valid N = 170): Min = 1.00, Max = 3.00, Mean = 1.2353, SD = .37623, Skewness = 1.900 (SE = .186), Kurtosis = 3.677 (SE = .370)

25
Scores pile up at 1 (Not at all)

26
More Estimators
Negative binomial: this distribution can be thought of as the number of trials required to observe k successes and is appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, the corresponding case is not used in the analysis. The fixed value of the negative binomial distribution's ancillary parameter can be any number greater than or equal to 0; when the ancillary parameter is set to 0, using this distribution is equivalent to using the Poisson distribution.
Normal: appropriate for scale variables whose values take a symmetric, bell-shaped distribution about a central (mean) value. The dependent variable must be numeric.
Poisson: this distribution can be thought of as the number of occurrences of an event of interest in a fixed period of time and is appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, the corresponding case is not used in the analysis.

28
Some Observations on Count Data
Counts are discrete, not continuous; counts are generated by a Poisson distribution (a discrete probability distribution).
Poisson distributions are typically problematic because they:
- are skewed (by definition non-normal)
- are non-negative (cannot have negative predicted values)
- have non-constant variance – variance increases as the mean increases
BUT… Poisson regressions also make some very restrictive assumptions about the data (i.e., the underlying rate of the DV is the same for all individuals in the population, or we have measured every possible influence on the DV).

29
The Negative Binomial Distribution
Allows for more variance than the Poisson model (less restrictive assumptions).
Can fit a Poisson model and calculate dispersion (deviance/df). Dispersion close to 1 indicates no problem; if there is overdispersion, use the negative binomial.
Poisson, but not negative binomial, is available in Mplus.

30
Zero Inflated Models
Zero-inflated Poisson regression (ZIP regression)
Zero-inflated negative binomial regression (ZINB regression)
Assumes two underlying processes:
- predict whether one scores 0 or not 0
- predict the count for those scoring > 0

31
Run the example to obtain a Poisson regression – the outcome is specified as a count variable.
To obtain a ZIP regression, run Day1 Example5.
Note that one can specify different models for occurrence and frequency.
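As a sketch (using the bus-driver violence variable from the earlier slide; the file name and predictors are hypothetical), a ZIP model in Mplus declares the count outcome with an inflation part, which lets you model occurrence and frequency separately:

```
TITLE:    Zero-inflated Poisson regression;
DATA:     FILE IS drivers.dat;            ! hypothetical file name
VARIABLE: NAMES ARE violence hours tenure;
          COUNT = violence (i);           ! (i) requests zero inflation
MODEL:    violence ON hours tenure;       ! count part: frequency of violence
          violence#1 ON hours tenure;     ! inflation part: being a structural zero
OUTPUT:   SAMPSTAT;
```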

33
An Example
What is the correlation between X and Y?
x: Mean = 8.0000, SD = 4.42396, N = 15
y: Mean = 8.0000, SD = 4.42396, N = 15
Pearson correlation between x and y = .912 (p < .001, two-tailed, listwise N = 15)

34
Group 1: r = 0.0, Mean = 3, N = 5
Group 2: r = 0.0, Mean = 8, N = 5
Group 3: r = 0.0, Mean = 13, N = 5

35
Introduction
Multilevel data occur when responses are grouped (nested) within one or more higher-level units, e.g.:
- employees nested within teams/groups
- longitudinal data – observations nested within individuals
This creates a series of problems that may not be accounted for in standard techniques (e.g., regression, SEM, etc.).

36
Some Problems with Multilevel Data
Individuals within each group are more alike than individuals from different groups (variance is distorted) – a violation of the assumption of independence.
We may want to predict level 1 responses from level 2 characteristics (e.g., does company size predict individual job satisfaction?).
If we analyze at the lowest level only, we underestimate variance and hence standard errors, leading to inflated Type I error – we find effects where they don't exist.
Aggregation to the highest level may distort the variables of interest (or may not be appropriate).

37
Two Paradoxes
Simpson's paradox – completely erroneous conclusions may be drawn if grouped data, drawn from heterogeneous populations, are collapsed and analyzed as if drawn from a single population.
The ecological fallacy – the mistake of assuming that the relationship between variables at the aggregated (higher) level will be the same at the disaggregated (lower) level.

38
What are multi-level models?
Essentially an extension of a regression model: Y = mx + b + error.
Multilevel models allow for variation in the regression parameters (intercepts (b) and slopes (m)) across the groups comprising your sample.
They also allow us to predict that variation – to ask why groups might vary in intercepts or slopes.
Intercept differences imply mean differences across groups; slope differences indicate different relationships (e.g., correlations) across groups.
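Written out in conventional multilevel notation (with i indexing individuals and j indexing groups), the group-varying version of the regression equation is:

```latex
\text{Level 1: } Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + e_{ij}
\text{Level 2: } \beta_{0j} = \gamma_{00} + u_{0j}, \qquad \beta_{1j} = \gamma_{10} + u_{1j}
```

The intercept β0j and slope β1j each get their own group-level residual (u0j, u1j), which is what "random intercepts" and "random slopes" refer to.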

39
The Multilevel Model
We are attempting to explain (partition) variance in the DV: why don't we all score the same on a given variable?
The simplest explanation is error – an individual's score is the grand mean + error.
If employees are in groups, then the variance of the level 1 units has at least 2 components: the variance of individuals around the group mean (within-group variance) and the variance of the group means around the grand mean (between-group variance).
This is known as the intercepts-only, variance components, or unconditional model – a baseline that incorporates no predictors.

40
The Multilevel Model (cont'd)
We can introduce predictors at level 1, level 2, or both to further explain variance.
We can allow the effects of level 1 predictors to vary across groups (random slopes).
We can examine interactions within and across levels, and incorporate quadratic terms, etc.

42
File Handling: Aggregation
To create level 2 observations we often need to aggregate variables to the higher level and merge the aggregated data with our level 1 data.
To aggregate you need to specify [a] the variables to be aggregated, [b] the method of aggregation (sum, mean, etc.), and [c] the break variable (the definition of level 2).
SPSS allows you to aggregate and save group-level data to the current file using the AGGREGATE command; Mplus allows you to do this within the Mplus run.

43
Notes on Aggregation
If you choose to aggregate, there should be some empirical support (i.e., evidence of similar responses within groups). Some typical measures are:
ICC(1) – the intraclass correlation: the extent to which variance is attributable to group differences. From ANOVA: (MSb – MSw) / (MSb + (c – 1)MSw), where c = average group size
ICC(2) – the reliability of the group means: (MSb – MSw) / MSb
Rwg (multiple variants) – indices of within-group agreement
Mplus calculates the ICC for random intercept models.
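In display form, the two ANOVA-based indices are:

```latex
ICC(1) = \frac{MS_b - MS_w}{MS_b + (c - 1)\,MS_w}, \qquad
ICC(2) = \frac{MS_b - MS_w}{MS_b}
```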

44
Centering Predictors
Centering a variable helps us to interpret the effects of predictors. In the simplest sense, centering involves subtracting the mean from each score (resulting in a distribution of deviation scores with a mean of 0).
Centering (among other things) helps with convergence by imposing a common scale.
GRAND MEAN centering – involves subtracting the sample mean from each score.
GROUP MEAN centering – involves subtracting the group mean from each score – must be done manually.

45
Centering (cont'd)
Grand mean – each score is measured as a deviation from the grand mean. The intercept is the score of an individual who is at the mean of all predictors: "the average person."
Group mean – each score is measured as a deviation from the group mean. The intercept is the score of an individual who is at the mean of all predictors in the group: "the average person in group X."
Grand mean centering is the same transformation for all cases – for fixed main effects and overall fit it will give the same results as raw data. Group mean centering is different for each group – different results.
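Formally, with X̄ the sample mean and X̄j the mean of group j, the two transformations are:

```latex
\text{Grand mean: } X_{ij}^{*} = X_{ij} - \bar{X}, \qquad
\text{Group mean: } X_{ij}^{*} = X_{ij} - \bar{X}_{j}
```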

46
Centering (cont'd)
Grand mean – helps model fitting; aids interpretation (a meaningful 0); may reduce collinearity when testing interactions, between model parameters, or squared effects; may reduce meaning if raw scores actually mean something.
Group mean – helps model fitting; can remove collinearity if you are including both group (aggregate) and individual measures of the same construct in the model (the aggregate data explain between-group variance and the individual-level data explain within-group variance).

47
A General Recommendation
Grand mean centering may be appropriate when the underlying model is either incremental (group effects add to individual-level effects) or mediational (group effects exert influence through individuals).
Group mean centering may be more appropriate when testing cross-level interactions.
Hofmann & Gavin (1998), Journal of Management.

48
Power and Sample Size: how many subjects? (How long is a piece of string?)
Calculations are complex, depending on intraclass correlations, sample size, effect size, etc.
In general, power at level 1 increases with the number of observations and at level 2 with the number of groups.
Hox (2002) recommends 30 observations in each of 30 groups; Heck & Thomas (2000) suggest 20 groups with 30 observations in each; others suggest that even k = 50 is too small.
Practical constraints likely rule. It is better to have a large number of groups with fewer individuals in each group than a small number of groups with large group sizes.

49
Convergence
Occasionally (about 50% of the time) the program will not converge on a solution and will report a partial solution (i.e., not all parameters) – the single most frustrating aspect of multilevel models.
In my experience, lack of convergence is a direct function of sample size (small samples = convergence failures).
The easiest fix is to ensure that this is not a scaling issue – i.e., that all variables are measured on roughly the same metric (standardize).

51
A Plan of Analysis
1. Ensure data are structured/arranged properly (aggregated, centered, etc.) – most of this can be done in Mplus.
2. Run a null model – the null model estimates a grand-mean-only model and provides a baseline for comparison.
3. Run the unconditional model (grouping but no predictors) – assess ICC(1) and whether varying intercepts is appropriate. A low ICC(1) leads one to question the importance of a multilevel model (although this can be controversial).
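A sketch of step 3 in Mplus (the data set name is hypothetical; the variable names follow the workshop example used later):

```
TITLE:    Unconditional (variance components) model;
DATA:     FILE IS teams.dat;            ! hypothetical file name
VARIABLE: NAMES ARE ghq tfl team;
          USEVARIABLES = ghq;
          CLUSTER = team;               ! the level 2 identifier
ANALYSIS: TYPE = TWOLEVEL;              ! random intercept only; output reports the ICC
```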

52
A Plan of Analysis (cont'd)
4. Incorporate level 1 predictors. Assess change in fit, level 1 variance, and level 2 variance – we are starting to move into conditional models. This is equivalent to modeling our data as a series of parallel lines (one for each group): slopes are the same but intercepts are allowed to vary.
5. Allow the slopes to vary. Assess fit, change in variance, etc. We can now also estimate the covariance between intercept and slope effects, which may be of interest.
6. Incorporate level 2 predictors – these explain between-group, but not individual-level, variance.

53
Testing Models: –2 Log Likelihood
A global test of model adequacy is given by the –2 log likelihood statistic, also known as the model deviance.
We can examine the change in deviance as models are made more complex.
There is no equivalent difference test under REML (restricted/residual maximum likelihood) estimation.

54
Testing Models: Percentage of Variance
There is no direct equivalent to R-squared because there are multiple portions of variance.
We can focus on explaining variance at either the group or the individual level (i.e., reducing the residual).
One useful approach is to calculate the variance explained at each step of the model: (variance before the predictor is added – variance after) / variance before.
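As a formula, the step-by-step variance-explained index described above is:

```latex
R^2_{\text{pseudo}} = \frac{\sigma^2_{\text{before}} - \sigma^2_{\text{after}}}{\sigma^2_{\text{before}}}
```

computed separately for the within-group and between-group variance components.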

55
Testing Models: Parameter Tests
Statistical tests of parameters are analogous to the tests of regression (B) coefficients in regression: each tests the null hypothesis that the parameter is 0.

56
Implementing the Analysis
Run Day1Example6 to read in the data. Measures include GHQ, transformational leadership, and a team identifier. Sample: total N = 851 in 31 locations.
Start by estimating the variance components (random intercept only) model:
- On the VARIABLE statement, specify usevariables = ghq team;
- Specify cluster = team;
- Add an analysis command: Analysis: Type = twolevel;

57
Implementing the Analysis (cont'd)
Hypotheses:
1. GHQ varies across teams.
2. GHQ is predicted by leadership.
3. The effect of leadership on stress varies by location.

58
Run Day1Example6.inp – the variance components model (a random intercept only model).
Then add in the within-group predictor TFL:
- Include tfl on the usevariables line
- Specify the centering: centering = grandmean(tfl);
- Specify the within-group model:
Model:
%Within%
GHQ on tfl;
Maybe try the between-group model:
Model:
%between%
GHQ on tfl;

59
In Mplus twolevel analyses, variables are specified as either WITHIN (can only be modeled in the within-group model) or BETWEEN (can only be modeled in the between-group model).
Unspecified variables will be used appropriately at both levels (if used in the between-group model, Mplus will calculate the aggregate score on the variable).

60
Add RANDOM to the TYPE statement: Type = Twolevel random;
Specify the random slope in the within model as S | Y on X, where S is the name of the slope, Y is the DV, and X is the predictor, e.g.:
%Within%
S | ghq on tfl;
In the between model, allow the random slope to correlate with the random intercept:
GHQ with S;
And predict the random slope:
S on tfl;
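Assembled into one input file, the random-slope steps above might look like the following sketch (the data set name is hypothetical; variable names follow the workshop example):

```
TITLE:    Random slope of GHQ on TFL;
DATA:     FILE IS teams.dat;            ! hypothetical file name
VARIABLE: NAMES ARE ghq tfl team;
          USEVARIABLES = ghq tfl;
          CLUSTER = team;
          CENTERING = GRANDMEAN(tfl);   ! grand mean center the predictor
ANALYSIS: TYPE = TWOLEVEL RANDOM;
MODEL:    %WITHIN%
          s | ghq ON tfl;               ! s = the random slope
          %BETWEEN%
          ghq WITH s;                   ! intercept-slope covariance
          s ON tfl;                     ! aggregated tfl predicts the slope
OUTPUT:   SAMPSTAT;
```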

61
Can use any of the techniques previously discussed: specify outcomes as binary or ordered (multilevel logistic), multilevel Poisson, etc.
Can incorporate multilevel regressions into path or SEM analyses (more about this later).
