
1 Proactive Monte Carlo Analysis in Structural Equation Modeling
James H. Steiger, Vanderbilt University

2 Some Unhappy Scenarios

A Confirmatory Factor Analysis
– You fit a 3-factor model to 9 variables with N = 150
– You obtain a Heywood case

Comparing Two Correlation Matrices
– You wish to test whether two population matrices are equivalent, using ML estimation
– You obtain an unexpected rejection

3 Some Unhappy Scenarios

Fitting a Trait-State Model
– You fit the Kenny-Zautra TSE model to 4 waves of panel data with N = 200
– You obtain a variance estimate of zero

Writing a Program Manual
– You include an example analysis in your widely distributed computer manual
– The analysis remains in your manuals for more than a decade
– The analysis is fundamentally flawed, and gives incorrect results

4 Some Common Elements

Models of covariance or correlation structure
Potential problems could have been identified before data were ever gathered, using “proactive Monte Carlo analysis”

5 Confirmatory Factor Analysis

Variable    Factor 1   Factor 2   Factor 3
VIS_PERC       X
CUBES          X
LOZENGES       X
PAR_COMP                  X
SEN_COMP                  X
WRD_MNG                   X
ADDITION                             X
CNT_DOT                              X
ST_CURVE                             X

6 Confirmatory Factor Analysis

[Table: Variable | Factor 1 | Factor 2 | Factor 3 | Unique Var., for VIS_PERC, CUBES, LOZENGES, PAR_COMP, SEN_COMP, WRD_MNG, ADDITION, CNT_DOT, ST_CURVE]

7 Confirmatory Factor Analysis

[Table: Variable | Factor 1 | Factor 2 | Factor 3 | Unique Var., for VIS_PERC, CUBES, LOZENGES, PAR_COMP, SEN_COMP, WRD_MNG, ADDITION, CNT_DOT, ST_CURVE]

8 Proactive Monte Carlo Analysis

Take the model you anticipate fitting
Insert reasonable parameter values
Generate a population covariance or correlation matrix, and fit this matrix to assess identification problems (a code sketch of this step follows the path specification on the next slide)
Examine Monte Carlo performance over a range of sample sizes that you are considering
Assess convergence problems, frequency of improper estimates, Type I error, accuracy of fit indices
Preliminary investigations may take only a few hours

9 Confirmatory Factor Analysis

(Speed)-1{.3}->[VIS_PERC]
(Speed)-2{.4}->[CUBES]
(Speed)-3{.5}->[LOZENGES]
(Verbal)-4{.6}->[PAR_COMP]
(Verbal)-5{.3}->[SEN_COMP]
(Verbal)-6{.4}->[WRD_MNG]
(Visual)-7{.5}->[ADDITION]
(Visual)-8{.6}->[CNT_DOT]
(Visual)-9{.3}->[ST_CURVE]
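The first steps of the procedure on slide 8 can be set up in a few lines of general-purpose code before any SEM run. A minimal sketch in Python/NumPy (not the software used in the talk), taking the loadings from the path specification above; the factor correlations of .3 and the unit-diagonal scaling of the unique variances are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)

# Population loading matrix, using the values from the path specification above.
# Rows: VIS_PERC, CUBES, LOZENGES, PAR_COMP, SEN_COMP, WRD_MNG, ADDITION, CNT_DOT, ST_CURVE
Lambda = np.zeros((9, 3))
Lambda[0:3, 0] = [.3, .4, .5]
Lambda[3:6, 1] = [.6, .3, .4]
Lambda[6:9, 2] = [.5, .6, .3]

# Factor correlations: an illustrative assumption (not given on the slide)
Phi = np.full((3, 3), 0.3)
np.fill_diagonal(Phi, 1.0)

# Choose unique variances so the population matrix has a unit diagonal
common = Lambda @ Phi @ Lambda.T
Sigma = common + np.diag(1.0 - np.diag(common))   # population correlation matrix

# Fitting Sigma itself in your SEM program checks identification.
# Simulating sample matrices at candidate sample sizes supports the Monte Carlo step:
def sample_corr(Sigma, n, rng):
    X = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=n)
    return np.corrcoef(X, rowvar=False)

R150 = sample_corr(Sigma, 150, rng)   # one simulated sample correlation matrix at N = 150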

10 Confirmatory Factor Analysis

11

12

13 (Speed)-1{.53}->[VIS_PERC]
(Speed)-2{.54}->[CUBES]
(Speed)-3{.55}->[LOZENGES]
(Verbal)-4{.6}->[PAR_COMP]
(Verbal)-5{.3}->[SEN_COMP]
(Verbal)-6{.4}->[WRD_MNG]
(Visual)-7{.5}->[ADDITION]
(Visual)-8{.6}->[CNT_DOT]
(Visual)-9{.3}->[ST_CURVE]

14 Confirmatory Factor Analysis

15

16

17 [Table: Variable | Factor 1 | Factor 2 | Factor 3 | Unique Var., for VIS_PERC, CUBES, LOZENGES, PAR_COMP, SEN_COMP, WRD_MNG, ADDITION, CNT_DOT, ST_CURVE]

18 Proactive Monte Carlo Analysis

19

20

21

22 Percentage of Heywood Cases

N      Loading .4   Loading .6   Loading
 50                    30%          0%
100       78%          11%          0%
150       62%           3%          0%
300       21%           0%
500        1%           0%
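A table like this can be produced by wrapping the simulation sketch shown after slide 9 in a loop over sample sizes and loading levels and recording how often the fitted model returns an inadmissible unique variance. Only the bookkeeping is shown below; fit_cfa and build_sigma are hypothetical wrappers around whatever SEM program actually does the fitting and the population-matrix construction:

import numpy as np

def heywood_rate(Sigma, n, reps, fit_cfa, rng):
    # Proportion of replications in which the fitted CFA returns a unique-variance
    # estimate at or below zero (a Heywood case); programs that bound estimates at
    # zero will show these as boundary values instead.
    count = 0
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=n)
        R = np.corrcoef(X, rowvar=False)
        uniquenesses = fit_cfa(R, n)      # hypothetical wrapper around your SEM program
        count += np.any(uniquenesses <= 0)
    return count / reps

# Condition grid mirroring the table (build_sigma is a hypothetical helper that
# constructs the population matrix for a given loading level, as on slide 9):
# for n in (50, 100, 150, 300, 500):
#     for loading in (.4, .6):
#         print(n, loading, heywood_rate(build_sigma(loading), n, 500, fit_cfa, rng))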

23 Standard Errors

24

25

26 Distribution of Estimates

27 Standard Errors (N = 300)

28

29 Distribution of Estimates

30 Correlational Pattern Hypotheses

“Pattern Hypothesis”
– A statistical hypothesis that specifies that parameters or groups of parameters are equal to each other, and/or to specified numerical values

Advantages of Pattern Hypotheses
– Because they involve only equality, they are invariant under nonlinear monotonic transformations (e.g., the Fisher transform)

31 Correlational Pattern Hypotheses

Caution! You cannot use the Fisher transform to construct confidence intervals for differences of correlations
– For an example of this error, see Glass and Stanley (1970, p )
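The distinction on these two slides can be seen numerically. Because the Fisher transform z(r) = arctanh(r) is strictly increasing, an equality of correlations holds exactly when the equality of their transforms holds, but a difference of transforms is not the transform of the difference. A small illustration with arbitrary values:

import numpy as np

z = np.arctanh              # Fisher transform
r1, r2 = 0.60, 0.30         # arbitrary example correlations

# Equality (a pattern hypothesis) survives the transform:
#   r1 == r2  if and only if  z(r1) == z(r2)
# Differences do not map across the transform:
print(z(r1) - z(r2))        # 0.3836...
print(z(r1 - r2))           # 0.3095...  -- a different quantity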

32 Comparing Two Correlation Matrices in Two Independent Samples

Jennrich (1970)
– Method of Maximum Likelihood (ML)
– Method of Generalized Least Squares (GLS)
– Example: two 11x11 matrices, sample sizes of 40 and 89

33 Comparing Two Correlation Matrices in Two Independent Samples

ML Approach
– Minimizes the ML discrepancy function
– Can be programmed with standard SEM software packages that have multi-sample capability

34 Comparing Two Correlation Matrices in Two Independent Samples

Generalized Least Squares Approach
– Minimizes the GLS discrepancy function
– SEM programs will iterate the solution
– Freeware (Steiger, 2005, in press) will perform a direct analytic solution
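For the ML approach, a sketch of the discrepancy function that a multi-sample run would minimize over a common correlation matrix P is given below. This is a generic statement of the multi-group ML discrepancy, not the code behind the freeware mentioned above, and the weighting and chi-square scaling conventions vary across SEM programs:

import numpy as np

def ml_discrepancy(P, R_list, n_list):
    # Multi-group ML discrepancy for the hypothesis that every group shares the
    # population correlation matrix P.  R_list: sample correlation matrices;
    # n_list: sample sizes.  Groups are weighted by their share of the total N
    # (one common convention; others use n - 1).
    p = P.shape[0]
    P_inv = np.linalg.inv(P)
    _, logdet_P = np.linalg.slogdet(P)
    n_total = sum(n_list)
    F = 0.0
    for R, n in zip(R_list, n_list):
        _, logdet_R = np.linalg.slogdet(R)
        F += (n / n_total) * (logdet_P - logdet_R + np.trace(R @ P_inv) - p)
    return F

# Minimizing F over the off-diagonal elements of P and multiplying the minimum by
# a function of the total sample size (conventions differ) yields the chi-square
# statistic reported on the following slides.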

35 Monte Carlo Results – Chi-Square Statistic

            Mean    S.D.
Observed
Expected     66     11.5
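The Expected row follows from the reference chi-square distribution: a chi-square variate with df degrees of freedom has mean df and standard deviation sqrt(2·df), so an expected mean of 66 pairs with a standard deviation of about 11.5:

# Mean and standard deviation of a chi-square reference distribution
df = 66
print(df, round((2 * df) ** 0.5, 1))   # 66 11.5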

36 Monte Carlo Results – Distribution of p-Values

37 Monte Carlo Results – Distribution of Chi-Square Statistics

38 Monte Carlo Results (ML) – Empirical vs. Nominal Type I Error Rate

[Table: Nominal α vs. Empirical α]

39 Monte Carlo Results (ML) – Empirical vs. Nominal Type I Error Rate, N = 250 per Group

[Table: Nominal α vs. Empirical α]

40 Monte Carlo Results – Chi-Square Statistic, N = 250 per Group

            Mean    S.D.
Observed
Expected     66     11.5

41 Kenny-Zautra TSE Model

[Path diagram of the trait-state-error (TSE) model: observed variables Y1 … YJ load on a common trait factor T and on occasion factors O1 … OJ, each variable has its own residual term, and the occasion factors are linked by autoregressive paths.]

42 Likelihood of Improper Values in the TSE Model
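A proactive check of the trait-state scenario from slide 3 can start the same way as the CFA example: build the implied covariance matrix from assumed parameter values, simulate samples at N = 200, and fit each one in your SEM program. A minimal sketch assuming unit loadings and a stationary autoregressive state process; the trait, state, and error variances and the autoregressive coefficient are illustrative values, not taken from the talk:

import numpy as np

def tse_sigma(waves, var_trait, var_state, var_error, beta):
    # Implied covariance matrix of a trait-state-error model with unit loadings
    # and a stationary AR(1) state process:
    #   Var(Y_j)      = var_trait + var_state + var_error
    #   Cov(Y_j, Y_k) = var_trait + beta**|j - k| * var_state     (j != k)
    lag = np.abs(np.subtract.outer(np.arange(waves), np.arange(waves)))
    Sigma = var_trait + beta ** lag * var_state
    Sigma[np.diag_indices(waves)] += var_error
    return Sigma

Sigma = tse_sigma(waves=4, var_trait=0.3, var_state=0.4, var_error=0.3, beta=0.5)

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(4), Sigma, size=200)   # one simulated sample, N = 200
S = np.cov(X, rowvar=False)
# Fit S with the TSE model in your SEM program and tally how often variance
# estimates collapse to zero across replications.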

43 Constraint Interaction Steiger, J.H. (2002). When constraints interact: A caution about reference variables, identification constraints, and scale dependencies in structural equation modeling. Psychological Methods, 7,

44 Constraint Interaction

45

46

47

48 Constraint Interaction – Model without ULI Constraints (Constrained Estimation)

(XI1)-1->[X1]
(XI1)-2->[X2]
(XI1)-{1}-(XI1)
(DELTA1)-->[X1]
(DELTA2)-->[X2]
(DELTA1)-3-(DELTA1)
(DELTA2)-4-(DELTA2)
(ETA1)-98->[Y1]
(ETA1)-5->[Y2]
(ETA2)-99->[Y3]
(ETA2)-6->[Y4]
(EPSILON1)-->[Y1]
(EPSILON2)-->[Y2]
(EPSILON3)-->[Y3]
(EPSILON4)-->[Y4]
(EPSILON1)-7-(EPSILON1)
(EPSILON2)-8-(EPSILON2)
(EPSILON3)-9-(EPSILON3)
(EPSILON4)-10-(EPSILON4)
(ZETA1)-->(ETA1)
(ZETA2)-->(ETA2)
(ZETA1)-11-(ZETA1)
(ZETA2)-12-(ZETA2)
(XI1)-13->(ETA1)
(XI1)-13->(ETA2)
(ETA1)-15->(ETA2)

49 Constraint Interaction

50

51 Constraint Interaction – Model With ULI Constraints

(XI1)-->[X1]
(XI1)-2->[X2]
(XI1)-1-(XI1)
(DELTA1)-->[X1]
(DELTA2)-->[X2]
(DELTA1)-3-(DELTA1)
(DELTA2)-4-(DELTA2)
(ETA1)-->[Y1]
(ETA1)-5->[Y2]
(ETA2)-->[Y3]
(ETA2)-6->[Y4]
(EPSILON1)-->[Y1]
(EPSILON2)-->[Y2]
(EPSILON3)-->[Y3]
(EPSILON4)-->[Y4]
(EPSILON1)-7-(EPSILON1)
(EPSILON2)-8-(EPSILON2)
(EPSILON3)-9-(EPSILON3)
(EPSILON4)-10-(EPSILON4)
(ZETA1)-->(ETA1)
(ZETA2)-->(ETA2)
(ZETA1)-11-(ZETA1)
(ZETA2)-12-(ZETA2)
(XI1)-13->(ETA1)
(XI1)-13->(ETA2)
(ETA1)-15->(ETA2)

52 Constraint Interaction – Model With ULI Constraints

53 Typical Characteristics of Statistical Computing Cycles

Back-loaded
– Occur late in the research cycle, after data are gathered

Reactive
– Often occur in support of analytic activities that are reactions to previous analysis results

54 Traditional Statistical World-View

Data come first
Analyses come second
Analyses are well-understood and will work
Before the data arrive, there is nothing to analyze and no reason to start analyzing

55 Modern Statistical World View

Planning comes first
– Power Analysis, Precision Analysis, etc.
Planning may require some substantial computing
– Goal is to estimate required sample size
Data analysis must wait for data

56 Proactive SEM Statistical World View

SEM involves interaction between specific model(s) and data.
– Some models may not “work” with many data sets
Planning involves:
– Power Analysis
– Precision Analysis
– Confirming Identification
– Proactive Analysis of Model Performance
Without proper proactive analysis, research can be stopped cold with an “unhappy surprise.”

57 Barriers

Software
– Design
– Availability
Education

