
1 MEASUREMENT MODELS

2 BASIC EQUATION: x = τ + e
x = observed score
τ = true (latent) score: represents the score that would be obtained over many independent administrations of the same item or test
e = error: the difference between x and τ

3 ASSUMPTIONS
τ and e are independent (uncorrelated).
The equation can hold for an individual or a group, at one occasion or across occasions:
x_ijk = τ_ijk + e_ijk (individual)
x_•jk = τ_•jk + e_•jk (group)
and combinations (individual across time).

4 [Path diagram: latent true score τ → observed score x with loading λ_xτ; error term e → x]

5 RELIABILITY
Reliability is a proportion-of-variance measure (a squared quantity).
It is defined as the proportion of observed-score (x) variance due to true-score (τ) variance:
ρ²_xτ = ρ_xx' = σ²_τ / σ²_x
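As a quick numerical illustration of this definition, here is a minimal Python sketch (not part of the slides; the simulation parameters are made up) that simulates x = τ + e and checks that σ²_τ/σ²_x matches the squared correlation of x with τ:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    tau = rng.normal(0, 1.0, n)    # true scores, variance 1.0
    e = rng.normal(0, 0.5, n)      # errors, independent of tau, variance 0.25
    x = tau + e                    # observed scores, variance 1.25

    print(tau.var() / x.var())              # sigma^2_tau / sigma^2_x, ~0.80
    print(np.corrcoef(x, tau)[0, 1] ** 2)   # rho^2_x,tau, also ~0.80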

6 [Diagram: observed-score variance Var(x) partitioned into true-score variance Var(τ) and error variance Var(e); reliability is the Var(τ) share of Var(x)]

7 Reliability: parallel forms
x_1 = τ + e_1, x_2 = τ + e_2
ρ(x_1, x_2) = reliability = ρ_xx' = correlation between parallel forms
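Continuing the same simulation idea, a short sketch (again with made-up parameters) showing that the correlation between two parallel forms recovers the reliability:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    tau = rng.normal(0, 1.0, n)         # shared true score
    x1 = tau + rng.normal(0, 0.5, n)    # parallel form 1
    x2 = tau + rng.normal(0, 0.5, n)    # parallel form 2: same loading, same error variance

    print(np.corrcoef(x1, x2)[0, 1])    # ~0.80 = sigma^2_tau / sigma^2_x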

8 [Path diagram: τ → x_1 and τ → x_2 with equal loadings λ_xτ; errors e_1, e_2]
ρ_xx' = λ_xτ * λ_xτ

9 ASSUMPTIONS
τ and e are independent (uncorrelated).
The equation can hold for an individual or a group, at one occasion or across occasions:
x_ijk = τ_ijk + e_ijk (individual)
x_•jk = τ_•jk + e_•jk (group)
and combinations (individual across time).

10 Reliability: Spearman-Brown
Can show the reliability of the composite is
ρ_kk' = k ρ_xx' / [1 + (k-1) ρ_xx']
k = # times the test is lengthened.
Example: a test score has reliability .7; doubling the length produces reliability = 2(.7) / [1 + .7] = .824
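A one-function Python sketch of the Spearman-Brown formula (the function name is mine, not from the slides), reproducing the doubling example:

    def spearman_brown(rel, k):
        """Reliability of a test lengthened k times, given the single-form reliability."""
        return k * rel / (1 + (k - 1) * rel)

    print(spearman_brown(0.7, 2))   # ~0.824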

11 Reliability: parallel forms
For 3 or more items x_i the same general form holds; the reliability of any pair is the correlation between them.
The reliability of the composite (the sum of the items) is based on the average inter-item correlation: the stepped-up reliability, or Spearman-Brown formula.

12 RELIABILITY
[Concept map of approaches to estimating reliability: generalizability theory (ANOVA-based d- and g-coefficients, Hoyt); internal consistency (Cronbach's alpha, split-half, average inter-item correlation, Spearman-Brown, KR-20 and KR-21 for dichotomous scoring); test-retest; inter-rater; parallel forms]

13 COMPOSITES AND FACTOR STRUCTURE
3 MANIFEST VARIABLES REQUIRED FOR UNIQUE IDENTIFICATION OF A SINGLE FACTOR
PARALLEL FORMS REQUIRES:
– EQUAL FACTOR LOADINGS
– EQUAL ERROR VARIANCES
– INDEPENDENCE OF ERRORS

14 [Path diagram: τ → x_1, x_2, x_3 with common loading λ_xτ; errors e_1, e_2, e_3]
ρ_xx' = λ_xiτ * λ_xjτ

15 RELIABILITY FROM SEM
TRUE SCORE VARIANCE OF THE COMPOSITE IS OBTAINABLE FROM THE LOADINGS:
σ²_τ = Σ_{i=1..K} λ²_i
K = # items or subtests
Under parallelism (equal loadings) this equals K λ²_xτ.
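In code, the slide's expression is just a sum of squared loadings; a tiny sketch with illustrative loadings:

    loadings = [0.7, 0.7, 0.7]                            # parallel case: equal loadings
    true_score_variance = sum(l ** 2 for l in loadings)   # sum of lambda^2 = K * lambda^2
    print(true_score_variance)                            # 1.47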

16 Hancock's Formula
H_j = 1 / [1 + 1 / Σ_i (λ²_ij / (1 - λ²_ij))]
Ex. λ_1 = .7, λ_2 = .8, λ_3 = .6
H = 1 / [1 + 1/(.49/.51 + .64/.36 + .36/.64)]
  = 1 / [1 + 1/(.96 + 1.78 + .56)]
  = 1 / (1 + 1/3.30) = .77
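A minimal Python sketch of coefficient H from standardized loadings (the function name is mine, not from the slides), reproducing the worked example above:

    def hancock_h(loadings):
        """Hancock's coefficient H (construct reliability) from standardized loadings."""
        s = sum(l ** 2 / (1 - l ** 2) for l in loadings)
        return 1 / (1 + 1 / s)

    print(hancock_h([0.7, 0.8, 0.6]))   # ~0.77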

17 Hancock's Formula Explained
H_j = 1 / [1 + 1 / Σ_i (λ²_ij / (1 - λ²_ij))]
Now assume strict parallelism: then λ²_ij = ρ²_xτ for every item, and
H_j = 1 / [1 + 1 / (k ρ²_xτ / (1 - ρ²_xτ))]
    = k ρ²_xτ / [1 + (k-1) ρ²_xτ]
    = the Spearman-Brown formula
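A quick numerical check of this equivalence, reusing the hancock_h and spearman_brown helpers sketched above with a hypothetical common loading:

    lam = 0.7                             # equal standardized loading under strict parallelism
    item_rel = lam ** 2                   # single-item reliability rho^2_xtau = 0.49
    print(hancock_h([lam, lam, lam]))     # ~0.742 for k = 3 parallel items
    print(spearman_brown(item_rel, 3))    # same value, ~0.742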

18 RELIABILITY FROM SEM
RELIABILITY OF THE COMPOSITE IS OBTAINABLE FROM THE LOADINGS:
α = [K/(K-1)] [1 - 1/σ²_τ]   (σ²_τ = Σ λ²_i from the previous slide)
Example: ρ²_xτ = .8, K = 11, so σ²_τ = K ρ²_xτ = 8.8 and
α = (11/10)(1 - 1/8.8) = .975
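A sketch reproducing the slide's arithmetic (assuming, as read above, that the bracketed term is 1 minus the reciprocal of the composite true-score variance Kρ²_xτ):

    K = 11
    rho2_xtau = 0.8
    sigma2_tau = K * rho2_xtau                      # 8.8, per the reliability-from-SEM slide
    alpha = (K / (K - 1)) * (1 - 1 / sigma2_tau)
    print(alpha)                                    # ~0.975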

19 SEM MODELING OF PARALLEL FORMS
PROC CALIS COV CORR MOD;
  LINEQS
    X1 = L1 F1 + E1,
    X2 = L1 F1 + E2,
    …
    X10 = L1 F1 + E10;
  STD
    E1-E10 = 10 * THE1,
    F1 = 1.0;

20 TAU EQUIVALENCE
ITEM TRUE SCORES DIFFER BY AN ADDITIVE CONSTANT: τ_i = τ_j + c
ERROR STRUCTURE UNCHANGED AS TO EQUAL VARIANCES AND INDEPENDENCE

21 TESTING TAU EQUIVALENCE
ANOVA: TREAT AS A REPEATED-MEASURES SUBJECT × ITEM DESIGN:
PROC VARCOMP;
  CLASS ID ITEM;
  MODEL SCORE = ID ITEM;
A LOW VARIANCE ESTIMATE CAN BE TAKEN AS EVIDENCE FOR PARALLELISM (IT IS UNLIKELY TO BE EXACTLY ZERO).

22 CONGENERIC MODEL
LESS RESTRICTIVE THAN PARALLEL FORMS OR TAU EQUIVALENCE:
– LOADINGS MAY DIFFER
– ERROR VARIANCES MAY DIFFER
MOST COMPLEX COMPOSITES ARE CONGENERIC: WAIS, WISC-III, K-ABC, MMPI, etc.

23 [Path diagram, congeneric model: τ → x_1, x_2, x_3 with loadings λ_x1τ, λ_x2τ, λ_x3τ; errors e_1, e_2, e_3]
ρ(x_1, x_2) = λ_x1τ * λ_x2τ

24 COEFFICIENT ALPHA
ρ_xx' = 1 - σ²_E / σ²_X = 1 - [Σ σ²_i (1 - ρ_ii)] / σ²_X, since errors are uncorrelated
α = [K/(K-1)] [1 - (Σ s²_i) / s²_X]
where X = Σ x_i (composite score), s²_i = variance of subtest x_i, s²_X = variance of the composite.
Alpha does not assume knowledge of the subtest reliabilities ρ_ii.
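A minimal Python sketch (the function name and data are mine, not from the slides) computing coefficient alpha from a persons-by-items score matrix:

    import numpy as np

    def cronbach_alpha(scores):
        """Coefficient alpha; scores has shape (n_persons, K_items)."""
        scores = np.asarray(scores, dtype=float)
        K = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)           # s^2_i for each item/subtest
        composite_var = scores.sum(axis=1).var(ddof=1)   # s^2_X of the summed composite
        return (K / (K - 1)) * (1 - item_vars.sum() / composite_var)

    # Made-up scores for 4 persons on 3 items
    print(cronbach_alpha([[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]]))   # ~0.97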

25 COEFFICIENT ALPHA - NUNNALLY'S COEFFICIENT
IF WE KNOW THE RELIABILITY OF EACH SUBTEST, r_ii:
α_N = 1 - [Σ s²_i (1 - r_ii)] / s²_X
where r_ii = coefficient alpha of each subtest.
Willson (1996) showed α ≤ α_N.
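A short sketch of this composite-reliability computation with known subtest reliabilities (the function name and values are illustrative only):

    def nunnally_reliability(subtest_vars, subtest_rels, composite_var):
        """Composite reliability from known subtest variances and reliabilities."""
        error_var = sum(v * (1 - r) for v, r in zip(subtest_vars, subtest_rels))
        return 1 - error_var / composite_var

    # Hypothetical subtest variances 4, 5, 6 with reliabilities .8, .7, .9;
    # the composite variance (30) already includes the subtest covariances.
    print(nunnally_reliability([4, 5, 6], [0.8, 0.7, 0.9], 30.0))   # ~0.90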

26 SEM MODELING OF CONGENERIC FORMS
Mplus example:
  TITLE: this is an example of a CFA
  DATA: FILE IS ex5.1.dat;
  VARIABLE: NAMES ARE y1-y6;
  MODEL: f1 BY y1-y3;
    f2 BY y4-y6;
  OUTPUT: SAMPSTAT MOD STAND;

27 NUNNALLY'S RELIABILITY CASE
[Path diagram: τ → x_1, x_2, x_3 with loadings λ_x1τ, λ_x2τ, λ_x3τ; errors e_1, e_2, e_3 and specificities s_1, s_2, s_3; σ_XiXi = λ²_xiτ + s²_i]

28 CORRELATED ERROR PROBLEMS
[Path diagram: τ → x_1, x_2, x_3 with loadings; errors e_1, e_2, e_3; specificities]
Specificities can be misinterpreted as a correlated error model if they are correlated or form a second factor.

29 CORRELATED ERROR PROBLEMS
[Path diagram: τ → x_1, x_2, x_3 with loadings; errors e_1, e_2, e_3; specificities]
Specificities can be misinterpreted as a correlated error model if the specificities are correlated or form a second factor.

30 SEM MODELING OF CONGENERIC FORMS - CORRELATED ERRORS
Mplus example:
  TITLE: this is an example of a CFA
  DATA: FILE IS ex5.1.dat;
  VARIABLE: NAMES ARE y1-y6;
  MODEL: f1 BY y1-y3;
    f2 BY y4-y6;
    y4 WITH y5;
  OUTPUT: SAMPSTAT MOD STAND;
The y4 WITH y5 statement takes the residuals of the previous model and correlates them.

31 MULTIFACTOR STRUCTURE
Measurement model: does it hold for each factor?
– PARALLEL VS. TAU-EQUIVALENT VS. CONGENERIC
How are the factors related?
What does reliability mean in the context of a multifactor structure?

32 SIMPLE STRUCTURE
PSYCHOLOGICAL CONCEPT:
Maximize the loading of a manifest variable on one factor (ideal = 1.0).
Minimize the loadings of the manifest variable on all other factors (ideal = 0).

33 SIMPLE STRUCTURE Example

SUBTEST  FACTOR1  FACTOR2  FACTOR3
A        1        0        0
B        1        0        0
C        0        1        0
D        0        1        0
E        0        0        1
F        0        0        1

34 MULTIFACTOR ANALYSIS
Exploratory: determine the number and composition of factors from empirically sampled data
– # factors = # eigenvalues > 1.0 (using the squared multiple correlation of each item/subtest i with the rest as a variance estimate for λ²_xiτ)
– empirical loadings determine the structure
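A minimal Python sketch of the eigenvalues-greater-than-one count (the correlation matrix is made up, and for simplicity it uses the unadjusted correlation matrix rather than one with squared multiple correlations on the diagonal):

    import numpy as np

    # Hypothetical 4 x 4 inter-item correlation matrix with two blocks
    R = np.array([
        [1.0, 0.6, 0.1, 0.1],
        [0.6, 1.0, 0.1, 0.1],
        [0.1, 0.1, 1.0, 0.5],
        [0.1, 0.1, 0.5, 1.0],
    ])

    eigenvalues = np.linalg.eigvalsh(R)          # eigenvalues of the correlation matrix
    n_factors = int((eigenvalues > 1.0).sum())   # count of eigenvalues > 1.0
    print(sorted(eigenvalues, reverse=True), n_factors)   # suggests 2 factors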

35 MULTIFACTOR ANALYSIS
  TITLE: this is an example of an exploratory factor analysis with continuous factor indicators
  DATA: FILE IS ex4.1.dat;
  VARIABLE: NAMES ARE y1-y12;
  ANALYSIS: TYPE = EFA 1 4;

36 MULTIFACTOR MODEL WITH THEORETICAL PARAMETERS
Mplus example:
  TITLE: this is an example of a CFA
  DATA: FILE IS ex5.1.dat;
  VARIABLE: NAMES ARE y1-y6;
  MODEL: f1 BY y1@.7 y2@.8 y3@.6;
    f2 BY y4@.6 y5@.7 y6@.8;
    f1 WITH f2@.7;
  OUTPUT: SAMPSTAT MOD STAND;

37 MINIMAL CORRELATED FACTOR STRUCTURE
[Path diagram: τ_1 → x_1, x_3 with loadings λ_x1τ1, λ_x3τ1; τ_2 → x_2, x_4 with loadings λ_x2τ2, λ_x4τ2; errors e_1 through e_4; factor correlation ρ_12]

38 FACTOR RELIABILITY
Reliability for Factor 1: ρ = 2(λ_x1τ1 * λ_x3τ1) / (1 + λ_x1τ1 * λ_x3τ1)
(Spearman-Brown for Factor 1 reliability, based on the average inter-item correlation)
Reliability for Factor 2: ρ = 2(λ_x2τ2 * λ_x4τ2) / (1 + λ_x2τ2 * λ_x4τ2)
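A small numeric sketch of the Factor 1 case with hypothetical loadings, reusing the spearman_brown helper sketched earlier:

    lam_x1_t1, lam_x3_t1 = 0.7, 0.6           # illustrative loadings of x1 and x3 on factor 1
    inter_item_r = lam_x1_t1 * lam_x3_t1      # implied correlation between the two indicators
    print(spearman_brown(inter_item_r, 2))    # 2r / (1 + r) = ~0.59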

39 FACTOR RELIABILITY
Generalizes to any factors: reliability is simply the measurement-model reliability of the scores for that factor.
This has not been well discussed in the literature:
– the problem is that exploratory analyses produce successively smaller eigenvalues for factors because of the extraction process
– the second factor will in general be less reliable when loadings are used to estimate the inter-item correlations

40 FACTOR RELIABILITY
Theoretically, each factor's reliability should be independent of any other's, regardless of the covariance between the factors.
Thus, the order of factor extraction should be independent of factor structure and reliability, since extraction produces maximum sample eigenvalues (and sample loadings) in sequence.
"Composite" is a misnomer in testing if the factors are treated as independent constructs rather than as subtests of a more global composite score (separate scores rather than one score created by summing the subscale scores).

41 CONSTRAINED FACTOR MODELS
If reliabilities for the scales are known independently of the current data (estimated from the items comprising the scales, for example), the error variances can be constrained:
s²_ei = s²_i [1 - ρ_i]
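A short sketch of the constraint computation with hypothetical scale standard deviations and known reliabilities:

    # Illustrative scale SDs and independently known reliabilities
    scale_sds = [3.0, 2.5, 4.0]
    reliabilities = [0.85, 0.78, 0.90]

    # Fixed error variances s^2_e = s^2 * (1 - reliability), as on the slide
    error_variances = [sd ** 2 * (1 - rel) for sd, rel in zip(scale_sds, reliabilities)]
    print(error_variances)   # ~[1.35, 1.38, 1.60]: values at which to fix the residual variances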

42 CONSTRAINED SEM - KNOWN RELIABILITY
[Path diagram: τ → x_1, x_2, x_3 with loadings λ_x1τ, λ_x2τ, λ_x3τ; error paths fixed at s_x1[1 - ρ_1]^(1/2), s_x2[1 - ρ_2]^(1/2), s_x3[1 - ρ_3]^(1/2)]

43 CONSTRAINED SEM - KNOWN RELIABILITY
Mplus example:
  TITLE: this is an example of a CFA with known error unreliabilities
  DATA: FILE IS ex5.1.dat;
  VARIABLE: NAMES ARE y1-y6;
  MODEL: f1 BY y1-y3;
    f2 BY y4-y6;
    y1@.4; y2@.3;
  OUTPUT: SAMPSTAT MOD STAND;
A similar statement (variable@value) fixes the residual variance of each item.

44 SEM Measurement Procedures
1. Evaluate the theoretical measurement model for ALL factors (no single-indicator variables included).
Demonstrate discriminant validity by showing that the factors are separate constructs.
Revise factors as needed to demonstrate this; drop some manifest variables if necessary and not theoretically damaging.
Ref: Anderson & Gerbing (1988)

