
1 Why the Items versus Parcels Controversy Needn't Be One
Todd D. Little, University of Kansas
Director, Quantitative Training Program; Director, Center for Research Methods and Data Analysis; Director, Undergraduate Social and Behavioral Sciences Methodology Minor; Member, Developmental Psychology Training Program
crmda.KU.edu
Workshop presented 05-23-2012 at the University of Turku, based on my Presidential Address presented 08-04-2011 at the American Psychological Association Meeting in Washington, DC

2–6 University of Kansas [image slides with no additional text]

7 Overview
– Learn what parcels are and how to make them
– Learn the reasons for and the conditions under which parcels are beneficial
– Learn the conditions under which parcels can be problematic
Disclaimer: This talk reflects my view that parcels per se aren't controversial if done thoughtfully.

8 Key Sources and Acknowledgements
Special thanks to: Mijke Rhemtulla, Kimberly Gibson, Alex Schoemann, Wil Cunningham, Golan Shahar, John Graham & Keith Widaman
– Little, T. D., Rhemtulla, M., Gibson, K., & Schoemann, A. M. (in press). Why the items versus parcels controversy needn't be one. Psychological Methods.
– Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9, 151-173.
– Little, T. D., Lindenberger, U., & Nesselroade, J. R. (1999). On selecting indicators for multivariate measurement and modeling with latent variables: When "good" indicators are bad and "bad" indicators are good. Psychological Methods, 4, 192-211.

9 What is Parceling?
Parceling: averaging (or summing) two or more items to create more reliable indicators of a construct ≈ packaging items, tying them together
A data pre-processing strategy
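A minimal sketch of the averaging step in Python, assuming the item responses sit in a pandas DataFrame; the column names and the item-to-parcel pairing are hypothetical and simply mirror the mood example on the following slides.

```python
import pandas as pd

# Hypothetical item-level data: 12 mood items scored 1-4 (toy values only).
items = pd.DataFrame({
    "great": [3, 4, 2], "glad": [4, 4, 3], "happy": [3, 3, 2],
    "super": [4, 3, 2], "cheerful": [3, 4, 3], "good": [4, 4, 2],
    "unhappy": [1, 2, 3], "bad": [2, 1, 3], "down": [1, 2, 4],
    "blue": [2, 1, 3], "terrible": [1, 1, 2], "sad": [2, 2, 3],
})

# Parceling = averaging two (or more) items into a single indicator.
parcels = pd.DataFrame({
    "pos1": items[["great", "glad"]].mean(axis=1),
    "pos2": items[["happy", "super"]].mean(axis=1),
    "pos3": items[["cheerful", "good"]].mean(axis=1),
    "neg1": items[["unhappy", "bad"]].mean(axis=1),
    "neg2": items[["down", "blue"]].mean(axis=1),
    "neg3": items[["terrible", "sad"]].mean(axis=1),
})
print(parcels)
```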

10 A CFA of Items
[Diagram: two-factor CFA of the 12 mood items, Positive (Great, Cheerful, Happy, Good, Glad, Super) and Negative (Sad, Down, Unhappy, Blue, Bad, Terrible), with standardized loadings of roughly .69 to .84 and a factor correlation of -.29]
Model fit: χ²(53, n = 759) = 181.2; RMSEA = .056 (.048–.066); NNFI/TLI = .97; CFI = .98

11 CFA: Using Parcels
Average 2 items to create 3 parcels per construct:
– Positive: Great & Glad, Happy & Super, Cheerful & Good
– Negative: Unhappy & Bad, Down & Blue, Terrible & Sad
[Diagram: two-factor CFA of the six parcels (6.2.Parcels)]

12 CFA: Using Parcels
[Diagram: two-factor CFA of the six parcels, with standardized loadings of about .87 to .91 and a factor correlation of -.27]
Model fit: χ²(8, n = 759) = 26.8; RMSEA = .056 (.033–.079); NNFI/TLI = .99; CFI = .99
– Similar solution, similar factor correlation
– Higher loadings, more reliable information
– Good model fit, improved χ²
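For readers who want to try the parcel-level CFA themselves, here is a sketch using the semopy package's lavaan-style model syntax; this is not the software used in the talk, and the `parcels` data frame continues the hypothetical example above (a realistic sample size would be needed in practice).

```python
import semopy

# Two-factor CFA on the six parcels; loadings, residual variances, and the
# factor correlation are estimated from the parcel-level data.
desc = """
Positive =~ pos1 + pos2 + pos3
Negative =~ neg1 + neg2 + neg3
"""
model = semopy.Model(desc)
model.fit(parcels)                # `parcels` is the hypothetical DataFrame built earlier
print(model.inspect())            # parameter estimates
print(semopy.calc_stats(model))   # fit indices (chi-square, CFI, TLI, RMSEA), if available in your version
```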

13 Philosophical Issues
To parcel, or not to parcel…?

14 Pragmatic View (from Little et al., 2002)
"Given that measurement is a strict, rule-bound system that is defined, followed, and reported by the investigator, the level of aggregation used to represent the measurement process is a matter of choice and justification on the part of the investigator"
Preferred terms: remove unwanted, clean, reduce, minimize, strengthen, etc.

15 Empiricist / Conservative View (from Little et al., 2002)
"Parceling is akin to cheating because modeled data should be as close to the response of the individual as possible in order to avoid the potential imposition, or arbitrary manufacturing of a false structure"
Preferred terms: mask, conceal, camouflage, hide, disguise, cover-up, etc.

16 Pseudo-Hobbesian View
Parcels should be avoided because researchers are ignorant (perhaps stupid) and prone to mistakes. And, because the unthoughtful or unaware application of parcels by unwitting researchers can lead to bias, they should be avoided.
Preferred terms: most (all) researchers are un___, as in unaware, unable, unwitting, uninformed, unscrupulous, etc.

17 Other Issues I
Classical school vs. Modeling school
– Objectivity versus transparency
– Items vs. indicators
– Factors vs. constructs
Self-correcting nature of science
Suboptimal simulations
– Don't include population misfit
– Emphasize 'straw conditions' and proving the obvious; sometimes overgeneralize

18 Other Issues II
Focus of inquiry
– Question about the items / scale development? Avoid parcels.
– Question about the constructs? Parcels are warranted but must be done thoughtfully!
– Question about factorial invariance? Parcels are OK if done thoughtfully.

19 Measurement
Measurement starts with operationalization: defining a concept with specific observable characteristics [hitting and kicking ~ operational definition of Overt Aggression]
The process of linking constructs to their manifest indicants (objects/events that can be seen, touched, or otherwise recorded; cf. items vs. indicators)
Rule-bound assignment of numbers to the indicants of that which exists [e.g., Never = 1, Seldom = 2, Often = 3, Always = 4] … although convention often 'rules', the rules should be chosen and defined by the investigator
"Whatever exists at all exists in some amount. To know it thoroughly involves knowing its quantity as well as its quality." - E. L. Thorndike (1918)
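As a small illustration of the rule-bound assignment of numbers, a hypothetical coding rule in Python (the labels and codes follow the bracketed example above):

```python
# Coding rule chosen and reported by the investigator (example from the slide).
response_codes = {"Never": 1, "Seldom": 2, "Often": 3, "Always": 4}

raw_responses = ["Often", "Never", "Always", "Seldom"]
coded = [response_codes[r] for r in raw_responses]
print(coded)  # [3, 1, 4, 2]
```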

20 "Indicators are our worldly window into the latent space" - John R. Nesselroade

21 Classical Variants
a) X_i = T_i + S_i + e_i
b) X = T + S + e
c) X_1 = T_1 + S_1 + e_1
X_i: a person's observed score on an item
T_i: 'true' score (i.e., what we hope to measure)
S_i: item-specific, yet reliable, component
e_i: random error or noise
Assume: S_i and e_i are normally distributed (with a mean of zero) and uncorrelated with each other; across all items in a domain, the S_i's are uncorrelated with each other, as are the e_i's

22 Latent Variable Variants
a) X_1 = T + S_1 + e_1
b) X_2 = T + S_2 + e_2
c) X_3 = T + S_3 + e_3
X_1–X_3: multiple indicators of the same construct
T: common 'true' score across indicators
S_1–S_3: item-specific, yet reliable, components
e_1–e_3: random error or noise
Assume: the S's and e's are normally distributed (with a mean of zero) and uncorrelated with each other; across all items in a domain, the S's are uncorrelated with each other, as are the e's
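A quick simulation sketch of this decomposition (all variance values are invented for illustration): three indicators share the same T but carry their own S and e, so their average keeps the common variance while shrinking the unique parts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # large n so empirical variances sit close to the assumed values

# Assumed (hypothetical) variance components.
T = rng.normal(0, 1.0, n)                      # common 'true' score
S = [rng.normal(0, 0.5, n) for _ in range(3)]  # item-specific but reliable parts
e = [rng.normal(0, 0.7, n) for _ in range(3)]  # random error

X = [T + S[i] + e[i] for i in range(3)]        # X_i = T + S_i + e_i
parcel = np.mean(X, axis=0)                    # average of the three indicators

# A single item and the parcel share the same common variance (Var(T) = 1),
# but the parcel carries far less specific and error variance.
print(round(np.var(X[0]), 2), round(np.var(parcel), 2))    # about 1.74 vs 1.25
print(round(np.corrcoef(T, X[0])[0, 1] ** 2, 2),
      round(np.corrcoef(T, parcel)[0, 1] ** 2, 2))          # about 0.57 vs 0.80
```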

23 Empirical Pros
Psychometric characteristics of parcels (vs. items):
– Higher reliability, communality, & ratio of common-to-unique factor variance
– Lower likelihood of distributional violations
– More, smaller, and more-equal intervals

Item    Never  Seldom  Often  Always
Happy     1      2       3      4
Glad      1      2       3      4
Mean of the two items: 1, 1.5, 2, 2.5, 3, 3.5, 4
Sum of the two items: 2, 3, 4, 5, 6, 7, 8
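A quick enumeration of the table's interval point, assuming two 4-point items:

```python
from itertools import product

scale = [1, 2, 3, 4]  # Never, Seldom, Often, Always for both Happy and Glad
means = sorted({(happy + glad) / 2 for happy, glad in product(scale, scale)})
sums  = sorted({happy + glad for happy, glad in product(scale, scale)})

print(means)  # [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0] -> more, smaller, more-equal intervals
print(sums)   # [2, 3, 4, 5, 6, 7, 8]
```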

24 More Empirical Pros
Model estimation and fit with parcels (vs. items):
– Fewer parameter estimates
– Lower indicator-to-subject ratio
– Reduces sources of parsimony error (population misfit of a model)
– Lower likelihood of correlated residuals & dual factor loadings
– Reduces sources of sampling error
– Makes large models tractable/estimable
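The "fewer parameter estimates" point can be made concrete with a quick count for the two models shown earlier; the sketch assumes a simple CFA with factor variances fixed to 1, free loadings and residual variances, a free factor correlation, and no cross-loadings or correlated residuals.

```python
def cfa_df(n_indicators: int, n_factors: int) -> int:
    """Degrees of freedom for a simple CFA: unique (co)variances minus free
    parameters (loadings + residual variances + factor correlations)."""
    moments = n_indicators * (n_indicators + 1) // 2
    free_params = n_indicators + n_indicators + n_factors * (n_factors - 1) // 2
    return moments - free_params

print(cfa_df(12, 2))  # 53 -> matches the item-level model, chi-square(53)
print(cfa_df(6, 2))   # 8  -> matches the parcel-level model, chi-square(8)
```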

25 Sources of Variance in Items [diagram]

26 Simple Parcel
[Diagram: averaging three items, each composed of T + S_i + e_i; the common T is retained while each specific (S) and error (e) component is reduced to 1/9 of its original size!]
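In the notation of the latent variable variants (slide 22), the 1/9 shrinkage follows directly from averaging three items:

```latex
P \;=\; \tfrac{1}{3}\,(X_1 + X_2 + X_3)
  \;=\; T \;+\; \tfrac{1}{3}\,(S_1 + S_2 + S_3) \;+\; \tfrac{1}{3}\,(e_1 + e_2 + e_3),
\qquad
\operatorname{Var}\!\bigl(\tfrac{1}{3}S_i\bigr) \;=\; \tfrac{1}{9}\operatorname{Var}(S_i),
\quad
\operatorname{Var}\!\bigl(\tfrac{1}{3}e_i\bigr) \;=\; \tfrac{1}{9}\operatorname{Var}(e_i).
```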

27 Correlated Residual
[Diagram: three items averaged into a parcel where two of the items share a specific component (S_y), i.e., a correlated residual; that shared S_y variance remains in the parcel]

28 Cross-loading: Correlated Factors
[Diagram: variance components of a parcel when one of its items cross-loads on a second factor; the unwanted common variance (U1) is tied to the target construct through the correlation between T1 and T2]

29 Cross-loading: Uncorrelated Factors
[Diagram: variance components of a parcel when one of its items cross-loads on a second, uncorrelated factor (unwanted source U1)]

30–31 Construct = Common Variance of Indicators [diagrams]

32 Empirical Cautions
[Diagram: a two-item parcel; the specific (S) and error (E) components shrink to ¼ of their original size, while variance shared across the items (M) is not reduced]

33–34 Construct = Common Variance of Indicators [diagrams]

35 Three is Ideal
[Diagram: two constructs (Positive, Negative), each measured by three two-item parcels: Great & Glad, Happy & Super, Cheerful & Good; Unhappy & Bad, Down & Blue, Terrible & Sad]

36 Three is Ideal
Matrix algebra formula: Σ = Λ Ψ Λ′ + Θ
The model-implied (co)variances for the six parcels follow this formula, e.g., Var(P1) = λ_11² ψ_11 + θ_11, Cov(P2, P1) = λ_21 λ_11 ψ_11, and Cov(N1, P1) = λ_11 λ_42 ψ_21 (parcels: Great & Glad, Happy & Super, Cheerful & Good; Unhappy & Bad, Down & Blue, Terrible & Sad).
Cross-construct item associations (the boxed block) are estimated only via ψ_21, the latent constructs' correlation. Degrees of freedom arise only from the between-construct relations.
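A small numpy sketch of the implied covariance matrix for the six-parcel model; the loading, correlation, and residual values are invented for illustration.

```python
import numpy as np

# Hypothetical parameters for the 6-parcel, 2-construct model.
Lam = np.array([[0.9, 0.0], [0.9, 0.0], [0.9, 0.0],    # Positive parcels P1-P3
                [0.0, 0.9], [0.0, 0.9], [0.0, 0.9]])    # Negative parcels N1-N3
Psi = np.array([[1.00, -0.27],
                [-0.27, 1.00]])                          # factor variances fixed at 1, psi_21 free
Theta = np.diag([0.2] * 6)                               # residual variances

Sigma = Lam @ Psi @ Lam.T + Theta                        # Sigma = Lambda Psi Lambda' + Theta
print(np.round(Sigma, 3))
# Every cross-construct covariance equals lambda_i * psi_21 * lambda_j, so the whole
# 3 x 3 off-diagonal block is reproduced by the single parameter psi_21.
```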

37 Empirical Cons
Multidimensionality
– Constructs and relationships can be hard to interpret if done improperly
Model misspecification
– Can get improved model fit regardless of whether the model is correctly specified
– Increased Type II error rate if the question is about the items
Parcel-allocation variability
– Solutions depend on the particular allocation of items to parcels (Sterba & MacCallum, 2010; Sterba, in press)
– Applicable when the conditions for sampling error are high

38 Psychometric Issues
Principles of aggregation (e.g., Rushton et al.)
– Any one item is less representative than the average of many items (selection rationale)
– Aggregating items yields greater precision
Law of large numbers
– More is better, yielding more precise estimates of parameters (and of a person's true score)
– Normalizing tendency
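The aggregation principle can be illustrated with the Spearman-Brown prophecy formula (the single-item reliability of .40 is an arbitrary illustrative value):

```python
def spearman_brown(rel_single: float, k: int) -> float:
    """Reliability of the average (or sum) of k parallel items,
    given the reliability of a single item."""
    return k * rel_single / (1 + (k - 1) * rel_single)

for k in (1, 2, 3, 6, 12):
    print(k, round(spearman_brown(0.40, k), 3))
# 1: 0.4, 2: 0.571, 3: 0.667, 6: 0.8, 12: 0.889 -> aggregates are more reliable
```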

39–49 [Figure sequence illustrating indicator selection in the construct space: the construct's centroid; the pool of potential indicators; selecting six indicators as three pairs, taking each pair's mean, and finding the centroid of the means; a second selection of three pairs with its means and centroid; and finally three sets of three items, whose means yield more reliable and accurate indicators]

50 Building Parcels
Theory – know thy S and the nature of your items
Random assignment of items to parcels (e.g., fMRI)
– Use Sterba's calculator to find allocation variability when sampling error is high
Balancing technique
– Combine items with higher loadings with items having smaller loadings [reverse serpentine pattern]
Using a priori designs (e.g., CAMI)
– Develop new tests or measures with parcels as the goal for use in research
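A sketch of two of these assignment strategies in Python; the item names and loadings are invented, and Sterba's allocation-variability calculator itself is a separate tool not reproduced here.

```python
import random

# Hypothetical item loadings from a preliminary factor analysis.
loadings = {"i1": 0.82, "i2": 0.78, "i3": 0.74, "i4": 0.69, "i5": 0.63,
            "i6": 0.58, "i7": 0.51, "i8": 0.44, "i9": 0.37}
n_parcels = 3

# Balancing technique: rank items by loading and deal them out in a
# reverse-serpentine pattern (parcels 1-2-3, then 3-2-1, then 1-2-3, ...).
ranked = sorted(loadings, key=loadings.get, reverse=True)
balanced = {p: [] for p in range(n_parcels)}
for start in range(0, len(ranked), n_parcels):
    chunk = ranked[start:start + n_parcels]
    order = range(n_parcels) if (start // n_parcels) % 2 == 0 else reversed(range(n_parcels))
    for parcel_id, item in zip(order, chunk):
        balanced[parcel_id].append(item)
print("balanced:", balanced)   # high loadings paired with low loadings in each parcel

# Random assignment: shuffle the items, then deal them into parcels.
shuffled = ranked[:]
random.shuffle(shuffled)
print("random:  ", {p: shuffled[p::n_parcels] for p in range(n_parcels)})
```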

51 Techniques: Multidimensional Case
Example: 'Intelligence' ~ Spatial, Verbal, Numerical
Domain Representative Parcels
– Mixed item content from the various dimensions
– Each parcel consists of: 1 Spatial item, 1 Verbal item, and 1 Numerical item
Facet Representative Parcels (recommended method)
– Internally consistent; each parcel is a 'facet', a singular dimension of the construct
– Each parcel consists of, e.g., 3 Spatial items
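A sketch of the two parceling schemes for a hypothetical nine-item measure with three Spatial, three Verbal, and three Numerical items:

```python
# Hypothetical items grouped by facet.
spatial   = ["spatial_1", "spatial_2", "spatial_3"]
verbal    = ["verbal_1", "verbal_2", "verbal_3"]
numerical = ["numerical_1", "numerical_2", "numerical_3"]

# Domain-representative parcels: each parcel mixes one item from every facet.
domain_parcels = [list(triplet) for triplet in zip(spatial, verbal, numerical)]
print(domain_parcels)  # [['spatial_1', 'verbal_1', 'numerical_1'], ...]

# Facet-representative parcels (recommended): each parcel is a single facet.
facet_parcels = [spatial, verbal, numerical]
print(facet_parcels)   # [['spatial_1', 'spatial_2', 'spatial_3'], ...]
```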

52 Domain Representative Parcels
[Diagram: Parcel #1, #2, and #3 each formed by combining one Spatial (S), one Verbal (V), and one Numerical (N) item]

53 Domain Representative [diagram]

54 Domain Representative
But which facet is driving the correlation among constructs? (Intellective Ability, Spatial Ability, Verbal Ability, Numerical Ability)

55 Facet Representative Parcels
[Diagram: three single-facet parcels, each formed from items of one facet: Parcel: Spatial, Parcel: Verbal, Parcel: Numerical]

56 Facet Representative [diagram]

57 Facet Representative
[Diagram: Intellective Ability; the diagram depicts smaller communalities (amount of shared variance)]

58 Facet Representative Parcels
[Diagram: a more realistic case with higher communalities]

59 Facet Representative [diagram]

60 Facet Representative
[Diagram: Intellective Ability; the parcels carry more reliable information]

61 2nd Order Representation
Capture multiple sources of variance?

62 2nd Order Representation
Variance can be partitioned even further

63 2nd Order Representation
Lower-order constructs retain facet-specific variance

64 Functionally Equivalent Models
Explicit higher-order structure vs. implicit higher-order structure

65 When Facet Representative Is Best [diagram]

66 When Domain Representative Is Best [diagram]

67 Thank You!
Todd D. Little, University of Kansas
Director, Quantitative Training Program; Director, Center for Research Methods and Data Analysis; Director, Undergraduate Social and Behavioral Sciences Methodology Minor; Member, Developmental Psychology Training Program
crmda.KU.edu
Workshop presented 05-23-2012 at the University of Turku, based on the Presidential Address presented 08-04-2011 at the American Psychological Association Meeting in Washington, DC

68 Update
Dr. Todd Little is currently at Texas Tech University
– Director, Institute for Measurement, Methodology, Analysis and Policy (IMMAP)
– Director, "Stats Camp"
– Professor, Educational Psychology and Leadership
Email: yhat@ttu.edu
IMMAP (immap.educ.ttu.edu)
Stats Camp (Statscamp.org)
www.Quant.KU.edu

