
MEASUREMENT MODELS

BASIC EQUATION

x = τ + e

x = observed score
τ = true (latent) score: the score that would be obtained over many independent administrations of the same item or test
e = error: the difference between x and τ

ASSUMPTIONS

τ and e are independent (uncorrelated). The equation can hold for an individual or a group, at one occasion or across occasions:

x_ijk = τ_ijk + e_ijk (individual i, item j, occasion k)
x̄ = τ̄ + ē (group: averaged over subscripts)

and combinations (e.g., an individual across time).

[Path diagram: latent true score τ → observed score x, with loading λ_xτ; error e → x]

RELIABILITY

Reliability is a proportion-of-variance measure (a squared quantity). It is defined as the proportion of observed-score (x) variance due to true-score (τ) variance:

ρ²_xτ = ρ_xx' = σ²_τ / σ²_x
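The variance-ratio definition above can be checked with a quick simulation, sketched here in Python rather than the slides' SAS/Mplus; the true-score and error variances (9 and 4) are arbitrary assumptions:

```python
import random

random.seed(1)

# Simulate x = tau + e with tau and e independent (assumed variances 9 and 4).
n = 100_000
tau = [random.gauss(0, 3) for _ in range(n)]   # true scores, variance 9
e   = [random.gauss(0, 2) for _ in range(n)]   # errors, variance 4
x   = [t + err for t, err in zip(tau, e)]      # observed scores

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

# Reliability = proportion of observed-score variance due to true-score variance
rel = var(tau) / var(x)
print(round(rel, 2))   # close to 9 / (9 + 4) ≈ 0.69
```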

[Diagram: Var(x) partitioned into Var(τ) and Var(e); reliability is the Var(τ) share of Var(x)]

Reliability: parallel forms

x1 = τ + e1, x2 = τ + e2

ρ(x1, x2) = reliability = ρ_xx' = the correlation between parallel forms
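A simulation sketch (Python illustration; the variances 9 and 4 are arbitrary assumptions) shows that the correlation between two parallel forms recovers the reliability σ²_τ / σ²_x:

```python
import random

random.seed(2)
n = 100_000
tau = [random.gauss(0, 3) for _ in range(n)]   # shared true score, variance 9
x1 = [t + random.gauss(0, 2) for t in tau]     # form 1: independent error, variance 4
x2 = [t + random.gauss(0, 2) for t in tau]     # form 2: independent error, variance 4

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

# Correlation between the parallel forms should approach 9 / (9 + 4) ≈ 0.69
r = cov(x1, x2) / (cov(x1, x1) * cov(x2, x2)) ** 0.5
print(round(r, 2))
```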

[Path diagram: τ → x1 and τ → x2, each with loading λ_xτ and error e; ρ_xx' = λ_xτ · λ_xτ]


Reliability: Spearman-Brown

The reliability of the lengthened composite can be shown to be

ρ_kk' = k ρ_xx' / [1 + (k - 1) ρ_xx']

where k = the number of times the test is lengthened.

Example: a test score has reliability .7; doubling the length produces reliability 2(.7)/[1 + .7] = .824.
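The Spearman-Brown step-up can be written as a one-line function; this Python sketch reproduces the doubling example from the slide:

```python
def spearman_brown(rel: float, k: float) -> float:
    """Reliability of a test lengthened k times, given its current reliability."""
    return (k * rel) / (1 + (k - 1) * rel)

# Doubling a test with reliability .7:
print(round(spearman_brown(0.7, 2), 3))   # 0.824
```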

Reliability: parallel forms

For 3 or more items x_i, the same general form holds: the reliability of any pair is the correlation between them. The reliability of the composite (the sum of the items) is based on the average inter-item correlation: the stepped-up reliability given by the Spearman-Brown formula.

RELIABILITY: APPROACHES

Generalizability theory: g-coefficients, d-coefficients (estimated via ANOVA)
Test-retest
Parallel forms
Internal consistency: Cronbach's alpha, Hoyt's ANOVA method, split-half (Spearman-Brown), KR-20 and KR-21 (dichotomous scoring), average inter-item correlation
Inter-rater

COMPOSITES AND FACTOR STRUCTURE

3 MANIFEST VARIABLES ARE REQUIRED FOR UNIQUE IDENTIFICATION OF A SINGLE FACTOR

PARALLEL FORMS REQUIRE:
– EQUAL FACTOR LOADINGS
– EQUAL ERROR VARIANCES
– INDEPENDENCE OF ERRORS

[Path diagram: τ → x1, x2, x3, each with loading λ_xτ and error e; ρ_xx' = λ_xiτ · λ_xjτ]

RELIABILITY FROM SEM

THE TRUE-SCORE VARIANCE OF THE COMPOSITE IS OBTAINABLE FROM THE LOADINGS:

Σ_{i=1}^{K} λ²_i = K λ²_xτ (under parallelism)

K = # items or subtests

Hancock's Formula

H = 1 / [1 + 1 / (Σ l²_i / (1 - l²_i))]

Example: l1 = .7, l2 = .8, l3 = .6

H = 1 / [1 + 1/(.49/.51 + .64/.36 + .36/.64)]
  = 1 / [1 + 1/3.30]
  = .77
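The formula translates directly into code; this Python sketch reproduces the three-loading example:

```python
def coefficient_h(loadings):
    """Hancock's coefficient H from standardized factor loadings."""
    s = sum(l ** 2 / (1 - l ** 2) for l in loadings)
    return 1 / (1 + 1 / s)

# Loadings .7, .8, .6 as in the slide's example:
print(round(coefficient_h([0.7, 0.8, 0.6]), 2))   # 0.77
```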

Hancock's Formula Explained

H = 1 / [1 + 1 / (Σ l²_i / (1 - l²_i))]

Now assume strict parallelism: then l²_i = ρ²_xτ for every item, and thus

H = 1 / [1 + 1 / (k ρ²_xτ / (1 - ρ²_xτ))]
  = k ρ²_xτ / [1 + (k - 1) ρ²_xτ]
  = the Spearman-Brown formula
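The algebraic reduction can be verified numerically: with all loadings equal, coefficient H matches Spearman-Brown applied to the squared loading. A Python check (k = 5 and l = .7 are arbitrary choices):

```python
def coefficient_h(loadings):
    """Hancock's coefficient H from standardized factor loadings."""
    s = sum(l ** 2 / (1 - l ** 2) for l in loadings)
    return 1 / (1 + 1 / s)

def spearman_brown(rel, k):
    """Reliability of a test lengthened k times."""
    return (k * rel) / (1 + (k - 1) * rel)

k = 5
l = 0.7                                # equal loading for every item
h = coefficient_h([l] * k)             # H under strict parallelism
sb = spearman_brown(l ** 2, k)         # Spearman-Brown on the item reliability l^2
print(abs(h - sb) < 1e-9)              # the two coincide
```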

RELIABILITY FROM SEM

THE RELIABILITY OF THE COMPOSITE IS ALSO OBTAINABLE FROM THE LOADINGS. Under parallelism:

α = [K/(K-1)] [1 - 1/(K ρ²_xτ)]

Example: ρ²_xτ = .8, K = 11
α = (11/10)[1 - 1/8.8] = .975
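A quick Python check of the worked example (assuming the parallel-items form of the formula given above):

```python
def alpha_from_parallel(rel_item: float, k: int) -> float:
    """Alpha for K parallel items, each with reliability rel_item = rho^2_xt."""
    return k / (k - 1) * (1 - 1 / (k * rel_item))

# K = 11 items, each with rho^2_xt = .8:
print(round(alpha_from_parallel(0.8, 11), 3))   # 0.975
```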

SEM MODELING OF PARALLEL FORMS

PROC CALIS COV CORR MOD;
  LINEQS
    X1 = L1 F1 + E1,
    X2 = L1 F1 + E2,
    ...
    X10 = L1 F1 + E10;
  STD
    E1-E10 = THE1,
    F1 = 1.0;

(The shared loading parameter L1 and shared error-variance parameter THE1 impose parallelism.)

TAU EQUIVALENCE

ITEM TRUE SCORES DIFFER BY A CONSTANT: τ_i = τ_j + c_ij

ERROR STRUCTURE UNCHANGED: EQUAL VARIANCES, INDEPENDENCE

TESTING TAU EQUIVALENCE

ANOVA: TREAT AS A REPEATED-MEASURES SUBJECT × ITEM DESIGN:

PROC VARCOMP;
  CLASS ID ITEM;
  MODEL SCORE = ID ITEM;

A LOW ITEM VARIANCE ESTIMATE CAN BE TAKEN AS EVIDENCE FOR PARALLELISM (IT IS UNLIKELY TO BE EXACTLY ZERO).

CONGENERIC MODEL LESS RESTRICTIVE THAN PARALLEL FORMS OR TAU EQUIVALENCE: –LOADINGS MAY DIFFER –ERROR VARIANCES MAY DIFFER MOST COMPLEX COMPOSITES ARE CONGENERIC: –WAIS, WISC-III, K-ABC, MMPI, etc.

[Path diagram: congeneric model: τ → x1, x2, x3 with distinct loadings λ_x1, λ_x2, λ_x3 and errors e1, e2, e3; ρ(x1, x2) = λ_x1τ · λ_x2τ]

COEFFICIENT ALPHA

ρ_xx' = 1 - σ²_E / σ²_X = 1 - [Σ σ²_i (1 - ρ_ii)] / σ²_X, since errors are uncorrelated

α = [K/(K-1)] [1 - (Σ s²_i) / s²_X]

where X = Σ x_i (the composite score), s²_i = the variance of subtest x_i, and s²_X = the variance of the composite. Alpha does not assume knowledge of the subtest reliabilities ρ_ii.
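The sample formula for alpha can be sketched in Python; the three five-person subtests below are made-up numbers for illustration only:

```python
def cronbach_alpha(items):
    """Coefficient alpha; items is a list of K lists, one per subtest."""
    k = len(items)

    def var(v):
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / len(v)

    total = [sum(scores) for scores in zip(*items)]      # composite X = sum of x_i
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(total))

# Hypothetical data: 3 subtests, 5 examinees
items = [[2, 4, 3, 5, 4],
         [3, 5, 3, 4, 5],
         [2, 5, 4, 5, 4]]
print(round(cronbach_alpha(items), 2))   # 0.89
```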

COEFFICIENT ALPHA: NUNNALLY'S COEFFICIENT

IF WE KNOW THE RELIABILITY OF EACH SUBTEST, r_ii:

α_N = [K/(K-1)] [1 - Σ s²_i (1 - r_ii) / s²_X]

where r_ii = the coefficient alpha of each subtest. Willson (1996) showed α ≤ α_N.

SEM MODELING OF CONGENERIC FORMS: MPLUS EXAMPLE

TITLE:    this is an example of a CFA
DATA:     FILE IS ex5.1.dat;
VARIABLE: NAMES ARE y1-y6;
MODEL:    f1 BY y1-y3;
          f2 BY y4-y6;
OUTPUT:   SAMPSTAT MOD STAND;

NUNNALLY'S RELIABILITY CASE

[Path diagram: τ → x1, x2, x3 with loadings λ_x1, λ_x2, λ_x3; each x_i also receives a specificity s_i and error e_i; ρ_XiXi = λ²_xiτ + s²_i]

CORRELATED ERROR PROBLEMS

Specificities can be misinterpreted as a correlated error model if they are correlated or form a second factor.

[Path diagram: τ → x1, x2, x3 with correlated specificities among the indicators]


SEM MODELING OF CONGENERIC FORMS: CORRELATED ERRORS (MPLUS)

TITLE:    this is an example of a CFA
DATA:     FILE IS ex5.1.dat;
VARIABLE: NAMES ARE y1-y6;
MODEL:    f1 BY y1-y3;
          f2 BY y4-y6;
          y4 WITH y5;  ! correlates the residuals of the previous model
OUTPUT:   SAMPSTAT MOD STAND;

MULTIFACTOR STRUCTURE

Measurement model: does it hold for each factor? (parallel vs. tau-equivalent vs. congeneric)
How are the factors related?
What does reliability mean in the context of multifactor structure?

SIMPLE STRUCTURE PSYCHOLOGICAL CONCEPT: Maximize loading of a manifest variable on one factor ( IDEAL = 1.0 ) Minimize loadings of the manifest variables on all other factors ( IDEAL = 0 )

SIMPLE STRUCTURE: Example

SUBTEST  FACTOR1  FACTOR2  FACTOR3
A        1        0        0
B        1        0        0
C        0        1        0
D        0        1        0
E        0        0        1
F        0        0        1

MULTIFACTOR ANALYSIS

Exploratory: determine the number and composition of factors from empirically sampled data
– # factors ≈ # eigenvalues > 1.0 (using the squared multiple correlation of each item/subtest i with the rest as a variance estimate for λ²_xiτ)
– empirical loadings determine the structure

MULTIFACTOR ANALYSIS: MPLUS EFA EXAMPLE

TITLE:    this is an example of an exploratory factor analysis with continuous factor indicators
DATA:     FILE IS ex4.1.dat;
VARIABLE: NAMES ARE y1-y12;
ANALYSIS: TYPE = EFA 1 4;

MULTIFACTOR MODEL WITH THEORETICAL PARAMETERS: MPLUS EXAMPLE

TITLE:    this is an example of a CFA
DATA:     FILE IS ex5.1.dat;
VARIABLE: NAMES ARE y1-y6;
MODEL:    f1 BY ... ;
          f2 BY ... ;
          f1 WITH f2;
OUTPUT:   SAMPSTAT MOD STAND;

MINIMAL CORRELATED FACTOR STRUCTURE

[Path diagram: ξ1 → x1 (λ_x1ξ1) and ξ1 → x3 (λ_x3ξ1); ξ2 → x2 (λ_x2ξ2) and ξ2 → x4 (λ_x4ξ2); factor correlation φ12; errors e1-e4]

FACTOR RELIABILITY

Reliability for Factor 1:
ρ = 2(λ_x1ξ1 · λ_x3ξ1) / (1 + λ_x1ξ1 · λ_x3ξ1)
(Spearman-Brown for Factor 1 reliability, based on the inter-item correlation implied by the loadings)

Reliability for Factor 2:
ρ = 2(λ_x2ξ2 · λ_x4ξ2) / (1 + λ_x2ξ2 · λ_x4ξ2)
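For a two-indicator factor, the implied inter-item correlation is the product of the loadings, stepped up by Spearman-Brown. A Python sketch (the loadings .7 and .6 are arbitrary assumptions):

```python
def two_indicator_reliability(l_a: float, l_b: float) -> float:
    """Spearman-Brown stepped up from the loading-implied correlation l_a * l_b."""
    r = l_a * l_b
    return 2 * r / (1 + r)

# Hypothetical loadings for the two indicators of one factor:
print(round(two_indicator_reliability(0.7, 0.6), 3))   # 0.592
```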

FACTOR RELIABILITY

Generalizes to any number of factors: reliability is simply the measurement-model reliability of the scores for that factor. This has not been well discussed in the literature:
– the problem is that exploratory analyses produce successively smaller eigenvalues for later factors due to the extraction process
– the second factor will in general be less reliable when loadings are used to estimate the inter-item correlations

FACTOR RELIABILITY

Theoretically, each factor's reliability should be independent of any other's, regardless of the covariance between factors. Thus the order of factor extraction should be independent of factor structure and reliability, since extraction produces maximum sample eigenvalues (and sample loadings). "Composite" is a misnomer in testing if the factors are treated as independent constructs rather than as subtests of a more global composite score (separate scores rather than one score created by summing subscale scores).

CONSTRAINED FACTOR MODELS

If reliabilities for the scales are known independent of the current data (estimated from the items comprising the scales, for example), the error variances can be constrained:

s²_ei = s²_i (1 - ρ_i)
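Computing the fixed residual variance from a known reliability is a one-liner; this Python sketch uses hypothetical values (observed SD 10, reliability .84):

```python
def fixed_error_variance(sd_x: float, rel: float) -> float:
    """Residual variance implied by a known reliability: s^2 * (1 - rho)."""
    return sd_x ** 2 * (1 - rel)

# Hypothetical scale: observed SD = 10, known reliability = .84
print(fixed_error_variance(10.0, 0.84))   # 16.0
```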

CONSTRAINED SEM: KNOWN RELIABILITY

[Path diagram: τ → x1, x2, x3 with loadings λ_x1τ, λ_x2τ, λ_x3τ; each error path fixed at s_xi (1 - ρ_i)^1/2]

CONSTRAINED SEM: KNOWN RELIABILITY (MPLUS)

TITLE:    this is an example of a CFA with known error unreliabilities
DATA:     FILE IS ex5.1.dat;
VARIABLE: NAMES ARE y1-y6;
MODEL:    f1 BY y1-y3;
          f2 BY y4-y6;
          ! plus a similar statement for each item, fixing its residual variance
OUTPUT:   SAMPSTAT MOD STAND;

SEM Measurement Procedures

1. Evaluate the theoretical measurement model for ALL factors (do not include single-indicator variables).
Demonstrate discriminant validity by showing that the factors are separate constructs.
Revise the factors as needed: drop some manifest variables if necessary and not theoretically damaging.
Ref: Anderson & Gerbing (1988)