Measuring Research Variables

Presentation on theme: "Measuring Research Variables"— Presentation transcript:

1 Measuring Research Variables
Chapter 11: Measuring Research Variables

2 Chapter Outline
- Validity
- Reliability
- Methods of establishing reliability
- Intertester reliability (objectivity)
- Standard error of measurement
- Using standard scores to compare performance
(continued)

3 Chapter Outline (continued)
- Measuring movement
- Measuring written responses
- Measuring affective behavior
- Scales for measuring affective behavior
- Measuring knowledge
- Item response theory

4 Four Basic Types of Measurement Validity
The American Educational Research Association and the American Psychological Association agree on the definitions of four types of validity:
- Logical validity
- Content validity
- Criterion validity
  - Concurrent
  - Predictive
- Construct validity

5 Desired Qualities in a Criterion Measure
- Relevance (e.g., the extent to which the criterion exemplifies success)
- Freedom from bias (everyone must have the same chance to achieve a good score)
- Reliability of the criterion (you can't predict a criterion if you can't measure it)
- Availability (how hard is it to obtain the criterion score?)

6 Measurement Reliability
Overview
- Definition: Observed score = True score + Error score
- Sources of measurement error
- Expressing reliability through correlation
  - Interclass
  - Intraclass

7 Estimating Reliability
- Interclass: simple correlation (and its weaknesses)
- Intraclass: ANOVA with repeated measures
  - Treating trial-to-trial variation as measurement error
  - Discarding trials
  - Ignoring trial-to-trial variation
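The interclass approach mentioned on this slide is the Pearson product-moment correlation between two trials. A minimal pure-Python sketch (the function name and trial scores are invented for illustration) also demonstrates one of its weaknesses: it is blind to systematic change between trials.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Every subject improved by exactly 1 point on trial 2, yet r is still 1:
# the interclass coefficient cannot see systematic trial-to-trial change.
print(round(pearson_r([10, 12, 14, 16], [11, 13, 15, 17]), 6))  # 1.0
```

It also handles only two trials at a time, which is why the intraclass (ANOVA) approach below is preferred when there are multiple trials.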

8 Trial-to-Trial Variation
- Can be treated as measurement error: R = (MS_S - MS_E) / MS_S
- Can be addressed by discarding trials, then applying the formula above
- Can be ignored: R = (MS_S - MS_res) / MS_S
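The two intraclass formulas above can be computed from a simple repeated-measures ANOVA decomposition. This is an illustrative pure-Python sketch, not the textbook's own code; the function name and the sample scores are invented.

```python
# Intraclass reliability from a subjects-by-trials table of scores.
# MS_S: between-subjects mean square; MS_E: within-subject (error) mean
# square; MS_res: residual mean square after removing the trial effect.

def intraclass_r(scores):
    """Return (R treating trial variation as error, R ignoring it)."""
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of trials
    grand = sum(sum(row) for row in scores)
    c = grand ** 2 / (n * k)                        # correction term
    total_ss = sum(x * x for row in scores for x in row) - c
    subj_ss = sum(sum(row) ** 2 for row in scores) / k - c
    trial_totals = [sum(row[j] for row in scores) for j in range(k)]
    trial_ss = sum(t * t for t in trial_totals) / n - c
    res_ss = total_ss - subj_ss - trial_ss
    ms_s = subj_ss / (n - 1)
    ms_e = (trial_ss + res_ss) / (n * (k - 1))
    ms_res = res_ss / ((n - 1) * (k - 1))
    return (ms_s - ms_e) / ms_s, (ms_s - ms_res) / ms_s

# Perfectly consistent trials give R = 1 under both formulas.
print(intraclass_r([[1, 1], [2, 2], [3, 3]]))  # (1.0, 1.0)
```

Because MS_res excludes systematic trial-to-trial change, the "ignore" form generally yields the more liberal estimate.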

9 Methods of Establishing Reliability
- Determining stability: test, retest, then use the intraclass method
- Constructing alternate forms
- Obtaining internal consistency
  - Same-day test-retest
  - Split-half technique
  - Kuder-Richardson (KR-20 and KR-21)
  - Coefficient alpha
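Coefficient alpha from the internal-consistency list above follows directly from its definition: alpha = k/(k-1) × (1 − Σ item variances / total-score variance). A minimal sketch with invented data (the function names are for illustration only):

```python
# Coefficient alpha (Cronbach's alpha) for a persons-by-items score matrix.
# With dichotomous (0/1) items this is numerically equivalent to KR-20.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)   # population variance

def coefficient_alpha(items):
    """items: one row per examinee, one column per test item."""
    k = len(items[0])                                 # number of items
    item_var = sum(variance(col) for col in zip(*items))
    total_var = variance([sum(row) for row in items])  # variance of totals
    return k / (k - 1) * (1 - item_var / total_var)

# Items that always agree yield perfect internal consistency.
print(coefficient_alpha([[0, 0], [1, 1], [1, 1], [0, 0]]))  # 1.0
```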

10 Intertester Reliability (Objectivity)
Agreement among testers or raters

11 Other Measurement Issues
- Standard error of measurement: SEM = s√(1 − r)
- Standard scores
  - z scores: z = (X − M) / s
  - T scale: T = 50 + 10z
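The three formulas on this slide translate directly into code. A minimal sketch with invented example values (a test with mean 70, standard deviation 5, and reliability .91):

```python
import math

def z_score(x, mean, sd):
    """z = (X - M) / s"""
    return (x - mean) / sd

def t_scale(z):
    """T scale: transformed standard score with mean 50 and SD 10."""
    return 50 + 10 * z

def sem(sd, reliability):
    """Standard error of measurement: SEM = s * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

print(z_score(75, 70, 5))   # 1.0
print(t_scale(1.0))         # 60.0
print(sem(5, 0.91))         # ~1.5
```

The T scale removes the negatives and decimals of z scores, which makes scores easier to report and compare across tests.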

12 Measuring Various Types of Characteristics
- Movement
- Affective behavior
  - Attitudes
  - Personality
- Scales for affective behavior
  - Likert-type
  - Semantic differential

13 Rating Scales
Types of rating scales
- Numerical
- Checklist
- Forced choice
- Rankings
Rating errors
- Leniency
- Central tendency
- Halo
- Proximity errors
- Observer bias
- Observer expectation

14 Measuring Knowledge
Analyzing test items
- Item difficulty = (# correct) / total
- Item discrimination = (nH − nL) / n
Types of knowledge test items
- Multiple choice
- True/false
- Completion
- Matching
- Essay
Item response theory (IRT)
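The two item-analysis formulas above can be sketched in a few lines. This is an illustrative implementation with invented response data; nH and nL are the numbers of correct answers in the top- and bottom-scoring groups, each of size n.

```python
# Item-analysis indices from the difficulty and discrimination formulas.

def item_difficulty(responses):
    """Proportion of examinees answering correctly (1 = correct, 0 = wrong)."""
    return sum(responses) / len(responses)

def item_discrimination(n_high, n_low, group_size):
    """(nH - nL) / n: values near 0 (or negative) flag a poor item."""
    return (n_high - n_low) / group_size

print(item_difficulty([1, 1, 1, 0, 0]))   # 0.6
print(item_discrimination(9, 3, 10))      # 0.6
```

A difficulty near .5 with a clearly positive discrimination index is generally what a well-functioning item looks like.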

