1
Reliability, the Properties of Random Errors, and Composite Scores
Week 7, Psych 350 - R. Chris Fraley
http://www.yourpersonality.net/psych350/fall2015/
2
Reliability
Reliability: the extent to which measurements are free of random errors.
Random error: nonsystematic mistakes in measurement, such as:
– misreading a questionnaire item
– an observer looking away when coding behavior
– a response scale not quite fitting
3
Reliability What are the implications of random measurement errors for the quality of our measurements?
4
Psychometric Reliability
O = T + E + S
O = a measured score (e.g., performance on an exam)
T = true score (the value we want)
E = random error
S = systematic error
O = T + E (we’ll ignore S for now, but we’ll return to it later)
5
Reliability
O = T + E
The error becomes a part of what we’re measuring. This is a problem if we’re using a single measurement of a variable, because part of our measurement is based on the true value that we want and part is based on error. Once we’ve taken a measurement, we have an equation with two unknowns; we can’t separate the relative contributions of T and E:
10 = T + E
6
Reliability: Do random errors accumulate?
Question: If we aggregate or average multiple observations, will random errors accumulate?
7
Reliability: Do random errors accumulate?
Answer: No. If E is truly random, we are just as likely to overestimate T as we are to underestimate T. Consider the following height example.
8
[Figure: height example — repeated height measurements (axis runs from 5’2” to 6’9”, i.e., 62 to 81 inches) scatter randomly above and below the true height T]
9
Reliability: Do random errors accumulate?
Note: The average of the seven O’s is equal to T.
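To make the point concrete, here is a minimal Python sketch under made-up assumptions (a hypothetical true height T of 68 inches and normally distributed errors with SD 2): individual observations O = T + E miss T in both directions, while their average tends to land close to T.

```python
import random

random.seed(350)

T = 68.0  # hypothetical true height in inches (the value we want)

# Seven observations, each O = T + E, where E is random error centered on zero
observations = [T + random.gauss(0, 2) for _ in range(7)]

print([round(o, 1) for o in observations])              # individual O's over- and under-shoot T
print(round(sum(observations) / len(observations), 2))  # the average; random errors tend to cancel, pulling it toward T
```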
10
Composite scores
These demonstrations suggest that one important way to help eliminate the influence of random errors is to aggregate or average multiple measurements of the same construct: composite scores. For example:
– use multiple questionnaire items in surveys of an attitude, behavior, or trait
– use more than one observer when coding behavior
– use observer- and self-reports when possible
11
Example: Self-esteem survey items
1. I feel that I'm a person of worth, at least on an equal plane with others.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
2. I feel that I have a number of good qualities.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
4. I am able to do things as well as most other people.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
12
Example: Self-esteem survey items
1. I feel that I'm a person of worth, at least on an equal plane with others.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
2. I feel that I have a number of good qualities.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
4. I am able to do things as well as most other people.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
Composite self-esteem score = (4 + 5 + 3)/3 = 4
13
Two things to note about aggregation
Reverse-keyed items: Some measurements are keyed in the direction opposite to the construct of interest, so high values represent low levels of the trait of interest.
14
Example: Self-esteem survey items
1. I feel that I'm a person of worth, at least on an equal plane with others.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
2. I feel that I have a number of good qualities.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
3. All in all, I am inclined to feel that I am a failure.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
4. I am able to do things as well as most other people.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
5. I feel I do not have much to be proud of.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
Inappropriate composite self-esteem score = (5 + 5 + 1 + 4 + 1)/5 = 3.2
15
Reverse keying: Transform the measures such that high scores become low scores and vice versa.
Example: Self-esteem survey items
1. I feel that I'm a person of worth, at least on an equal plane with others.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
2. I feel that I have a number of good qualities.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
3. All in all, I am inclined to feel that I am a failure.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
4. I am able to do things as well as most other people.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
5. I feel I do not have much to be proud of.
   Strongly Disagree 1 2 3 4 5 Strongly Agree
Appropriate composite self-esteem score (after reverse keying items 3 and 5) = (5 + 5 + 5 + 4 + 5)/5 = 4.8
16
You don’t want to do this by hand when you have data on multiple people. A simple algorithm for reverse keying a variable X in SPSS or Excel:
New X = Max + Min - X
where Max is the highest possible value (5 on the self-esteem scale) and Min is the lowest possible value (1 on the self-esteem scale).
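The same recipe carries over to any scripting environment. Here is a minimal sketch in Python rather than SPSS or Excel; the reverse_key function is just a name chosen for illustration, and the responses are the made-up values from the example above.

```python
def reverse_key(x, min_val=1, max_val=5):
    """Reverse-key a response: New X = Max + Min - X, so high scores become low scores."""
    return max_val + min_val - x

# Responses to the five self-esteem items from the example; items 3 and 5 are reverse keyed.
responses = [5, 5, 1, 4, 1]
reverse_keyed_items = [2, 4]            # zero-based positions of items 3 and 5

recoded = [reverse_key(r) if i in reverse_keyed_items else r
           for i, r in enumerate(responses)]

print(recoded)                          # [5, 5, 5, 4, 5]
print(sum(recoded) / len(recoded))      # composite = 4.8
```

The same formula works for any response scale as long as Max and Min are set to the scale's endpoints.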
17
Example: stress
Person   Heart rate   Complaints   Average/composite
A        80           2            41
B        80           3            42
C        120          2            61
D        120          3            62
Cautions: Two potential problems with aggregation
18
Example: stress
Person   Heart rate   Complaints   Average/composite
A        80           2            41
B        80           3            42
C        120          2            61
D        120          3            62
Cautions: Two potential problems with aggregation.
The first problem is that the metric for the composite doesn’t make much sense. Person A: (2 complaints + 80 beats per minute)/2 = 41 complaints/beats per minute???
19
Two things to note about aggregation
Second, the variables may have different variances. If this is true, then some indicators will “count” more in the average than others.
20
Example: stress
Person   Heart rate   Complaints   Average/composite
A        80           2            41
B        80           3            42
C        120          2            61
D        120          3            62
(Heart rate is in beats per minute; complaints are a simple count.)
The correlation between the composite and heart rate is .99. The correlation between the composite and complaints is .05.
21
Two things to note about aggregation
One common solution to these problems is to standardize the variables before aggregating them, so that every variable has the same (constant) mean and variance.
22
Standardization helps solve the problem that variables with a large range/variance will influence the composite score more than variables with a small range.
Person   Heart rate (z)   Complaints (z)   Average
A        -.87             -.87             -.87
B        -.87              .87               0
C         .87             -.87               0
D         .87              .87              .87
The correlation between the composite and HR is .71. The correlation between the composite and Complaints is .71.
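A minimal Python sketch of the standardize-then-average approach, using the heart rate and complaint values from the stress example. It assumes the z-scores in the table were computed with the sample standard deviation (dividing by n - 1); statistics.correlation requires Python 3.10 or newer.

```python
import statistics

heart_rate = [80, 80, 120, 120]
complaints = [2, 3, 2, 3]

def z_scores(xs):
    """Standardize to mean 0 and standard deviation 1 (stdev uses n - 1)."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

hr_z = z_scores(heart_rate)        # about [-0.87, -0.87, 0.87, 0.87]
comp_z = z_scores(complaints)      # about [-0.87, 0.87, -0.87, 0.87]

# Average the standardized variables to form the composite.
composite = [(h + c) / 2 for h, c in zip(hr_z, comp_z)]
print([round(x, 2) for x in composite])                 # [-0.87, 0.0, 0.0, 0.87]

print(round(statistics.correlation(composite, heart_rate), 2))   # about .71
print(round(statistics.correlation(composite, complaints), 2))   # about .71
```

After standardization, neither variable dominates the composite: both now correlate equally with it.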
23
Reliability: Estimating reliability
Question: How can we quantify the reliability of our measurements?
Answer: There are two common ways: (a) test-retest reliability and (b) internal consistency reliability.
24
Reliability: Estimating reliability
Test-retest reliability: reliability assessed by measuring something at least twice, at different time points, and correlating the two measurements (the test-retest correlation).
The logic is as follows: if the errors of measurement are truly random, then the same errors are unlikely to be made more than once. Thus, to the degree that two measurements of the same thing agree, it is unlikely that those measurements contain random error.
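A small simulation can illustrate this logic. The sketch below (all values hypothetical) gives each person a stable true score, adds fresh random error at each of two time points, and shows that the correlation between the two waves approximately recovers the reliability, here var(T)/(var(T) + var(E)) = 1/(1 + 0.25) = .80. It again assumes Python 3.10+ for statistics.correlation.

```python
import random
import statistics

random.seed(350)

n = 1000
true_scores = [random.gauss(0, 1) for _ in range(n)]   # stable T for each person

def measure(ts, error_sd=0.5):
    """One wave of measurement: O = T + E, with fresh random error each wave."""
    return [t + random.gauss(0, error_sd) for t in ts]

time1 = measure(true_scores)
time2 = measure(true_scores)

# The same random errors are unlikely to recur, so cross-wave agreement reflects T, not E.
print(round(statistics.correlation(time1, time2), 2))   # about .80 = 1 / (1 + 0.5**2)
```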
25
[Figure: two example test-retest correlations, one high (r = .92) and one low (r = .27)]
26
Reliability: Estimating reliability
Internal consistency: reliability assessed by measuring something at least twice within the same broad slice of time.
Split-half: based on an arbitrary split of the items (e.g., odd vs. even items, or first half vs. second half), which are then correlated (the split-half correlation).
Cronbach’s alpha (α): based on the average of all possible split-half correlations.
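To make the two indices concrete, here is a sketch with made-up responses (five people answering five items of one scale). It computes a split-half correlation from odd vs. even items and Cronbach’s alpha using the standard formula alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores); statistics.correlation requires Python 3.10+.

```python
import statistics

# Hypothetical responses: rows are persons, columns are five items of the same scale.
scores = [
    [4, 5, 4, 5, 4],
    [2, 3, 2, 2, 3],
    [5, 4, 5, 4, 5],
    [3, 3, 2, 3, 3],
    [4, 4, 5, 4, 4],
]

items = list(zip(*scores))                   # transpose: one tuple of scores per item
totals = [sum(person) for person in scores]  # each person's total score

# Split-half: correlate the sum of the odd-numbered items with the sum of the even-numbered items.
odd = [sum(person[0::2]) for person in scores]
even = [sum(person[1::2]) for person in scores]
print(round(statistics.correlation(odd, even), 2))   # about .76 for these made-up data

# Cronbach's alpha from item variances and the variance of the total score.
k = len(items)
sum_item_vars = sum(statistics.variance(item) for item in items)
alpha = k / (k - 1) * (1 - sum_item_vars / statistics.variance(totals))
print(round(alpha, 2))                               # about .94 for these made-up data
```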
27
Relationship between reliability and number of measurements
[Figure: reliability of the composite plotted against the number of measurements for average inter-item correlations of r = .10, .25, and .50]
The reliability of the composite (α) increases as the number of measurements (k) increases. In fact, the reliability of the composite can get relatively high even if the items themselves do not correlate strongly.
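The original figure is not reproduced here, but curves like these are commonly generated with the Spearman-Brown (standardized alpha) formula, reliability = k * r / (1 + (k - 1) * r), where r is the average inter-item correlation. The sketch below assumes that formula.

```python
def composite_reliability(k, avg_r):
    """Spearman-Brown / standardized-alpha reliability of a composite of k items
    whose average inter-item correlation is avg_r."""
    return k * avg_r / (1 + (k - 1) * avg_r)

for avg_r in (0.10, 0.25, 0.50):
    for k in (1, 2, 5, 10, 20):
        rel = composite_reliability(k, avg_r)
        print(f"avg r = {avg_r:.2f}, k = {k:2d}: reliability = {rel:.2f}")
```

For example, with an average inter-item correlation of only .10, a 20-item composite already reaches a reliability of about .69.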
28
[Figure: the composite reliability curve for ave r = .10]
29
Reliability: Final notes
An important implication: as you increase the number of measures, the amount of random error in the averaged measurement decreases.
An important assumption: the entity being measured is not changing.
An important note: common indices of reliability range from 0 to 1 (the metric of correlation coefficients); higher numbers indicate better reliability (i.e., less random error).