
Presentation on theme: "Validity In our last class, we began to discuss some of the ways in which we can assess the quality of our measurements. We discussed the concept of reliability."— Presentation transcript:

1 Validity In our last class, we began to discuss some of the ways in which we can assess the quality of our measurements. We discussed the concept of reliability (i.e., the degree to which measurements are free of random error).

2 Why reliability alone is not enough Understanding the degree to which measurements are reliable, however, is not sufficient for evaluating their quality. In-class scale example
–Recall that test-retest estimates of reliability tend to range between 0 (low reliability) and 1 (high reliability)
–Note: The on-line correlation calculator is available at http://p034.psch.uic.edu/correlations.htm
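A test-retest reliability estimate is simply the correlation between two administrations of the same scale. A minimal sketch, using made-up scores for five hypothetical respondents (the numbers are illustrative, not from the class exercise):

```python
# Hypothetical test-retest data: the same scale administered twice
# to five people. Reliability is estimated as the Pearson correlation
# between the two sets of scores.
time1 = [12, 15, 9, 20, 14]
time2 = [13, 14, 10, 19, 15]

n = len(time1)
mean1 = sum(time1) / n
mean2 = sum(time2) / n

cov = sum((x - mean1) * (y - mean2) for x, y in zip(time1, time2))
var1 = sum((x - mean1) ** 2 for x in time1)
var2 = sum((y - mean2) ** 2 for y in time2)

r = cov / (var1 * var2) ** 0.5
print(round(r, 2))  # 0.98 -- near 1, so high test-retest reliability
```

A correlation near 1 indicates that people keep roughly the same rank order across administrations, which is what the test-retest method treats as evidence of reliability.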

3 Validity In this example, the measurements appear reliable, but there is a problem... Validity reflects the degree to which measurements are free of both random error, E, and systematic error, S. O = T + E + S Systematic errors reflect the influence of any non-random factor beyond what we’re attempting to measure.

4 Validity: Does systematic error accumulate? Question: If we sum or average multiple observations (i.e., using a multiple indicators approach), how will systematic errors influence our estimates of the “true” score?

5 Validity: Does error accumulate? Answer: Unlike random errors, systematic errors accumulate. Systematic errors exert a constant source of influence on measurements. We will always overestimate (or underestimate) T if systematic error is present.

6 Note: Each measurement is 2 points higher than the true value of 10. The errors do not average out.

7 Note: Even when random error is present, E averages to 0 but S does not. Thus, we can have reliable measures that nonetheless have validity problems.
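The point of slides 6 and 7 can be checked with a short simulation of the O = T + E + S model. The true score and bias below are illustrative values chosen to match the slide (T = 10, S = +2):

```python
import random

random.seed(1)

T = 10          # true score (illustrative value from the slide)
S = 2           # constant systematic error (bias)
N = 10000       # number of repeated measurements

# Each observation O = T + E + S, with random error E ~ Normal(0, 1)
scores = [T + random.gauss(0, 1) + S for _ in range(N)]

mean_O = sum(scores) / N              # average observed score
mean_E = mean_O - T - S               # average random error

print(round(mean_E, 1))   # ~ 0: random error averages out
print(round(mean_O, 1))   # ~ 12 = T + S: the systematic error remains
```

Averaging drives E toward zero but leaves the constant S fully intact, which is exactly why a measure can look reliable while still being biased.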

8 Validity: Ensuring validity What can we do to minimize the impact of systematic errors? One way to minimize their impact is to use a variety of indicators
–Different kinds of indicators of a latent variable may not share the same systematic errors
–If so, then S will behave like random error across measurements (but not within measurements)

9 Example As an example, let’s consider the measurement of self-esteem. –Some methods, such as self-report questionnaires, may lead people to over-estimate their self-esteem. Most people want to think highly of themselves. –Other methods, such as clinical ratings by trained observers, may lead to under-estimates of self-esteem. Clinicians, for example, may be prone to assume that people are not as well-off as they say they are.

10 Note: Method 1 (self-reports) systematically overestimates T, whereas Method 2 (clinical ratings) systematically underestimates T. In combination, however, those systematic errors cancel out. (Figure: clinical ratings vs. self-reports.)
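The cancellation across methods can be sketched with a simulation. The biases (+1 for self-reports, −1 for clinical ratings) and the true score are assumptions for illustration, not values from the lecture:

```python
import random

random.seed(2)

T = 4.0     # true self-esteem (illustrative)
N = 10000   # number of respondents per method

# Assume self-reports carry S = +1 and clinical ratings S = -1,
# each with its own random error E ~ Normal(0, 0.5).
self_reports     = [T + 1 + random.gauss(0, 0.5) for _ in range(N)]
clinical_ratings = [T - 1 + random.gauss(0, 0.5) for _ in range(N)]

mean_self = sum(self_reports) / N        # ~ 5: biased upward
mean_clin = sum(clinical_ratings) / N    # ~ 3: biased downward
combined  = (mean_self + mean_clin) / 2  # ~ 4: opposite biases cancel

print(round(mean_self, 1), round(mean_clin, 1), round(combined, 1))
```

Because the two methods' systematic errors point in opposite directions, S behaves like random error across methods and washes out of the combined estimate.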

11 Another example One problem with the use of self-report questionnaire rating scales is that some people tend to give high (or low) answers consistently (i.e., regardless of the question being asked).

12 In this example, we have someone with relatively high self-esteem, but this person systematically rates questions one point higher than he or she should. (1 = strongly disagree | 5 = strongly agree)

Item                                                      T    S    O
I think I am a worthwhile person.                         4   +1    5
I have high self-esteem.                                  4   +1    5
I am confident in my ability to meet challenges in life.  4   +1    5
My friends and family value me as a person.               4   +1    5
Average score:                                            4   +1    5

13 If we “reverse key” half of the items, the bias averages out. Responses to reverse keyed items are counted in the opposite direction. (1 = strongly disagree | 5 = strongly agree)

Item                                                          T    S    O
I think I am a worthwhile person.                             4   +1    5
I have high self-esteem.                                      4   +1    5
I am NOT confident in my ability to meet challenges in life.  2   +1    3
My friends and family DO NOT value me as a person.            2   +1    3
Average score:                                                4   +1    4

T: (4 + 4 + [6-2] + [6-2]) / 4 = 4
O: (5 + 5 + [6-3] + [6-3]) / 4 = 4

14 Validity To the extent that a measure has validity, we say that it measures what it is supposed to measure Question: How do you assess validity? ** Very tough question to answer! ** (But, we’ll give it a shot in our next class.)

