Slide 1: Reliability. Introduction to Communication Research, School of Communication Studies, James Madison University. Dr. Michael Smilowitz.
Slide 2: Reliability and Validity. Why bother? Remember the principle of isomorphism? "To what extent do the data yielded by the measurement schemes accurately represent the nature and structure of the phenomenon being measured" (Smith, 1988, p. 47).
Slide 3: Reliability and Validity. These concerns are also important to replicability: if a measuring device is not consistent, is subject to error, or does not truly measure the constructs under investigation, then subsequent research is not likely to yield similar results.
Slide 4: Reliability and Validity. The basics. Reliability: an evaluation of the internal consistency of the measuring device. Does the instrument measure the same way every time? Validity: an evaluation of the measure's relationship to external characteristics. Does the measuring device really measure what it claims to measure?
Slide 5: Reliability. What's the point? Repeated measurements by the same device should yield (roughly) the same set of responses. Why "roughly"? Because error is expected when measuring human behavior.
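One standard way to formalize this expectation, drawn from classical test theory (a framework the slide implies but does not name): each observed score X is assumed to be the sum of a true score T and random error E, so X = T + E, and a measure's reliability is the proportion of observed-score variance that is true-score variance, Var(T) / Var(X). Error-free measurement would make this ratio exactly 1; the more error, the lower the coefficient.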
Slide 6: Reliability. Sources of error:
1. Chance.
2. Subject fatigue.
3. Subject carelessness.
4. Fluctuations in memory.
5. Subject familiarity with the measuring device.
Slide 7: Reliability. Sources of error (continued):
6. Measurement conditions (room too hot, too close to lunch, noisy).
7. Confusing or vague language in the test.
8. Items not relevant to the subject's experience.
Slide 8: Reliability. Assessing reliability: various methods can be used, but all depend on statistically analyzing the results to determine the reliability coefficient, a correlation statistic that measures the degree of association, or coincidence, between two sets of measurements.
Slide 9: Reliability. Reliability coefficients, as correlation statistics, range in value from 0 (no correlation) to 1 (perfect correlation). Why bother? The value of the reliability coefficient tells you something about the usefulness of the measuring scheme the researchers used. Certain values, set by convention, are used to evaluate a measuring scheme's reliability.
Slide 10: Reliability. A really good measure has a coefficient of .9 or higher. A good measure is between .8 and .89. A fair measure is between .7 and .79. For communication journals, anything below .7 is a flop, though some journals will accept values above .6.
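Those conventions amount to a simple lookup. Here is a minimal sketch in code, purely a restatement of this slide's cutoffs (the function name and sample value are invented for illustration):

```python
# Hypothetical helper that applies the slide's conventional cutoffs
# to a reliability coefficient.
def judge_reliability(r: float) -> str:
    if r >= 0.9:
        return "really good"
    if r >= 0.8:
        return "good"
    if r >= 0.7:
        return "fair"
    return "a flop for most communication journals (some accept > .6)"

print(judge_reliability(0.84))  # prints: good
```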
Slide 11: Reliability. Test-retest (sometimes called the matching-pairs procedure): involves giving the measure twice to the same group of people. The reliability coefficient reports the extent to which the two sets of scores are consistent.
Slide 12: Reliability. Test-retest method: this method is relatively straightforward in its logic and relatively easy to employ. Disadvantages:
1. Time-consuming: two administrations are necessary.
2. Likely to "sensitize" people to the second administration, prompting them to try to remember their first responses and respond the same way again. This leads to an "inflated" estimate of reliability.
3. People's views may change between tests. To overcome the "recall" problem above, the second test is administered after some period of time; in the interim, the subjects may have changed. This leads to an "underestimate" of reliability.
The ideal interval for balancing problems 2 and 3 is a judgment call.
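The test-retest coefficient is simply the correlation between the two sets of scores. A minimal sketch, assuming Python 3.10+ and invented data (the alternate-forms method on the next slide is scored the same way, correlating form A totals with form B totals):

```python
# Test-retest sketch: correlate scores from two administrations of the
# same instrument to the same subjects. Scores are invented.
from statistics import correlation  # Pearson's r; Python 3.10+

first_admin = [12, 15, 9, 20, 14, 11, 18]    # scores at time 1
second_admin = [13, 14, 10, 19, 15, 11, 17]  # same subjects at time 2

r = correlation(first_admin, second_admin)
print(f"test-retest reliability coefficient: {r:.2f}")
```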
Slide 13: Reliability. Alternate forms: involves administering two parallel versions of the same instrument to the same group of subjects. The researcher produces a large number of items to measure the variable(s) of interest, then selects items to produce two nearly identical instruments. The advantage is in overcoming the "time" problem of the test-retest method. The disadvantages:
1. It is very difficult to produce two truly parallel versions of the same instrument.
2. Subjects may become aware of the similarity, causing overestimation of the instrument's reliability.
Slide 14: Reliability. Split-half approach (an internal consistency method): the instrument is administered to one group of subjects at a single time. After administration, the researcher divides the test into two parts and then checks the consistency between the two scores from each part. One technique is to compare the first half with the second half (subjects may experience fatigue during the second half, so this is not usually a good idea; odd-even is a better technique). Cronbach's alpha is the most common technique, thanks to the computer:
1. the computerized analysis randomly selects multiple pairs of subsets from the instrument;
2. correlates each pair of scores;
3. and then uses the composite correlation across all the paired subsets as an index of the total instrument's internal consistency.
(A computational sketch of both techniques follows below.)
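Here is a minimal sketch of both internal-consistency estimates, run on an invented matrix of item responses (rows are subjects, columns are items). Two assumptions not stated on the slide: the split-half correlation is stepped up with the Spearman-Brown correction, the standard adjustment for having halved the test's length, and Cronbach's alpha is computed with its usual closed-form formula rather than by literally sampling random subsets:

```python
# Split-half (odd-even) and Cronbach's alpha on one administration.
from statistics import correlation, pvariance  # Python 3.10+

scores = [            # invented: 5 subjects x 6 items
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 3, 3, 2],
    [4, 4, 5, 4, 4, 4],
]

# Split-half: correlate each subject's odd-item total with their
# even-item total, then apply the Spearman-Brown correction.
odd = [sum(row[0::2]) for row in scores]
even = [sum(row[1::2]) for row in scores]
r_half = correlation(odd, even)
split_half = (2 * r_half) / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
k = len(scores[0])
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
total_var = pvariance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(f"split-half (Spearman-Brown corrected): {split_half:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")
```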
Slide 15: Reliability. Item-to-total (another internal method): this procedure computes the reliability of any one particular item against the entire test. There should be a high correlation between any one item and the score on the entire test. It is most appropriate when scoring "correct" or "incorrect" answers. The K-R 20 (Kuder-Richardson formula 20) is a computerized statistical routine for this type of reliability.
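K-R 20 has a simple closed form: KR-20 = (k / (k - 1)) * (1 - sum(p * q) / total-score variance), where k is the number of items, p is the proportion of subjects answering an item correctly, and q = 1 - p. A minimal sketch with invented correct/incorrect data:

```python
# KR-20 sketch for dichotomous (correct = 1 / incorrect = 0) items.
from statistics import pvariance

answers = [           # invented: 5 subjects x 5 items
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
]

k = len(answers[0])
n = len(answers)

p = [sum(row[i] for row in answers) / n for i in range(k)]  # proportion correct
pq_sum = sum(pi * (1 - pi) for pi in p)
total_var = pvariance([sum(row) for row in answers])

kr20 = (k / (k - 1)) * (1 - pq_sum / total_var)
print(f"KR-20: {kr20:.2f}")
```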
Slide 16: Reliability. Intercoder reliability: often seen in communication research. Used to determine the consistency among multiple observers who are categorizing the same phenomenon. The Guetzkow estimate is often used for assessing this kind of reliability.
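The slides do not give the Guetzkow formula itself, so this sketch substitutes the simplest intercoder statistic, raw percent agreement between two coders assigning the same messages to categories (codings invented for illustration):

```python
# Percent agreement between two coders categorizing the same messages.
# This is a stand-in illustration, not the Guetzkow estimate.
coder_a = ["praise", "blame", "praise", "neutral", "blame", "praise"]
coder_b = ["praise", "blame", "neutral", "neutral", "blame", "praise"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"percent agreement: {agreements / len(coder_a):.0%}")  # 83%
```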