Content Analysis: Reliability
Kimberly A. Neuendorf, Ph.D.
Cleveland State University
Fall 2011
Reliability
Generally: the extent to which a measuring procedure yields the same results on repeated trials (Carmines & Zeller, 1979)
Types:
- Test-retest: Same people, different times. (Intracoder reliability is the content analysis analogue: the same coder codes the same content at two points in time.)
- Alternative-forms: Same people, same time, different measures.
- Internal consistency: Multiple measures, same construct.
- Inter-rater/Intercoder: Different people, same measures.
Index/Scale Construction
- Similar to index/scale construction in survey or experimental work
- e.g., the Bond film analysis: measures such as harm to a female character and sexual activity combined into an index
- Need to check internal consistency reliability (e.g., Cronbach's alpha); a sketch follows below
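As a minimal sketch of that check, assuming a small units-by-items matrix of coded scores (the data and function name here are illustrative, not from the Bond analysis itself):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_units x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed index
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# five coded units, three items intended to form one index (made-up scores)
scores = [[1, 2, 2], [3, 3, 4], [2, 2, 3], [4, 5, 4], [1, 1, 2]]
print(round(cronbach_alpha(scores), 3))  # ~0.95 here; >= .70 is a common rule of thumb
```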
Intercoder Reliability
- Defined: the level of agreement or correspondence on a measured variable among two or more coders
- What contributes to good reliability? Careful unitizing, careful codebook construction, and coder training (training, training!)
Reliability Subsamples
- Pilot and final reliability subsamples: needed because of coder drift, fatigue, and growing experience
- Selection of subsamples (a minimal sketch follows below):
  - Random, representative subsample
  - "Rich range" subsample: useful for "rare event" measures
- The reliability/variance relationship (low-variance, skewed variables distort chance-corrected coefficients; see Reliability Statistics - 6)
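One way to implement the selection, as a hedged sketch (the unit IDs, sample sizes, and rare-event list are all hypothetical):

```python
import random

random.seed(42)                    # reproducible subsample selection
all_units = list(range(1, 501))    # IDs for the 500 units in the full sample
rare_units = [7, 88, 203, 391]     # units flagged as containing a rare event

# random, representative subsample (~10% of the sample)
subsample = set(random.sample(all_units, k=50))
# "rich range" enrichment: force the rare-event units into the subsample
subsample.update(rare_units)
print(len(subsample), "units selected for reliability coding")
```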
Intercoder Reliability Statistics - 1
Types:
- Agreement
  - Percent agreement
  - Holsti's method
- Agreement beyond chance
  - Scott's pi
  - Cohen's kappa
  - Fleiss' multi-coder extension of kappa
  - Krippendorff's alpha(s)
- Covariation
  - Spearman rho
  - Pearson r
  - Lin's concordance correlation coefficient (r_c)
(A worked sketch of the agreement statistics follows below.)
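To make the agreement family concrete, here is a from-scratch sketch of percent agreement and two-coder Cohen's kappa on made-up nominal codes:

```python
from collections import Counter

def percent_agreement(c1, c2):
    """Share of units on which both coders assigned the same category."""
    return sum(a == b for a, b in zip(c1, c2)) / len(c1)

def cohens_kappa(c1, c2):
    """Two-coder agreement beyond chance for nominal data."""
    n = len(c1)
    p_o = percent_agreement(c1, c2)
    m1, m2 = Counter(c1), Counter(c2)
    # expected chance agreement from each coder's own marginal proportions
    p_e = sum((m1[cat] / n) * (m2[cat] / n) for cat in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

coder1 = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
coder2 = ["A", "B", "B", "B", "C", "A", "A", "C", "A", "B"]
print(percent_agreement(coder1, coder2))       # 0.8
print(round(cohens_kappa(coder1, coder2), 3))  # 0.688
```

Scott's pi differs only in computing p_e from the pooled marginals of both coders; Krippendorff's alpha generalizes further to multiple coders, missing data, and different levels of measurement.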
Reliability Statistics - 2
- See handouts on (a) bivariate correlation and (b) Pearson's and Lin's compared
Reliability Statistics - 3
- Core assumptions of the coefficients
- "More scholarship is needed": these coefficients have not been fully assessed!
Reliability Statistics - 4
My recommendations:
- Do NOT use percent agreement alone
- Nominal/ordinal: kappa (Cohen's, Fleiss')
- Interval/ratio: Lin's concordance (a sketch follows below)
- Calculate via PRAM
- Treat reliability analyses as diagnostics, e.g.:
  - Problematic variables, problematic coders ("rogues"?), variable/coder interactions
  - Confusion matrices (which categories tend to be confused with each other)
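For the interval/ratio recommendation, a minimal sketch of Lin's concordance correlation coefficient, using the biased (n-denominator) moments of Lin's (1989) formula; the paired scores are made up:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two coders' scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # biased (n-denominator) variances
    sxy = ((x - mx) * (y - my)).mean()   # biased covariance
    # unlike Pearson's r, the mean-difference term penalizes systematic
    # shifts between coders, not just lack of linear association
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

coder1 = [3, 4, 2, 5, 1, 4, 3, 2]
coder2 = [3, 4, 3, 5, 1, 5, 3, 2]
print(round(lins_ccc(coder1, coder2), 3))  # 0.923
```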
Reliability Statistics - 5
"Standards" for minimum values of reliability statistics:
- Percent agreement: 90%??
- Kappa (Cohen's, Fleiss'): .40 minimally, .60 OK, .80 good
- Pearson correlation, Lin's concordance: .70 (.70 squared is about .49, i.e., ~50% shared variance)???
Reliability Statistics - 6
- The problem of the "extreme" or "skewed" distribution
- You can have a percent agreement of .95 and a Cohen's kappa near zero, or even negative!!!
- Why? What to do? (A worked example follows below.)
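A toy construction of the problem, with illustrative numbers for a rare binary measure: when nearly all units fall in one category, expected chance agreement p_e approaches 1, and kappa collapses even though the coders almost always agree.

```python
from collections import Counter

# 100 units; both coders code "absent" (0) on 95, and disagree on all 5 others
coder1 = [0] * 95 + [1, 1, 1, 0, 0]
coder2 = [0] * 95 + [0, 0, 0, 1, 1]

n = len(coder1)
p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
m1, m2 = Counter(coder1), Counter(coder2)
p_e = sum((m1[cat] / n) * (m2[cat] / n) for cat in (0, 1))
kappa = (p_o - p_e) / (1 - p_e)
print(f"% agreement = {p_o:.2f}, p_e = {p_e:.3f}, kappa = {kappa:.3f}")
# % agreement = 0.95, p_e = 0.951, kappa = -0.025
```

What to do: report a chance-corrected coefficient alongside percent agreement, and consider a "rich range" reliability subsample (see Reliability Subsamples above) so the rare category appears often enough to be tested.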
PRAM: Program for Reliability Analysis with Multiple Coders
- Written by rocket scientists!
- Trial version available from Dr. N!