1. Coding and Intercoder Reliability
Su Li, School of Law, U.C. Berkeley
2/12/2015
2. Outline
- Basics of data coding
- What is intercoder reliability? Why does it matter?
- How to measure and report intercoder reliability?
- How to improve intercoder reliability?
- References
3. Data Coding Basics
- Start from a codebook.
- Provide exhaustive and mutually exclusive value options for each variable.
- Use multiple variables to code overlapping values or multiple values for one observation.
4. Example Codebook (white collar lawyer project)
c_graduate_year: Year graduated from law school (or year the highest degree was received, if not a law degree)
- 999 if the information is not available
- Note: type in the applicable year (YYYY)
c_practice_area: Practice area
- 1. White collar (includes white collar defense, white collar crime, white collar litigation, etc.)
- 2. Government or corporate investigations
- 3. White collar and government/corporate investigations (if the practice area is described this way)
- 4. Criminal defense (if the practice area is described this way)
- Note: choose one of the above 4 choices and type in the number. If the practice area has a different title, type in the title.
See var14-18 in the WC project codebook.
5. Input, label, and recode data in Stata
- Input data in Stata
- Label data in Stata
- Recode data in Stata
Example 1: Graduation year, JD, 1989 (a Stata sketch follows below).
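The slide lists the steps but not the commands; the following is a minimal Stata sketch of the input/label/recode workflow, reusing c_graduate_year and c_practice_area from the example codebook. The data values, the label name practlbl, and the derived variable c_grad_decade are hypothetical.

* input a few observations by hand (hypothetical values)
clear
input c_graduate_year c_practice_area
1989 1
1995 2
999  4
end

* label the variable and its values
label variable c_practice_area "Practice area"
label define practlbl 1 "White collar" 2 "Government/corporate investigations" 3 "White collar and investigations" 4 "Criminal defense"
label values c_practice_area practlbl

* recode: treat 999 as missing, then collapse graduation years into decades
replace c_graduate_year = . if c_graduate_year == 999
recode c_graduate_year (1980/1989 = 1980) (1990/1999 = 1990), gen(c_grad_decade)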
6. What is intercoder reliability?
Intercoder reliability is the widely used term for the extent to which independent coders evaluate a characteristic of a message or artifact and reach the same conclusion (also known as intercoder agreement, according to Tinsley and Weiss, 2000).
Intercoder reliability is not the same as a correlation coefficient, which measures the degree to which "ratings of different judges are the same when expressed as deviations from their means." Rather, it measures "the extent to which the different judges tend to assign exactly the same rating to each object" (Tinsley & Weiss, 2000, p. 98).
7. Why does it matter?
- Coding may involve coders' judgments, which vary among individuals.
- The quality of the research depends on the consistency of coding judgments.
- Monitoring intercoder reliability also helps control coding accuracy.
- Practically, it makes a division of labor among multiple coders possible.
8. Mathematical measures commonly reported for intercoder reliability
Popping (1988) identified 39 different "agreement indices" for coding nominal categories. Commonly used ones:
- Percent agreement: PA0 = (number of agreements) / n
- Scott's pi: pi = (PA0 - PAe) / (1 - PAe), where PAe = sum over categories of p_i^2 and p_i is the proportion of all codings, pooled across the two coders, that fall in category i
- Cohen's kappa: kappa = (PA0 - PAe) / (1 - PAe), where PAe = (1/n^2) * sum over categories of (coder 1's marginal total for that category * coder 2's marginal total for that category)
- Krippendorff's alpha (e.g., calculated with the Krippendorff's Alpha 3.12a software)
There is no consensus on a single "best" index.
- Percent agreement is widely used but misleading: it tends to overestimate reliability.
- Cohen's kappa has been criticized but is still the most frequently used.
Hand calculation and interpretation: kappa expresses how much greater the agreement is than would be expected by chance. If kappa equals 0, the amount of agreement between the two coders is exactly what one would expect by chance; if kappa equals 1, the coders agree perfectly.
9. Example: binary variable coded by two coders

             coder1 = 1    coder1 = 2    Row total
coder2 = 1   50 (94.34%)   3 (5.66%)     53 (89.83%)
coder2 = 2   4 (66.67%)    2 (33.33%)    6 (10.17%)
Total        54 (91.53%)   5 (8.47%)     59 (100%)

(Cell percentages are row percentages; the last column shows each row as a share of all 59 observations.)

Number of agreements = 50 + 2 = 52; n = 59; PA0 = 52/59.
PAe (Scott's pi) = ((53 + 54)/(2*59))^2 + ((6 + 5)/(2*59))^2, using the marginal proportions pooled across both coders.
PAe (Cohen's kappa) = (53*54 + 6*5)/(59*59), using the product of the two coders' marginal totals for each value.
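The commands behind this example are not shown on the slide; here is a minimal Stata sketch, assuming the cell counts above, that rebuilds the table and lets kap compute Cohen's kappa, followed by display lines that mirror the hand calculation.

* one row per cell of the 2 x 2 table, expanded to the 59 observations
clear
input coder1 coder2 ncell
1 1 50
2 1 3
1 2 4
2 2 2
end
expand ncell

* Cohen's kappa for the two coders
kap coder1 coder2

* hand calculation of the same quantities
display "PA0 = " (50 + 2)/59
display "PAe (Cohen) = " (53*54 + 6*5)/(59*59)
display "kappa = " ((50 + 2)/59 - (53*54 + 6*5)/(59*59)) / (1 - (53*54 + 6*5)/(59*59))

With these counts the coders agree on about 88 percent of the observations, but kappa comes out at only about 0.30, which illustrates why percent agreement alone tends to overstate reliability.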
10. Use SPSS to calculate Cohen's kappa
CROSSTABS
  /TABLES=var1_coder2 BY var1_coder1
  /FORMAT=AVALUE TABLES
  /STATISTICS=KAPPA
  /CELLS=COUNT
  /COUNT ROUND CELL.
11. Use Stata to calculate Cohen's kappa
- kappa varlist   (each column shows the frequency of a value coded by the different coders)
- kap coder1 coder2 ...   (each column is a coder)
(See the Stata demo and the sketch below.)
Interpretation scale, according to Landis and Koch (1977, p. 165):
below 0.00   Poor
0.00-0.20    Slight
0.21-0.40    Fair
0.41-0.60    Moderate
0.61-0.80    Substantial
0.81-1.00    Almost perfect
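As a quick illustration of the two data layouts these commands expect, here is a minimal Stata sketch; the variable names and values are hypothetical.

* layout for kap: one variable per coder, one row per coded object
clear
input coder1 coder2 coder3
1 1 1
2 2 1
3 3 3
1 2 1
end
kap coder1 coder2 coder3

* layout for kappa: one variable per category; each cell counts how many
* coders assigned that category to the object (individual coders not identified)
clear
input cat1 cat2 cat3
3 0 0
1 2 0
0 0 3
2 1 0
end
kappa cat1 cat2 cat3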
12. Comparing coders: random vs. systematic differences
[Table of 20 observations coded by coders 1-4; the individual codes are not reproduced in the transcript.]
- Coder 1 vs. coder 2: the differences are random.
- Coder 1 vs. coder 3: the differences are systematic (e.g., coder 3 always codes 2 as 1 and 3 as 4, compared with coder 2).
A hypothetical Stata illustration follows below.
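Since the underlying codes are not available, the sketch below uses made-up data only to illustrate the contrast: coder 2 differs from coder 1 at random, while coder 3 differs systematically (every 2 recoded as 1, every 3 as 4). Comparing the two kap results shows how the two kinds of disagreement affect the coefficient.

* hypothetical data: 20 observations coded 1-4 by coder 1
clear
set obs 20
set seed 12345
generate coder1 = 1 + floor(4*runiform())

* coder 2: agrees with coder 1 except that the first three codes are re-drawn at random
generate coder2 = coder1
replace coder2 = 1 + floor(4*runiform()) in 1/3

* coder 3: systematic differences (every 2 coded as 1, every 3 coded as 4)
generate coder3 = coder1
recode coder3 (2 = 1) (3 = 4)

kap coder1 coder2
kap coder1 coder3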
13. Acceptance standards: Neuendorf (2002)
There is no coherent standard. Some rules of thumb:
- "Coefficients of .90 or greater would be acceptable to all, .80 or greater would be acceptable in most situations, and below .80, there exists great disagreement" (p. 145).
- The criterion of .70 is often used for exploratory research.
- More liberal criteria are usually used for the indices known to be more conservative (i.e., Cohen's kappa and Scott's pi).
14. Hughes and Garrett (1990): comparison of indices
- Percent agreement: does not correct for chance agreement. Not recommended.
- Scott's pi: acceptance level 0.6; addresses chance correction and the systematic coding error problem. Acceptable.
- Cohen's kappa: interpreted with the Landis and Koch (1977) scale (below 0.00 poor; 0.00-0.20 slight; 0.21-0.40 fair; 0.41-0.60 moderate; 0.61-0.80 substantial; 0.81-1.00 almost perfect). Acceptable, and the most extensively discussed.
- Krippendorff's alpha
- Pearson's correlation: does not consider systematic coding bias.
15. How to improve intercoder reliability (Lombard et al., 2002)
In research design:
- Assess reliability informally during coder training (detailed instructions, close monitoring, etc.).
- Assess reliability formally in a pilot test.
- Assess reliability formally during coding of the full sample.
- Select and follow an appropriate procedure for incorporating the coding of the reliability sample into the coding of the full sample (e.g., master-coder quality control).
In reporting results:
- Select one or more appropriate indices.
- Obtain the necessary tools to calculate the index or indices selected.
- Select an appropriate minimum acceptable level of reliability for the index or indices to be used.
- Report intercoder reliability in a careful, clear, and detailed manner in all research reports.
16. References
http://astro.temple.edu/~lombard/reliability/
Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2002). Content analysis in mass communication: Assessment and reporting of intercoder reliability. Human Communication Research, 28.
Tinsley, H. E. A., & Weiss, D. J. (2000). Interrater reliability and agreement. In H. E. A. Tinsley & S. D. Brown (Eds.), Handbook of Applied Multivariate Statistics and Mathematical Modeling. San Diego, CA: Academic Press.
Popping, R. (1988). On agreement indices for nominal data. In W. E. Saris & I. N. Gallhofer (Eds.), Sociometric Research: Volume 1, Data Collection and Scaling. New York: St. Martin's Press.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33.
Hughes, M. A., & Garrett, D. E. (1990). Intercoder reliability estimation approaches in marketing: A generalizability theory framework for quantitative data. Journal of Marketing Research, 27(2).
Neuendorf, K. A. (2002). The Content Analysis Guidebook. Thousand Oaks, CA: Sage.