
1 Inter-Rater or Inter-Observer Reliability

2 Description Inter-rater reliability is the extent to which two or more individuals (coders or raters) agree. It addresses the consistency of the implementation of a rating system.

3 What value does reliability have to survey research? Surveys tend to be weak on validity and strong on reliability. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations.

4 How is it used in quantitative research? Inter-rater reliability is generally perceived as a means of verifying coherence in the understanding of a certain topic. Raters are not necessarily required to engage deeply with the material in order to obtain an understanding of the study's findings for rating purposes.

5 How is it used in qualitative research? During data collection, inter-observer reliability is demonstrated when data collected independently by two or more researchers mirror each other. During data analysis, it is demonstrated when independently working researchers agree on the data segments to be coded, the categories to be used, the placement of data segments into the same categories, and the interpretations drawn from examination of the classified data segments.

6 An example of how it is used Let's say you had 100 observations that were being rated by two raters. For each observation, each rater could check one of three categories. Imagine that on 86 of the 100 observations the raters checked the same category. In this case, the percent agreement would be 86%. A short sketch of this calculation follows below.
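
Below is a minimal sketch, not part of the original slides, showing how the percent-agreement figure above could be computed in Python. The percent_agreement function, the category labels, and the ratings are hypothetical illustrations constructed to reproduce the 86-of-100 example.

```python
def percent_agreement(rater_a, rater_b):
    """Share (in %) of observations on which two raters chose the same category."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must rate the same set of observations.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical data: 100 observations, three possible categories,
# constructed so the raters agree on exactly 86 of them.
categories = ["low", "medium", "high"]
rater_a = [categories[i % 3] for i in range(100)]
rater_b = list(rater_a)
for i in range(86, 100):                      # introduce 14 disagreements
    rater_b[i] = categories[(i + 1) % 3]

print(percent_agreement(rater_a, rater_b))    # -> 86.0
```

Note that simple percent agreement does not correct for agreement expected by chance; chance-corrected statistics exist for that purpose, but the slides here illustrate only the raw agreement rate.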

7 Direct link to on-line JAE article where reliability is addressed.
http://stroke.ahajournals.org/cgi/content/full/strokeaha;26/1/46
http://www.iese.fraunhofer.de/network/ISERN/pub/technical_reports/isern-98-02.pdf

8 References
Goodwin, L. D., & Goodwin, W. L. (1984). Are validity and reliability "relevant" in qualitative evaluation research? Evaluation & the Health Professions, 7, 413-426.
Marques, J. F. The Application of Inter-rater Reliability as a Solidification Instrument in a Phenomenological Study. Retrieved 10/19/06 from http://www.nova.edu/ssss/QR/QR10-3/marques.pdf
Retrieved 10/19/06 from http://www.socialresearchmethods.net/kb/rel&val.htm
Retrieved 10/19/06 from http://writing.colostate.edu/guides/research/relval/

