Chapter 5 Understanding, Calculating, and Evaluating Reliability and Objectivity

Methods for Evaluating Reliability and Their Calculations

Test–Retest
The most straightforward way to determine reliability.
Must have:
–No major changes in the construct being measured
–Sufficient recovery time between measurements

Evaluating Test–Retest Reliability
Reliability coefficient:
–A ratio that shows the relationship between two measurements, indicating the consistency (or reliability) between them.
Intraclass correlation:
–A statistical technique used to compute the reliability coefficient and assess the relationship between measures of the same class, as in a test–retest study.

Calculating the Intraclass Correlation
R = [SSa/(n − 1) − SSw/(n(k − 1))] / [SSa/(n − 1)]
SSa = (ΣT²/k) − ((ΣX)²/nk)
SSw = ΣX² − (ΣT²/k)
Where:
R = the intraclass reliability
Σ = the sum
n = the number of test subjects
k = the number of trials for each person (usually two)
ΣT² = the sum of the squared total scores for each person
ΣX = the sum of all the scores of everyone tested, and ΣX² = the sum of the squared scores
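The sums-of-squares formulas above can be sketched in code. This is a minimal illustration, not from the chapter: the function name, data layout, and example scores are hypothetical.

```python
# Sketch of the intraclass reliability R from the slide's formulas.
# scores: one list per person, one entry per trial, e.g. [[trial1, trial2], ...]

def intraclass_r(scores):
    n = len(scores)             # number of test subjects
    k = len(scores[0])          # number of trials per person
    sum_x = sum(x for person in scores for x in person)       # ΣX
    sum_x2 = sum(x * x for person in scores for x in person)  # ΣX²
    sum_t2 = sum(sum(person) ** 2 for person in scores)       # ΣT²

    ss_a = sum_t2 / k - sum_x ** 2 / (n * k)  # among-subjects sum of squares
    ss_w = sum_x2 - sum_t2 / k                # within-subjects sum of squares
    ms_a = ss_a / (n - 1)                     # SSa/(n − 1)
    ms_w = ss_w / (n * (k - 1))               # SSw/(n(k − 1))
    return (ms_a - ms_w) / ms_a

# Hypothetical data: five people, each tested twice
data = [[9, 8], [7, 7], [6, 5], [8, 9], [5, 6]]
print(round(intraclass_r(data), 3))  # prints 0.911
```

Note that R compares the spread among people to the inconsistency within each person's trials: when repeated trials agree closely, the within-subjects term shrinks and R approaches 1.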

An Alternative for Calculating the Intraclass Correlation
Try this website: http://department.obg.cuhk.edu.hk and go to the Statistics Tool Box link.

Results of a Calculation of Intraclass Reliability

Evaluating Reliability with a Single Test Administration
Split-half reliability:
–Compare one half of a test with the other half
–Spearman–Brown prophecy formula
Internal consistency reliability:
–Average of all possible split-half estimates
–Cronbach's alpha
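The two single-administration estimates above can be sketched as follows. The helper names and item data are illustrative assumptions, not from the chapter; the Spearman–Brown step-up and Cronbach's alpha formulas themselves are standard.

```python
# Sketch of split-half reliability (with Spearman-Brown correction)
# and Cronbach's alpha for internal consistency.
import statistics

def pearson_r(x, y):
    # Correlation between the two half-test scores
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_brown(half_r):
    # Steps the half-test correlation up to full-test reliability
    return (2 * half_r) / (1 + half_r)

def cronbach_alpha(items):
    # items: one list per test item, scores for the same people in each
    k = len(items)
    item_var_sum = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

print(round(spearman_brown(0.7), 3))  # prints 0.824
```

Splitting a test halves its length, which deflates the correlation; the Spearman–Brown formula estimates what the reliability would be at full length.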

Evaluating the Reliability of Criterion-Referenced Measurements
Calculate the percentage of agreement between the test and the retest.
Percentage of Agreement = [(Cboth + NCboth) / (Cboth + NCboth + C/NC + NC/C)] × 100
Where:
Cboth = people scored as competent in both Trials 1 and 2
NCboth = people scored as not competent in both trials
C/NC = people scored as competent in Trial 1 but not competent in Trial 2
NC/C = people scored as not competent in Trial 1 but competent in Trial 2
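The percentage-of-agreement formula is simple to compute directly. A minimal sketch, with hypothetical counts:

```python
# Sketch of the percentage of agreement between two criterion-referenced trials.

def percent_agreement(c_both, nc_both, c_nc, nc_c):
    # c_both: competent in both trials; nc_both: not competent in both;
    # c_nc and nc_c: the two kinds of inconsistent classification
    agree = c_both + nc_both
    total = agree + c_nc + nc_c
    return agree / total * 100

# Hypothetical results: 40 competent both times, 35 not competent both
# times, 15 + 10 people classified inconsistently across the two trials
print(percent_agreement(40, 35, 15, 10))  # prints 75.0
```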

Example Diagram for Evaluating the Reliability of a Criterion-Referenced Measurement

Standard Error of Measurement
Defined:
–An estimation of the error inherent in any individual's test score.
SEM = SD × √(1 − r)
where:
SD = the standard deviation for the test
r = the reliability coefficient for the test
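The SEM formula can be sketched directly; the example numbers below are hypothetical.

```python
# Sketch of the standard error of measurement: SEM = SD * sqrt(1 - r)

def sem(sd, r):
    # sd: standard deviation of the test scores
    # r: reliability coefficient of the test
    return sd * (1 - r) ** 0.5

# e.g. a test with SD = 6 and reliability r = 0.91 carries an
# error band of about +/- 1.8 points around any individual's score
print(round(sem(6, 0.91), 2))  # prints 1.8
```

As the reliability coefficient approaches 1, the SEM shrinks toward zero, which matches the intuition that a perfectly reliable test carries no measurement error.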

Increasing Reliability
Repeat a measurement several times:
–To improve both validity and reliability
–To discover and minimize errors
–To average out the errors

Methods for Evaluating Objectivity

Calculating Objectivity
Objectivity can be considered a special case of reliability:
–Inter-rater reliability
Most techniques used to evaluate reliability can also be used to evaluate objectivity.

Calculating Objectivity of Different Types of Measures
For continuous measures:
–Intraclass correlation
For discrete measures:
–Calculate the percent agreement between test administrators

Validity, Reliability, and Objectivity
It is possible to have high reliability or objectivity without high validity.
Good reliability or objectivity will always be present with a valid measurement.
Good reliability and objectivity do not establish good validity; they simply suggest that a measurement may be valid.