
Published by Chad Tibbett. Modified over 2 years ago.

1
Chapter 5 Understanding, Calculating, and Evaluating Reliability and Objectivity

2
Methods for Evaluating Reliability and Their Calculations

3
Test–Retest
Most straightforward way to determine reliability
Must have:
–No major changes in the construct being measured
–Sufficient recovery time between measurements

4
Evaluating Test–Retest Reliability
Reliability coefficient:
–A ratio that shows the relationship between two measurements, indicating the consistency (or reliability) between them.
Intraclass correlation:
–A statistical technique used to compute the reliability coefficient and assess the relationship between measures of the same class, as in a test–retest study.

5
Calculating the Intraclass Correlation
R = [SSA/(n − 1) − SSW/(n(k − 1))] / [SSA/(n − 1)]
SSA = (ΣT²/k) − ((ΣX)²/(nk))
SSW = ΣX² − (ΣT²/k)
Where R is the intraclass reliability
Σ represents the sum
n = the number of test subjects
k = the number of trials for each person (usually two)
ΣT² = the sum of the squared total scores (each person's scores summed across trials, then squared)
ΣX² = the sum of the squared individual scores of everyone tested
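The slide's formulas can be worked through directly. A minimal sketch in Python, using hypothetical test–retest scores for five subjects (all numbers invented for illustration):

```python
# Intraclass reliability (R) from test-retest data, using the
# sums-of-squares formulas above. Scores are hypothetical.
trial1 = [10, 12, 9, 14, 11]   # Trial 1 scores for n = 5 subjects
trial2 = [11, 13, 9, 13, 12]   # Trial 2 scores (k = 2 trials)

n = len(trial1)
k = 2
scores = trial1 + trial2                           # all nk individual scores
totals = [a + b for a, b in zip(trial1, trial2)]   # T: each person's total

sum_x = sum(scores)                    # sum of X
sum_x2 = sum(x * x for x in scores)    # sum of X squared
sum_t2 = sum(t * t for t in totals)    # sum of T squared

ss_a = sum_t2 / k - sum_x ** 2 / (n * k)   # among-subjects sum of squares
ss_w = sum_x2 - sum_t2 / k                 # within-subjects sum of squares

ms_a = ss_a / (n - 1)        # among-subjects mean square
ms_w = ss_w / (n * (k - 1))  # within-subjects mean square
R = (ms_a - ms_w) / ms_a
print(round(R, 3))  # 0.934
```

Here the among-subjects mean square (6.1) dominates the within-subjects mean square (0.4), so R is about 0.93, indicating high test–retest reliability for these made-up scores.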

6
An Alternative for Calculating the Intraclass Correlation
Try this website: http://department.obg.cuhk.edu.hk and go to the Statistics Tool Box link.

7
Results of a Calculation of Intraclass Reliability

8
Evaluating Reliability with a Single Test Administration
Split-half reliability:
–Compare one half of a test with the other half
–Spearman-Brown prophecy formula
Internal consistency reliability:
–Average all possible split-half estimates
–Cronbach's alpha
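Both single-administration approaches can be sketched numerically. The example below, assuming hypothetical item scores for a six-item test, computes a split-half coefficient, steps it up with the Spearman-Brown prophecy formula, and then computes Cronbach's alpha:

```python
import statistics as st

# Split-half reliability with the Spearman-Brown correction, plus
# Cronbach's alpha, on hypothetical scores for a 6-item test
# (one row per person; all numbers invented for illustration).
items = [
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 4, 2, 3, 3],
    [1, 2, 1, 2, 2, 1],
]

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Split the test into halves (odd- vs. even-numbered items) and
# correlate the two half scores.
odd = [sum(row[0::2]) for row in items]
even = [sum(row[1::2]) for row in items]
r_half = pearson(odd, even)

# Spearman-Brown prophecy formula: predicted reliability of the
# full-length test from the half-test correlation.
r_full = (2 * r_half) / (1 + r_half)

# Cronbach's alpha: equivalent to averaging all possible
# split-half estimates.
k = len(items[0])
item_vars = [st.variance(list(col)) for col in zip(*items)]
total_var = st.variance([sum(row) for row in items])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(r_full, 3), round(alpha, 3))  # 0.947 0.956
```

Note that the split-half estimate depends on which halves are chosen (odd/even here), which is exactly why averaging across all possible splits, as alpha does, is attractive.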

9
Evaluating the Reliability of Criterion-Referenced Measurements
Calculate the percentage of agreement between the test and the retest.
Percentage of Agreement = [(Cboth + NCboth) / (Cboth + NCboth + C/NC + NC/C)] × 100
Where
Cboth = people scored as competent in both Trials 1 and 2
NCboth = people scored as not competent in both trials
C/NC = people scored as competent in Trial 1 but not competent in Trial 2
NC/C = people scored as not competent in Trial 1 but competent in Trial 2
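With hypothetical counts for 80 test-takers, the formula reduces to a few lines:

```python
# Percent agreement for a criterion-referenced test-retest.
# All counts below are hypothetical.
c_both = 40    # competent on both trials
nc_both = 30   # not competent on both trials
c_nc = 5       # competent on Trial 1, not competent on Trial 2
nc_c = 5       # not competent on Trial 1, competent on Trial 2

agreement = (c_both + nc_both) / (c_both + nc_both + c_nc + nc_c) * 100
print(agreement)  # 87.5
```

Classification flipped between trials for only 10 of 80 people, so the test and retest agree 87.5% of the time.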

10
Example Diagram for Evaluating the Reliability of a Criterion-Referenced Measurement

11
Standard Error of Measurement
Defined:
–An estimate of the error inherent in any individual's test score.
SEM = SD × √(1 − r)
where SD = the standard deviation for the test
r = the reliability coefficient for the test
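For example, a test with a standard deviation of 5 points and a reliability coefficient of 0.84 (hypothetical values) gives:

```python
import math

# Standard error of measurement: SEM = SD * sqrt(1 - r).
# SD and r below are hypothetical.
sd = 5.0    # standard deviation of test scores
r = 0.84    # reliability coefficient

sem = sd * math.sqrt(1 - r)
print(sem)  # 2.0
```

An individual's true score falls within about ±1 SEM (here, 2 points) of the observed score roughly 68% of the time, so a higher reliability coefficient directly tightens that band.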

12
Increasing Reliability
Repeat a measurement several times—
–To improve both validity and reliability
–To discover and minimize errors
–To average out the errors
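The averaging effect can be simulated. Assuming each measurement is a true value plus random error, the spread of the mean of k trials shrinks by roughly √k:

```python
import random
import statistics as st

# Hypothetical simulation: measurement = true value + random error.
# Averaging k repeated measurements shrinks the error spread by ~sqrt(k).
random.seed(1)
true_value = 100.0

def measure():
    return true_value + random.gauss(0, 4)  # measurement error, SD = 4

singles = [measure() for _ in range(1000)]
averaged = [st.mean(measure() for _ in range(4)) for _ in range(1000)]

print(st.stdev(singles))   # close to 4
print(st.stdev(averaged))  # close to 2, i.e. 4 / sqrt(4)
```

Averaging cannot remove systematic error (a miscalibrated instrument stays miscalibrated), but it does average out the random component, which is what raises reliability.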

13
Methods for Evaluating Objectivity

14
Calculating Objectivity
Objectivity can be considered a special case of reliability.
–Inter-rater reliability
Most techniques used to evaluate reliability can be used to evaluate objectivity.

15
Calculating Objectivity of Different Types of Measures
For continuous measures:
–Intraclass correlation
For discrete measures:
–Calculate the percent agreement between test administrators

16
Validity, Reliability, and Objectivity
It is possible to have high reliability or objectivity without high validity.
Good reliability or objectivity will always be present with a valid measurement.
Good reliability and objectivity do not establish good validity; they simply suggest that a measurement may be valid.

17
Your Viewpoint
Can you think of any times in your life when you have had to evaluate the reliability or objectivity of something or someone?
What did you do with the results of this evaluation? Did it cause you to make any changes in your daily routines or change your mind about a decision?
