Consistency in testing

Presentation on theme: "Consistency in testing"— Presentation transcript:

1 Reliability
Consistency in testing

2 Types of variance
Meaningful variance: variance between test takers that reflects differences in the ability or skill being measured.
Error variance: variance between test takers that is caused by factors other than differences in the ability or skill being measured.
Test developers are 'variance chasers': they work to maximize meaningful variance and to track down and minimize error variance.

3 Sources of error variance
Measurement error can stem from the environment, administration procedures, scoring procedures, examinee differences, and the test and its items.
Remember, OS = TS + E: the observed score is the true score plus error.
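In classical test theory terms (a standard decomposition consistent with the slide, though not spelled out there), the error term is what inflates observed-score variance beyond true-score variance, and the reliability coefficient of slide 5 is the true-score share of the observed variance:

$$OS = TS + E, \qquad \sigma^2_{OS} = \sigma^2_{TS} + \sigma^2_{E}, \qquad r_{xx} = \frac{\sigma^2_{TS}}{\sigma^2_{OS}}$$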

4 Estimating reliability for NRTs (norm-referenced tests)
Are the test scores reliable over time? Would a student get the same score if tested tomorrow?
Are the test scores reliable over different forms of the same test? Would the student get the same score if given a different form of the test?
Is the test internally consistent?

5 Reliability coefficient (rxx)
Range: 0.0 (totally unreliable test) to 1.0 (perfectly reliable test).
Reliability coefficients are estimates of the systematic variance in the test scores.
A lower reliability coefficient means greater measurement error in the test score.

6 Test-retest reliability
Same students take the test twice.
Calculate the reliability (Pearson's r).
Interpret r as reliability (a conservative estimate).
Problems: logistically difficult, and learning might take place between the two administrations.
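A minimal sketch of the calculation, with hypothetical scores for six students tested twice (the data and names are illustrative, not from the slides):

```python
# Test-retest reliability: Pearson's r between two administrations
# of the same test. Hypothetical scores for six students.
time1 = [50, 42, 38, 47, 55, 41]
time2 = [52, 40, 39, 45, 54, 43]

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

print(f"test-retest reliability: r = {pearson_r(time1, time2):.2f}")
```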

7 Equivalent forms reliability
Same students take parallel forms of the test.
Calculate the correlation between the two sets of scores.
Problems: creating truly parallel forms can be tricky, and administering both forms is logistically difficult.

8 University of Michigan English Placement Test
(University of Michigan English Placement Test Examiner’s Manual)

9 Internal consistency reliability
Calculated from a single administration of a test.
Commonly reported estimates: split-half, Cronbach alpha, K-R20, K-R21.
Calculated automatically by many statistical software packages.

10 Split-half reliability
The test is split in half (e.g., odd / even items), creating "equivalent forms".
The two "forms" are correlated with each other.
The correlation coefficient is adjusted to reflect the entire test length using the Spearman-Brown Prophecy formula (stated below).
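Stated in full (the standard Spearman-Brown Prophecy formula; r_hh is the correlation between the two halves and n is the factor by which the test is lengthened, so n = 2 for a split-half estimate):

$$r_{xx} = \frac{n \, r_{hh}}{1 + (n - 1) \, r_{hh}}$$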

11 Calculating split half reliability
Odd / even subscores for six students on a six-item test (Odd = Q1 + Q3 + Q5, Even = Q2 + Q4 + Q6):

ID     Odd    Even
1       2      1
2       1      3
3       3      2
4       2      0
5       2      2
6       1      0
Mean   1.83   1.33
SD     0.75   1.21

12 Calculating split half reliability (2)

ID    Odd   Odd - Mean   Even   Even - Mean   Product
1      2       0.17       1       -0.33       -0.056
2      1      -0.83       3        1.67       -1.386
3      3       1.17       2        0.67        0.784
4      2       0.17       0       -1.33       -0.226
5      2       0.17       2        0.67        0.114
6      1      -0.83       0       -1.33        1.104
Sum of products = 0.334

13 Calculating split half
Half-test correlation: r = 0.334 / ((6)(0.75)(1.21)) = 0.06
Adjust for test length using the Spearman-Brown Prophecy formula:
rxx = (2 x 0.06) / (1 + (2 - 1) x 0.06) = 0.11
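A short sketch that reproduces the worked example above, following the slide's formula (the sum of deviation products divided by n x SD_odd x SD_even, using the sample SDs from slide 11):

```python
# Split-half reliability for the six students in the worked example.
odd  = [2, 1, 3, 2, 2, 1]   # odd-item subscores (Q1 + Q3 + Q5)
even = [1, 3, 2, 0, 2, 0]   # even-item subscores (Q2 + Q4 + Q6)
n = len(odd)

mean_odd, mean_even = sum(odd) / n, sum(even) / n
sum_products = sum((o - mean_odd) * (e - mean_even)
                   for o, e in zip(odd, even))   # ~0.334

def sample_sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

r_hh = sum_products / (n * sample_sd(odd) * sample_sd(even))  # ~0.06
r_xx = (2 * r_hh) / (1 + (2 - 1) * r_hh)                      # ~0.11
print(f"half-test r = {r_hh:.2f}, Spearman-Brown adjusted = {r_xx:.2f}")
```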

14 Cronbach alpha
Similar to split half but easier to calculate:
alpha = 2 x (1 - ((0.75)² + (1.21)²) / (1.47)²) = 0.12
(SD of odd half = 0.75, SD of even half = 1.21, SD of total scores = 1.47)
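A sketch of the same calculation, assuming the total score is simply odd + even for each student (which gives SD_total of about 1.47, matching the slide):

```python
# Cronbach alpha via the two half-test SDs, for the same six students.
odd  = [2, 1, 3, 2, 2, 1]
even = [1, 3, 2, 0, 2, 0]
totals = [o + e for o, e in zip(odd, even)]

def sample_sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

sd_odd, sd_even, sd_total = (sample_sd(xs) for xs in (odd, even, totals))
alpha = 2 * (1 - (sd_odd**2 + sd_even**2) / sd_total**2)
print(f"alpha = {alpha:.2f}")  # ~0.12
```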

15 K-R20
The "Rolls-Royce" of internal reliability estimates.
Simulates calculating split-half reliability for every possible combination of items.

16 K-R20 formula
K-R20 = (k / (k - 1)) x (1 - (sum of item variances) / (total score variance)), where k is the number of items.
Note that the denominator is the total score variance, not the standard deviation.
Sum of item variances = the sum of IF(1 - IF), where IF is the item facility (the proportion answering the item correctly).
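A sketch of K-R20 from a 0/1 response matrix. The data are hypothetical, and the total score variance here uses n in the denominator to match the population-style IF(1 - IF) item variances; some texts use n - 1 instead:

```python
# K-R20 for dichotomously scored (0/1) items.
# Rows = test takers, columns = items. Hypothetical responses.
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
]

def kr20(responses):
    n = len(responses)        # number of test takers
    k = len(responses[0])     # number of items
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Each item's variance is IF(1 - IF), where IF = proportion correct.
    sum_item_var = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        sum_item_var += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_item_var / var_total)

print(f"KR-20 = {kr20(responses):.3f}")
```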

17 K-R21
Slightly less accurate than K-R20, but can be calculated with just descriptive statistics.
Tends to underestimate reliability.

18 K-R21 formula
K-R21 = (k / (k - 1)) x (1 - (M x (k - M)) / (k x total score variance)), where k is the number of items and M is the mean total score.
Note that the denominator uses the variance (standard deviation squared).
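A sketch using figures from the TAP report on the next slide (k = 40 items, mean total score = 40 x 0.597). The report does not list the total score SD; the 7.96 below is back-derived from SEM = SD x sqrt(1 - KR20), so treat it as an assumption:

```python
# K-R21 from descriptive statistics alone.
k = 40               # number of items (TAP report, slide 19)
mean = k * 0.597     # mean total score, ~23.88
sd = 7.96            # ASSUMED: back-derived from SEM 2.733 and KR20 0.882
variance = sd ** 2   # the formula uses variance, not SD

kr21 = (k / (k - 1)) * (1 - (mean * (k - mean)) / (k * variance))
print(f"KR-21 = {kr21:.3f}")  # ~0.870, matching the report
```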

19 Test summary report (TAP)
Number of Items Excluded = 0
Number of Items Analyzed = 40
Mean Item Difficulty = 0.597
Mean Item Discrimination = 0.491
Mean Point Biserial = 0.417
Mean Adj. Point Biserial = 0.369
KR20 (Alpha) = 0.882
KR21 = 0.870
SEM (from KR20) = 2.733
# Potential Problem Items = 9
High Grp Min Score (n=15) =
Low Grp Max Score (n=14) =
Split-Half (1st/2nd) Reliability = (with Spearman-Brown = 0.470)
Split-Half (Odd/Even) Reliability = (with Spearman-Brown = 0.927)

20 Standard Error of Measurement
If we gave a student the same test repeatedly (test-retest), we would expect to see some variation in the scores: 50, 49, 52, 50, 51, 49, 48, 50, ...
With enough repetitions, these scores would form a normal distribution.
We would expect the student to score near the center of the distribution most often.

21 Standard Error of Measurement
The greater the reliability of the test, the smaller the SEM.
We expect the student to score within one SEM of the observed score approximately 68% of the time.
If a student has a score of 50 and the SEM is 3, we expect the student to score between 47 ~ 53 approximately 68% of the time on a retest.

22 Interpreting the SEM
For a score of 29, with SEM = 3 (from K-R21):
26 ~ 32 is within 1 SEM
23 ~ 35 is within 2 SEM
20 ~ 38 is within 3 SEM

23 Calculating the SEM
SEM = SD x sqrt(1 - reliability)
What is the SEM for a test with a reliability of r = .889 and a standard deviation of 8.124? SEM = 8.124 x sqrt(1 - .889) = 2.7
What if the same test had a reliability of r = .95? SEM = 8.124 x sqrt(1 - .95) = 1.8
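A minimal sketch of both worked examples:

```python
import math

# SEM = SD * sqrt(1 - reliability)
def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

print(f"SEM at r = .889: {sem(8.124, 0.889):.1f}")  # 2.7
print(f"SEM at r = .95:  {sem(8.124, 0.95):.1f}")   # 1.8
```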

24 Reliability for performance assessment
Traditional fixed-response assessment: test-taker → instrument (test) → score.
Performance assessment (e.g., writing, speaking): test-taker → task → performance → rater / judge applying a scale → score.
Each extra link in the chain (task, performance, rater, scale) is a potential source of error variance.

25 Interrater/Intrarater reliability
Calculate the correlation between all combinations of raters.
Adjust using Spearman-Brown to account for the total number of raters giving the score.
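A sketch of that procedure with three hypothetical raters scoring the same five performances: average the pairwise Pearson correlations, then apply Spearman-Brown with n equal to the number of raters whose scores are combined (names and data are illustrative):

```python
from itertools import combinations

# Hypothetical ratings: three raters, five performances each.
ratings = {
    "rater1": [4, 3, 5, 2, 4],
    "rater2": [3, 3, 5, 1, 4],
    "rater3": [4, 2, 4, 2, 5],
}

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

pairwise = [pearson_r(ratings[a], ratings[b])
            for a, b in combinations(ratings, 2)]
r_avg = sum(pairwise) / len(pairwise)

k = len(ratings)                            # raters contributing to the score
r_kk = (k * r_avg) / (1 + (k - 1) * r_avg)  # Spearman-Brown step-up
print(f"mean pairwise r = {r_avg:.2f}, {k}-rater reliability = {r_kk:.2f}")
```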
