Presentation on theme: "Topics: Quality of Measurements"— Presentation transcript:
1 Topics: Quality of Measurements
Reliability
Validity
2 The Quality of Measuring Instruments: Definitions
Reliability: Consistency - the extent to which the data are consistent
Validity: Accuracy - the extent to which the instrument measures what it purports to measure
4 The Questions of Reliability
To what degree does a subject's measured performance remain consistent across repeated testings?
How consistently will results be reproduced if we measure the same individuals again?
What is the equivalence of results of two measurement occasions using "parallel" tests?
To what extent do the individual items that go together to make up a test or inventory consistently measure the same underlying characteristic?
How much consistency exists among the ratings provided by a group of raters?
When we have obtained a score, how precise is it?
6 Sources of Error: Conditions of Test Administration and Construction
Changes in time limits
Changes in directions
Different scoring procedures
Interrupted testing session
Qualities of test administrator
Time test is taken
Sampling of items
Ambiguity in wording of items/questions
Ambiguous directions
Climate of test situation (heating, light, ventilation, etc.)
Differences in observers
7 Sources of Error: Conditions of the Person Taking the Test
Reaction to specific items
Health
Motivation
Mood
Fatigue
Luck
Memory and/or attention fluctuations
Attitudes
Test-taking skills (test-wiseness)
Ability to understand instructions
Anxiety
8 Reliability
Reliability: the ratio of true variance to observed variance
Reliability coefficient: a numerical index which assumes a value between 0 and +1.00
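The ratio definition on this slide can be illustrated with a short simulation. This is a minimal sketch, not part of the original slides; the variance values (true-score SD of 10, error SD of 5) are hypothetical numbers chosen only for illustration.

```python
# Sketch: reliability as the ratio of true-score variance to observed-score
# variance, shown with simulated scores. All numbers are hypothetical.
import random

random.seed(0)
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]  # true-score SD = 10
errors = [random.gauss(0, 5) for _ in range(n)]         # error SD = 5
observed = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

reliability = variance(true_scores) / variance(observed)
# Theoretical value here: 10**2 / (10**2 + 5**2) = 0.80, sampling noise aside
print(round(reliability, 2))
```

Note that the error variance simply adds to the observed variance, which is why more error pushes the coefficient toward 0 and less error pushes it toward 1.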
9 Relation between Reliability and Error
[Figure: observed variance partitioned into true-score variability and error, for a reliable measure (A) versus an unreliable measure (B)]
10 Methods of Estimating Reliability
Test-Retest: repeated measures with the same test (coefficient of stability)
Parallel Forms: repeated measures with equivalent forms of a test (coefficient of equivalence)
Internal Consistency: repeated measures using items on a single test
Inter-Rater: judgments by more than one rater
11 Reliability Is the Consistency of a Measurement
[Table: repeated measurements/observations X1, X2, ..., Xk (k → infinity) for persons Charlie and Harry, contrasting a reliable pattern of scores with an unreliable one]
12 Test-Retest Reliability
Situation: the same people take two administrations of the same test
Procedure: correlate scores on the two tests, which yields the coefficient of stability
Meaning: the extent to which scores on a test can be generalized over different occasions (temporal stability)
Appropriate use: information about the stability of the trait over time
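The procedure on this slide amounts to computing a Pearson correlation between the two administrations. A minimal sketch follows; the two score lists are hypothetical.

```python
# Sketch: test-retest reliability (coefficient of stability) estimated as
# the Pearson correlation between two administrations of the same test.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

time1 = [12, 15, 9, 20, 17, 11, 14]   # first administration (hypothetical)
time2 = [13, 14, 10, 19, 18, 10, 15]  # second administration, later occasion
stability = pearson_r(time1, time2)
print(round(stability, 3))
```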
13 Parallel (Alternate) Forms Reliability
Situation: testing of the same people on different but comparable forms of the test
Procedure: correlate the scores from the two tests, which yields a coefficient of equivalence
Meaning: the consistency of response to different item samples (where testing is immediate) and across occasions (where testing is delayed)
Appropriate use: to provide information about the equivalence of forms
14 Internal Consistency Reliability
Situation: a single administration of one test form
Procedure: divide the test into comparable halves and correlate scores from both halves
Split Half with Spearman-Brown adjustment
Kuder-Richardson #20 and #21
Cronbach's Alpha
Meaning: consistency across the parts of a measuring instrument ("parts" = individual items or subgroups of items)
Appropriate use: where the focus is on the degree to which the same characteristic is being measured; a measure of test homogeneity
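Of the coefficients listed on this slide, Cronbach's alpha is the most commonly reported, and it is simple enough to sketch from a person-by-item score matrix. The matrix below is hypothetical.

```python
# Sketch: Cronbach's alpha from a single test administration.
# Rows are persons, columns are items; the scores are hypothetical.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(scores):  # scores[person][item]
    k = len(scores[0])                                          # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])          # variance of totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 4, 4],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 3))
```

Alpha rises when the items covary strongly relative to their individual variances, which is why it is read as a measure of test homogeneity.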
15 Inter-Rater Reliability
Situation: having a sample of test papers (essays) scored independently by two examiners
Procedure: correlate the two sets of scores
Kendall's coefficient of concordance
Cohen's kappa
Intraclass correlation
Pearson product-moment correlation
Meaning: a measure of scorer (rater) reliability (consistency, agreement), which yields the coefficient of concordance
Appropriate use: for ensuring consistency between raters
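Of the indices listed on this slide, Cohen's kappa is the usual choice when two raters assign nominal categories, since it corrects raw agreement for agreement expected by chance. A minimal sketch, with hypothetical pass/fail ratings:

```python
# Sketch: Cohen's kappa for two raters assigning nominal categories.
# The rating lists are hypothetical.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(r1) | set(r2)
    expected = sum(c1[c] * c2[c] for c in cats) / n**2   # chance agreement
    return (observed - expected) / (1 - expected)

rater1 = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
kappa = cohens_kappa(rater1, rater2)
print(round(kappa, 3))
```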
16 When Is a Reliability Estimate Satisfactory?
Depends on the type of instrument
Depends on the purpose of the study
Depends on who is affected by the results
17 Factors Affecting Reliability Estimates
Test length
Range of scores
Item similarity
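The test-length factor on this slide has a classical quantitative form: the Spearman-Brown prophecy formula, which predicts the reliability of a test lengthened (with comparable items) by a given factor. A sketch, with a hypothetical starting reliability of .60:

```python
# Sketch: Spearman-Brown prophecy formula, showing how lengthening a test
# with comparable items raises predicted reliability. Values are hypothetical.
def spearman_brown(r, factor):
    """Predicted reliability when test length is multiplied by `factor`."""
    return (factor * r) / (1 + (factor - 1) * r)

r_original = 0.60
print(round(spearman_brown(r_original, 2), 3))  # doubled length -> 0.75
print(round(spearman_brown(r_original, 3), 3))  # tripled length
```

The same formula with factor = 2 is the adjustment applied to a split-half correlation, since each half is only half the length of the full test.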
18 Standard Error of Measurement
All test scores contain some error
For any test, the higher the reliability estimate, the lower the error
The standard error of measurement is the standard deviation of the error component of scores; it is estimated as the test's standard deviation times the square root of (1 - reliability)
Can be used to estimate a range within which a true score would likely fall
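The estimate described on this slide is a one-line computation. A minimal sketch; the SD of 15 and reliability of .91 are hypothetical values (chosen to make the arithmetic clean).

```python
# Sketch: standard error of measurement, SEM = SD * sqrt(1 - r),
# where SD is the standard deviation of the test scores and r the
# reliability estimate. The numbers are hypothetical.
def sem(sd, reliability):
    return sd * (1 - reliability) ** 0.5

print(round(sem(15.0, 0.91), 2))  # 15 * sqrt(0.09) -> 4.5
```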
19 Use of Standard Error of Measurement
We never know the true score
By knowing the s.e.m. and by understanding the normal curve, we can assess the likelihood of the true score being within certain limits
The higher the reliability, the lower the standard error of measurement, and hence the more confidence we can place in the accuracy of a person's test score
20 Normal Curve: Areas Under the Curve
Within ±1 s.e.m.: .3413 + .3413 ≈ 68%
Within ±2 s.e.m.: adds .1359 + .1359 ≈ 95%
Within ±3 s.e.m.: adds .0214 + .0214 ≈ 99%
Beyond ±3 s.e.m.: .0013 in each tail
(X = test score; intervals marked from -3 s.e.m. to +3 s.e.m. around X)
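Combining the two previous slides: the s.e.m. and the normal-curve areas give a band around an observed score within which the true score is likely to fall. A sketch, reusing the hypothetical SD of 15 and reliability of .91 and a hypothetical observed score of 104:

```python
# Sketch: likely range for the true score, built from the observed score
# and the SEM with normal-curve areas. All numbers are hypothetical.
def sem(sd, reliability):
    return sd * (1 - reliability) ** 0.5

observed = 104
band = sem(sd=15.0, reliability=0.91)                     # = 4.5
low68, high68 = observed - band, observed + band          # ~68% band
low95, high95 = observed - 2 * band, observed + 2 * band  # ~95% band
print(f"68% band: {low68:.1f} to {high68:.1f}")
print(f"95% band: {low95:.1f} to {high95:.1f}")
```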
21 Warnings about Reliability
No such thing as "the" reliability; different methods are assessing consistency from different perspectives
Reliability coefficients apply to the data, NOT to the instrument
Any reliability is only an estimate of consistency