Validity Is the Test Appropriate, Useful, and Meaningful?


1 Validity Is the Test Appropriate, Useful, and Meaningful?

2 Properties of Validity
- Does the test measure what it purports to measure?
- Validity is a property of the inferences that can be made from test scores.
- Validity should be established over multiple inferences.

3 Evidence of Validity
- Meaning of test scores
- Reliability
- Adequate standardization and norming
- Content validity
- Criterion-related validity
- Construct validity

4 Content Validity
How well does the test represent the domain?
- Appropriateness of the items
- Completeness of the items
- How the items assess the content

5 Do the Items Match the Content?
- Face validity: is the item considered part of the domain?
- Curriculum match: do the items reflect what has been taught?
- Homogeneity with the test: is the item positively correlated with the total test score? (minimum .25, point-biserial correlation r_pb; a quick check is sketched below)
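A minimal sketch of that homogeneity check in Python, using hypothetical item and total scores (the data here are invented for illustration); scipy's pointbiserialr supplies the correlation, and the .25 cutoff is the slide's screening rule:

```python
import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical data: 1 = item answered correctly, 0 = incorrect,
# paired with each student's total test score.
item_scores  = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
total_scores = np.array([42, 25, 38, 45, 22, 40, 28, 44, 39, 30])

r_pb, p_value = pointbiserialr(item_scores, total_scores)
print(f"point-biserial r = {r_pb:.2f}")

# Apply the slide's screening rule: keep items correlating
# at least .25 with the total score.
if r_pb >= 0.25:
    print("Item meets the minimum .25 homogeneity criterion.")
else:
    print("Item falls below .25; flag for review.")
```

In practice the item is often excluded from the total before correlating (an item-rest correlation) so the item does not inflate its own coefficient.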

6 Are the Total Items Representative?
- Do the included items represent the various parts of the domain?
- Are different parts differentially represented?
- How well is each part represented?

7 How Is the Content Measured?
- Different types of measures: multiple choice, short answer
- Performance-based vs. factual recall
- Measures should match the type of content taught
- Measures should match HOW the content was taught

8 Criterion-related (CR) Validity
How well does performance on the test reflect performance on what the test purports to measure?
- Expressed as a correlation (r_xy) between the test and the criterion
- Two forms: concurrent CR validity and predictive CR validity

9 Concurrent Criterion-related Validity
- How well does performance on the test estimate knowledge and/or skill on the criterion measure?
- Does the reading test estimate the student's current reading performance?
- Compares performance on one test with performance on other, similar tests (e.g., the KTEA with the Woodcock-Johnson; illustrated below)
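A short sketch of a concurrent validity check, assuming hypothetical standard scores for the same students on two reading tests given at about the same time; the coefficient is an ordinary Pearson correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical standard scores for the same students on two reading
# tests administered at roughly the same time.
ktea = np.array([95, 88, 102, 110, 79, 91, 105, 98])
wj   = np.array([92, 85, 100, 108, 83, 95, 101, 97])

r_xy, p = pearsonr(ktea, wj)
print(f"concurrent validity coefficient r_xy = {r_xy:.2f}")
```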

10 Predictive Criterion-related Validity
- Will performance on a test now be predictive of performance at a later time?
- Will a score derived now from one test be as accurate as a score derived later from another test?
- Will a student's current reading scores accurately reflect reading measured at a later time by another test?
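A sketch of the predictive case, again with invented data: the later (spring) criterion scores are regressed on the current (fall) screening scores, so the fit can also generate a predicted later score for a new student.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical data: fall screening scores and spring criterion
# scores for the same students.
fall   = np.array([41, 35, 50, 28, 44, 38, 47, 31])
spring = np.array([88, 74, 97, 65, 90, 79, 95, 70])

fit = linregress(fall, spring)
print(f"predictive validity r = {fit.rvalue:.2f}")

# Predicted spring score for a hypothetical new student scoring 40 in fall.
predicted = fit.intercept + fit.slope * 40
print(f"predicted spring score: {predicted:.1f}")
```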

11 Criterion-related Validity Should
- Describe criterion measures accurately
- Provide rationales for choices
- Provide sufficient information to judge the adequacy of the criterion
- Describe adequately the sample of students used
- Include the analytical data used to determine predictiveness

12 Criterion-related Validity Should
- Include basic statistics: number of cases, reasons for eliminating cases, central tendency estimates
- Provide an analysis of the limits on the generalizability of the test
- State what kinds of inferences about the content can be made

13 Construct Validity
- How well does the test measure the underlying constructs it purports to measure?
- Do IQ tests measure intelligence? Do self-concept scales measure self-concept?

14 Definition of Construct
- A psychological or personality trait (e.g., intelligence, learning style), or
- A psychological concept, attribute, or theoretical characteristic (e.g., problem solving, locus of control, or learning disability)

15 Ways to Measure Construct Validity
- Developmental change: determining expected differences among identified groups (assuming content validity and reliability)
- Convergent/divergent validity: high correlation with similar tests and low correlation with tests measuring different constructs (see the sketch below)
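A minimal convergent/divergent sketch with hypothetical data: a new reading test should correlate highly with an established reading test (same construct) and weakly with a math test (different construct).

```python
import numpy as np

# Hypothetical scores: a new reading test, an established reading
# test (same construct), and a math test (different construct).
new_reading = np.array([55, 48, 62, 40, 58, 45, 60, 50])
old_reading = np.array([53, 50, 60, 42, 57, 44, 61, 49])
math_test   = np.array([70, 52, 48, 66, 50, 73, 55, 60])

print("convergent r:", np.corrcoef(new_reading, old_reading)[0, 1].round(2))
print("divergent  r:", np.corrcoef(new_reading, math_test)[0, 1].round(2))
```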

16 Ways to Measure Construct Validity
- Predictive validity: high scores on one test should predict high scores on similar tests
- Accumulation of evidence (failing to disprove): if the concept or trait tested can be influenced by intervention, intervention effects should be reflected in pre- and posttest scores; if the test score should not be influenced by intervention, changes should not appear in pre- and posttest scores (a sketch follows below)
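One common way to test that intervention logic is a paired t-test on pre- and posttest scores; the slide does not prescribe a particular statistic, so this is an assumed choice, and the data are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical pre- and posttest scores for students who received
# the intervention.
pre  = np.array([40, 35, 50, 42, 38, 45, 33, 47])
post = np.array([48, 41, 55, 49, 44, 52, 40, 53])

t, p = ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.3f}")

# If the trait is supposed to respond to intervention, a reliable
# pre-to-post gain supports construct validity; if the trait is
# supposed to be stable, a gain would count against it.
```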

17 Factors Affecting Validity
- Unsystematic error: lack of reliability
- Systematic error: bias

18 Reliability Effects on Validity
- The validity of a measure can NEVER exceed the square root of the measure's reliability
- Reliability measures error; validity measures the expected traits (content, constructs, criteria)
- Observed validity is attenuated by unreliability in both the test and the criterion:
  r_xy = r_x(t)y(t) * sqrt(r_xx * r_yy)
  where r_x(t)y(t) is the correlation between the true scores and r_xx, r_yy are the reliabilities of the test and the criterion
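The ceiling implied by that attenuation formula is easy to compute; a small helper, with hypothetical reliabilities plugged in:

```python
import math

def max_validity(r_xx: float, r_yy: float) -> float:
    """Upper bound on the observed validity coefficient r_xy, given the
    reliabilities of the test (r_xx) and the criterion (r_yy)."""
    return math.sqrt(r_xx * r_yy)

# Hypothetical reliabilities: .81 for the test, .64 for the criterion.
print(max_validity(0.81, 0.64))  # sqrt(0.81 * 0.64) = 0.72
```

Even a perfectly valid test cannot produce an observed coefficient above this bound, which is why unreliable measures cap the validity evidence they can yield.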

19 Systematic Bias
- Method of measurement
- Behaviors in testing
- Item selection
- Administration errors
- Norms

20 Who Is Responsible for Valid Assessment?
- The test author: authors are responsible for ensuring and publishing validation evidence.
- The test giver: test administrators are responsible for following the procedures outlined in the administration guidelines.

21 Guidelines for Giving Tests
Exact Administration
- Read the administration instructions.
- Note procedures for establishing baselines.
- Note procedures for individual items.
- Practice giving the test.
Appropriate Pacing
- Develop fluency with the test.

