
1 The Many Threats to Test Validity David Mott, Tests for Higher Standards and Reports Online Systems Presentation at the Virginia Association of Test Directors (VATD) Conference, Richmond, VA, October 28, 2009

2 The Many Threats to Test Validity In order for a test or assessment to have any value whatsoever, it must be possible to make reasonable inferences from the score. This is much harder than it seems. The test instruments, the testing conditions, the students, the score interpreters, and perhaps Fate ALL need to be working together to produce data worth using. Many specific threats will be delineated, a number of solutions suggested, and audience participation is strongly encouraged.

3 Validity and Value come from the same Latin root, which has to do with being strong, well, and good. Validity = Value

4 Initial Attitude Adjustment
Amassing Statistics
The government are very keen on amassing statistics — they collect them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But what you must never forget is that every one of those figures comes in the first instance from the village watchman, who just puts down what he damn well pleases. (J. C. Stamp (1929). Some Economic Factors in Modern Life. London: P. S. King and Son)
Distance from Data
I have noticed that the farther one is from the source of data, the more likely one is to believe that the data could be a good basis for action. (D. E. W. Mott (2009). Quotations.)

5 The Examination as shown by the Ghost of Testing Past

6 Validity — Older Formulations (1950s through 1980s)
 content validity
 concurrent validity
 predictive validity
 construct validity
Lee J. Cronbach

7 Content Validity —  Refers to the extent to which a measure represents all facets of a given social construct, such as reading ability, math computation proficiency, optimism, or driving skill. Content validity is a more formal term than face validity, which refers not to what the test actually measures but to what it superficially appears to measure: whether a test "looks valid" to the examinees who take it, to the administrative personnel who decide on its use, and to others.

8 Concurrent Validity —  Refers to how well a test correlates with a measure that has previously been validated. The two measures may be for the same construct, or for different, but presumably related, constructs.

9 Predictive Validity —  Refers to the extent to which a score on a scale or test predicts scores on some criterion measure. For example, how well do your final benchmarks predict scores on the state SOL Tests?
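In practice, both concurrent and predictive validity checks boil down to correlating two sets of scores. A minimal Python sketch, assuming you have paired benchmark and SOL scaled scores per student (the numbers below are made-up illustration data, not results from this presentation):

```python
# Estimate predictive validity as the Pearson correlation between
# local benchmark scores and later state SOL scores.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need two equal-length lists with at least 2 scores")
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

benchmark = [412, 455, 388, 501, 467, 430]   # final benchmark scaled scores
sol       = [420, 460, 395, 510, 450, 441]   # later SOL scaled scores
print(f"predictive validity estimate: r = {pearson_r(benchmark, sol):.2f}")
```

The same calculation, run against a previously validated measure taken at the same time rather than a later criterion, is an estimate of concurrent validity.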

10 Construct Validity —  Refers to whether a scale measures or correlates with the theorized underlying psychological construct (e.g., "fluid intelligence") that it claims to measure. It is related to the theoretical ideas behind the trait under consideration, i.e. the concepts that organize how aspects of personality, intelligence, subject-matter knowledge, etc. are viewed.

11 Validity — New Formulation (1990s to the present)
Six aspects or views of Construct Validity
 content aspect
 substantive aspect
 structural aspect
 generalizability aspect
 external aspect
 consequential aspect
Samuel Messick

12 Validity — New Formulation
Six aspects or views of Construct Validity
 Content aspect – evidence of content relevance, representativeness, and technical quality
 Substantive aspect – theoretical rationales for consistency in test responses, including process models, along with evidence that the processes are actually used in the assessment tasks
 Structural aspect – judges the fidelity of scoring to the actual structure of the construct domain
 Generalizability aspect – the extent to which score properties and interpretations generalize to related populations, settings, and tasks
 External aspect – includes converging and discriminating evidence from multitrait-multimethod comparisons, as well as proof of relevance and utility
 Consequential aspect – shows the values of score interpretation as a basis for action and the actual and potential consequences of test use, especially in regard to invalidity related to bias, fairness, and distributive justice


18 Administration Validity  Administration Validity is my own term. A test administration or a test session is valid if nothing happens that causes a test, an assessment, or a survey to fail to reflect the actual situation. Test-session validity is an alternate term.

19 Administration Validity  Many things can come between the initial creation of an assessment from valid materials and the final uses of the scores that come from that assessment.  Imagine a chain that is only as strong as its weakest link. If any link breaks, the value of the whole chain is lost.  This session deals with some of those weak links.

20 Areas of Validity Failure

21  We create a test out of some “valid” items — consider some of the realities most of us face: we either have some “previously validated” tests, or we have a “validated” item bank we make tests from. Let’s assume that they really are valid, that is, that the materials have good content matches with the Standards/Curriculum Frameworks/Blueprints, and so on.

22 Areas of Validity Failure
Some examples of things that can creep in during the supposedly “mechanical” process of creating a test from a bank.
 Here are two items from a Biology benchmark test we recently made for a client:

23 Two Biology Items
Bio.3b
5. Which organic compound is correctly matched with the subunit that composes it?
A maltose – fatty acids
B starch – glucose
C protein – amino acids *
D lipid – sucrose
Bio.3b
6. Which organic compounds are the building blocks of proteins?
A sugars
B nucleic acids
C amino acids *
D polymers
Standard BIO.3b The student will investigate and understand the chemical and biochemical principles essential for life. Key concepts include b) the structure and function of macromolecules.
(* marks the keyed answer. The problem these slides build up to: the keyed pairing in item 5, protein – amino acids, hands students the answer to item 6.)

27 A Life Science Item
LS.6c
12. In this energy pyramid, which letter would represent producers?
A
B
C
D
[Figure: energy pyramid with its four levels labeled A through D]

28 The same Life Science Item “Randomized”
LS.6c
12. In this energy pyramid, which letter would represent producers?
A  C
B  D
C  A
D  B
[Figure: the same energy pyramid, still labeled A through D]
The randomizer shuffled the option texts under fixed choice letters, so the choices no longer line up with the letters in the figure.
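A sketch of the failure mode these two slides illustrate, assuming a randomizer that shuffles option texts and relabels them A–D (the guard shown is my own suggestion, not something from the deck): when the option texts are themselves letters naming parts of a figure, shuffling detaches them from the diagram.

```python
# Naive randomization shuffles the option *texts* and relabels them
# A-D. For an item whose options are the letters printed in a figure,
# that breaks the item. The guard below (an assumption, not the deck's
# fix) skips shuffling when options are bare single-letter figure labels.
import random

def shuffle_options(options, rng=None):
    """Return the option texts in random order, to be relabeled A, B, C, ..."""
    rng = rng or random
    shuffled = options[:]
    rng.shuffle(shuffled)
    return shuffled

def safe_shuffle(options, rng=None):
    """Leave the order alone when the options are figure labels."""
    if all(len(opt) == 1 and opt.isalpha() for opt in options):
        return options[:]                  # order carries meaning: do not touch
    return shuffle_options(options, rng)

pyramid_item = ["A", "B", "C", "D"]        # letters refer to levels in the figure
print(shuffle_options(pyramid_item))       # some reordering, e.g. ['C', 'A', 'D', 'B'] -- broken item
print(safe_shuffle(pyramid_item))          # ['A', 'B', 'C', 'D'] -- intact
```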

29 Moving from test creation to test administration

30 What Can Fail in the Test Administration Process

31  Students aren’t properly motivated:
Random responding
Patterning responses
Unnecessary guessing
Cheating
Let’s look at what some of these look like:

32 What happened here?

33 [image-only slide]

34 [image-only slide]

35 What Can Fail in the Test Administration Process
 Students or teachers make mistakes:
Stopping before the end of the test
Getting off position on answer sheets (see the sketch below)
Giving a student the wrong answer sheet
Scoring a test with the wrong key
Let’s look at what some of these look like:
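Two of the mistakes above, off-position answer sheets and wrong keys, can be screened for after the fact. A small sketch of an off-position check (my own illustration, with a made-up flag margin): if a student's answers score much better against the key shifted by one item, suspect that the marks drifted a row.

```python
# Off-position screen: compare the raw score with the score obtained
# when the responses are shifted back by one item. A large gain on the
# shifted comparison suggests the student got off position on the sheet.

def raw_score(responses, key):
    """Number of responses matching the key, position by position."""
    return sum(r == k for r, k in zip(responses, key))

def off_position_suspect(responses, key, margin=5):
    """True if shifting the responses by one item beats the raw score by `margin`."""
    shifted = responses[1:]          # answers recorded one row too late
    return raw_score(shifted, key) - raw_score(responses, key) >= margin

key       = list("ABCDABCDABCDABCD")
responses = list("X" + "ABCDABCDABCDABC")  # every mark one row low
print(raw_score(responses, key))           # 0 -- looks like random marking
print(off_position_suspect(responses, key))  # True -- almost perfect when shifted
```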

36 What happened here?

37 Moving to diagnosing students’ needs

38 What is the obvious conclusion about these test results?

39 What do you think now?

40 The chain has many links
 Nearly any of them can break
 Try to find the weakest links in your organization’s efforts
 Fix them – one by one

41 What are some of my solutions to all of this?
 To the problems of mistakes in test creation:
Use test blueprints
Be very careful of automatic test construction
Read the test carefully yourself and answer the questions
Have someone else read the test carefully and answer the questions
Use “Kid-Tested” items *
* Future TfHS initiative

42 What are some of my solutions to all of this?
 Be careful when reading reports – look past the obvious
 For problems of careless, unmotivated test taking by students (even cheating):
Make the test less of a contest between the system/teacher and the student and more of a communication device between them
Watch the students as they take the test, and realize that proctoring rules necessary for high-stakes tests are possibly not best for formative or semi-formative assessments
Look for/flag pattern marking and rapid responding * (see the sketch below)
* Future TfHS/ROS initiative
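A sketch of what a pattern-marking flag could look like (the cycle length and minimum-length thresholds are my assumptions, not the actual TfHS/ROS screen): flag an answer string that simply repeats a short cycle such as ABABAB or ABCDABCD.

```python
# Flag a response string dominated by a short repeating cycle.
# A long run of one letter is just the cycle-1 case.

def looks_patterned(responses, max_cycle=4, min_len=10):
    """True if the answer string repeats a cycle of length <= max_cycle."""
    s = "".join(responses)
    if len(s) < min_len:
        return False                      # too short to judge
    for cycle in range(1, max_cycle + 1):
        if all(s[i] == s[i % cycle] for i in range(len(s))):
            return True
    return False

print(looks_patterned("ABCDABCDABCDABCD"))  # True  -- repeating ABCD
print(looks_patterned("AAAAAAAAAAAA"))      # True  -- one long run
print(looks_patterned("BCADACBDDABC"))      # False -- plausible real answers
```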

43 Here is a graph showing the timing of student responses to an item

44 For online tests it is possible to screen for rapid responding *
* Future TfHS/ROS initiative
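A sketch of what such a screen could check, assuming per-item response latencies are logged (the three-second floor and the record layout are my assumptions, not the TfHS/ROS design):

```python
# Flag students whose median per-item response time is too fast for a
# genuine attempt at reading the item.
from statistics import median

def flag_rapid_responders(latencies_by_student, floor_seconds=3.0):
    """Return ids whose median item latency falls under the floor."""
    return [sid for sid, times in latencies_by_student.items()
            if times and median(times) < floor_seconds]

session = {
    "s01": [14.2, 9.8, 22.1, 11.0, 17.5],   # plausible effort
    "s02": [1.1, 0.9, 1.4, 1.0, 1.2],       # marking without reading
}
print(flag_rapid_responders(session))        # ['s02']
```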

45 A major new way of communicating!
 Let the students tell you when they don’t know or understand something – eliminate guessing
 New MC scoring scheme: *
1 point for each correct answer
0 points for each wrong answer
⅓ point for each unanswered question
Students mark where they run out of time
* Future TfHS/ROS initiative
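A sketch of the scheme's arithmetic (the data layout is illustrative; None marks an unanswered item):

```python
# Score one answer string under the 1 / 0 / one-third scheme.
from fractions import Fraction

def score(responses, key):
    """1 point per correct answer, 0 per wrong, 1/3 per blank."""
    total = Fraction(0)
    for given, keyed in zip(responses, key):
        if given is None:
            total += Fraction(1, 3)       # honest blank beats a wrong guess
        elif given == keyed:
            total += 1
    return total

key       = ["C", "A", "B", "D", "A"]
responses = ["C", None, "B", "A", None]   # 2 right, 1 wrong, 2 blank
print(score(responses, key))              # 8/3, about 2.67 points
```

Note why ⅓ in particular works for four-choice items: a blind guess is worth ¼ point on average, so leaving an item blank strictly beats guessing, which is what makes "admit you don't know" the rational strategy.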

46 A major new way of communicating! (continued)
 Students have to be taught the new rules
 Students need one or two tries to get the hang of it
 Students need to know when the new scoring applies
 It is better for students to admit not knowing than to guess

47 Answering under the new scoring scheme

48 [image-only slide]

49 Scored under the new scoring scheme

50 Humor
Time flies like an arrow; fruit flies like a banana.
We sometimes need to take a 90° turn in our thinking.

51 My contact information
 David Mott
 TfHS website – www.tfhs.net
 ROS website – rosworks.com

