
1 Reliability or Validity
Reliability gets more attention:
- Easier to understand
- Easier to measure
- More formulas (like stats!)
- Base for validity

2 Need for Validity
- Does the test measure what it claims?
- Can the test be used to make decisions?

3 Validity Reliability is a necessary, but not a sufficient condition for validity.

4 Validity: a definition “A test is valid to the extent that inferences made from it are appropriate, meaningful, and useful” Standards for Educational and Psychological Testing, 1999

5 "Face Validity": "looks good to me!"

6 Trinitarian View of Validity
- Content (meaning)
- Construct (meaning)
- Criterion (use)

7 1) Content Validity “How adequately a test samples behaviors representative of the universe of behaviors the test was designed to measure.”

8 Determining Content Validity
- Describe the domain
- Specify areas to be measured
- Compare test to domain

9 Content Validity Ratio (CVR)
Agreement among raters on whether an item is:
- Essential
- Useful but not essential
- Not necessary
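The rater judgments above feed Lawshe's CVR formula, which the slide names but does not show. A minimal sketch (the example counts are hypothetical):

```python
def content_validity_ratio(n_essential, n_raters):
    """Lawshe's CVR: (n_e - N/2) / (N/2), where n_e is the number of raters
    marking the item "essential" and N is the total number of raters.
    Ranges from -1 (none say essential) to +1 (all say essential)."""
    return (n_essential - n_raters / 2) / (n_raters / 2)

# Hypothetical example: 8 of 10 expert raters rate an item "essential"
print(content_validity_ratio(8, 10))  # 0.6
```

Items with a CVR near +1 are retained; items near or below 0 are candidates for removal.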

10 2) Construct Validity
A construct is "a theoretical intangible," "an informed, scientific idea."
Construct validity: how well the test measures that construct.

11 Determining Construct Validity
- Behaviors related to the construct
- Related/unrelated constructs
- Identify relationships
- Multitrait/multimethod

12 Multitrait-Multimethod Matrix
- Correlate scores from 2 (or more) tests
- Correlate scores obtained from 2 (or more) methods

13 Evidence of Construct Validity
- Upholds theoretical predictions
  - Changes (?) over time, gender, training
- Homogeneity of questions
  - Internal consistency, factor or item analysis
- Convergent/discriminant
  - Multitrait-multimethod matrix
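The convergent/discriminant logic of the multitrait-multimethod matrix can be sketched numerically. The correlations below are hypothetical, chosen only to show the pattern one hopes to see: same-trait/different-method correlations high, different-trait correlations low.

```python
# Hypothetical MTMM correlations; each key pairs two (trait, method) measures.
correlations = {
    (("anxiety", "self"), ("anxiety", "peer")): 0.62,   # same trait, different method
    (("depress", "self"), ("depress", "peer")): 0.58,   # same trait, different method
    (("anxiety", "self"), ("depress", "self")): 0.25,   # different traits, same method
    (("anxiety", "peer"), ("depress", "peer")): 0.20,
    (("anxiety", "self"), ("depress", "peer")): 0.10,   # different traits and methods
    (("anxiety", "peer"), ("depress", "self")): 0.12,
}

# Convergent evidence: same trait measured by different methods (should be high)
convergent = [r for ((t1, m1), (t2, m2)), r in correlations.items()
              if t1 == t2 and m1 != m2]
# Discriminant evidence: different traits (should be low)
discriminant = [r for ((t1, m1), (t2, m2)), r in correlations.items() if t1 != t2]

mean_convergent = sum(convergent) / len(convergent)        # ≈ 0.60
mean_discriminant = sum(discriminant) / len(discriminant)  # ≈ 0.17
```

A matrix with this pattern supports construct validity; if the discriminant mean rivaled the convergent mean, the test would not be distinguishing the traits.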

14 Decision Making How well the test can be used to help in decision making about a particular criterion.

15 Decision Theory
- Base rate
- Hit rate
- Miss rate
- False positive
- False negative
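The terms above come straight from a 2x2 decision table. A minimal sketch, using the common convention that hit rate and miss rate are computed among those who truly have the condition (the counts below are hypothetical):

```python
def decision_rates(tp, fp, fn, tn):
    """Rates from a 2x2 decision table of counts:
    tp = condition present, test positive; fp = absent, test positive;
    fn = present, test negative; tn = absent, test negative."""
    total = tp + fp + fn + tn
    return {
        "base rate": (tp + fn) / total,         # proportion who truly have the condition
        "hit rate": tp / (tp + fn),             # present and the test catches it
        "miss rate": fn / (tp + fn),            # present but the test says no
        "false positive rate": fp / (fp + tn),  # absent but the test says yes
    }

# Hypothetical screening results for 100 people
rates = decision_rates(tp=40, fp=10, fn=10, tn=40)
```

Note that a low-base-rate condition can yield many false positives even from a test with a high hit rate, which is why base rate appears first on the slide.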

16 3) Criterion Validity “The relationship between performance on the test and on some other criterion.”

17 Validity coefficient Correlation between test score and score on criterion measure.
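Since the validity coefficient is just the Pearson correlation between test and criterion scores, it can be computed directly (the scores below are hypothetical):

```python
from math import sqrt

def validity_coefficient(test_scores, criterion_scores):
    """Pearson correlation between test scores and criterion scores."""
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(criterion_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, criterion_scores))
    sx = sqrt(sum((x - mx) ** 2 for x in test_scores))
    sy = sqrt(sum((y - my) ** 2 for y in criterion_scores))
    return cov / (sx * sy)

# Hypothetical data: test scores vs. later job-performance ratings
r = validity_coefficient([10, 12, 15, 18, 20], [3, 4, 4, 5, 6])
```

The closer r is to 1.0, the better the test predicts standing on the criterion; validity coefficients for real tests are typically well below that.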

18 Two ways to establish Criterion Validity A) Concurrent validity B) Predictive validity

19 Determining Concurrent Validity
- Assess individuals on the construct
- Administer the test to those identified as low/high on the construct
- Correlate test scores with the prior identification
- Use the test later to make decisions

20 Determining Predictive Validity
- Give the test to a group of people
- Follow up with the group
- Assess them later
- Review test scores
- If scores correlate with later behavior, the test can be used to make decisions

21 Incremental Validity
- Value of including more than one predictor
- Based on multiple regression
- What is added to the prediction that was not present with previous measures?
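With two predictors, the multiple-regression idea above reduces to the standard two-predictor multiple-correlation formula, so incremental validity can be computed from three correlations alone. A sketch (the correlations in the example are hypothetical):

```python
def incremental_validity(r_y1, r_y2, r_12):
    """R-squared gained by adding predictor 2 to predictor 1.
    r_y1, r_y2: validity coefficients of predictors 1 and 2.
    r_12: correlation between the two predictors.
    Uses R^2 = (r_y1^2 + r_y2^2 - 2*r_y1*r_y2*r_12) / (1 - r_12^2)."""
    r2_full = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r2_full - r_y1**2

# Uncorrelated predictors: the second adds its full squared validity
print(incremental_validity(0.5, 0.25, 0.0))  # 0.0625
```

The higher the correlation between the predictors, the less the second one adds; a second measure that is redundant with the first contributes nothing to the prediction.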

22 Expectancy Data
- Taylor-Russell Tables
- Naylor-Shine Tables
- Criticized as too vague, outdated, biased

23 Unified Validity (Messick)
"Validity is not a property of the test, but rather the meaning of the scores."
- Value implications
- Relevance and utility

24 Unitarian Considerations
- Content
- Construct
- Criterion
- Consequences

25 Threats to Validity
- Construct underrepresentation (too narrow)
- Construct-irrelevant variance (too broad)
  - Construct-irrelevant difficulty
  - Construct-irrelevant easiness

26 Example 1 Dr. Heidi considers using the Scranton Depression Inventory to help identify severity of depression and especially to distinguish depression from anxiety. What evidence should Dr. Heidi use to determine if the test does what she hopes it will do?

27 Example 2 The newly published Diagnostic Wonder Test promises to identify children with a mathematics learning disability. How will we know whether the test does so or is simply a slickly packaged general ability test?

28 Example 3 Ivy College uses the Western Admissions Test (WAT) to select applicants who should be successful in their studies. What type of evidence should we seek to determine if the WAT satisfies its purpose?

29 Example 4 Mike is reviewing a narrative report of his scores on the Nifty Personality Questionnaire (NPQ). The report says he is exceptionally introverted and unusually curious about the world around him. Can Mike have any confidence in these statements or should they be dismissed as equivalent to palm readings at the county fair?

30 Example 5 A school system wants to use an achievement battery that will measure the extent to which students are learning the curriculum specified by the school. How should the school system proceed in reviewing the available achievement tests?

31 Project Homework Question
- What content or construct is your measure assessing? (Explain your answer.)
- What do you think convergent and discriminant constructs would be to the one in your measure?
- How would you determine the content or construct validity of your measure?
- How would you determine the criterion validity of your measure?
- Why would you use those approaches?

32 Project Homework Question
- Select a standardized instrument from the MMY to use as a comparison for your measure.
- Copy the relevant data.
- Why did you select that instrument?
- How would you use it to help standardize your measure?
