IOP 301-T: Test Validity

What is validity? In simple English, the validity of a test concerns what the test measures and how well it measures it. It is the accuracy of the measure in reflecting the concept it is supposed to measure.

Types of validity: content-description, criterion-prediction, construct-identification.

Content validity is non-statistical in nature: it involves determining whether the sample of items used in the measure is representative of the aspect to be measured.

Content validity adopts a subjective approach whereby we have recourse, e.g., to expert opinion for the evaluation of items during the test construction phase.

Content validity is relevant for evaluating achievement, educational and occupational measures.

Content validity is a basic requirement for criterion-referenced and job-sample measures, which are essential for employee selection and employee classification.

Such measures are interpreted in terms of mastery of the knowledge and skills required for a specific job.

Content validity is not appropriate for aptitude and personality measures, since these have to be validated through criterion-prediction procedures.

Face validity is not validity in psychometric terms! It refers only to what the test appears to measure, not to what it in fact measures.

Face validity is not useless, however, since the aim may be achieved by appropriate phrasing alone. In a sense, it ensures relevance to the context by employing correct expressions.

Criterion-related validity. Definition: a criterion variable is one with (or against) which psychological measures are compared or evaluated. A criterion must itself be reliable!

Criterion-related validation is a quantitative procedure: it involves calculating the correlation coefficient between one or more predictor variables and a criterion variable.
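This calculation can be sketched in a few lines. The data below are hypothetical: predictor = selection-test scores, criterion = supervisor performance ratings.

```python
import numpy as np

# Hypothetical scores: predictor = selection-test score, criterion = supervisor rating
predictor = np.array([12, 15, 11, 18, 20, 14, 16, 19])
criterion = np.array([54, 60, 50, 71, 75, 57, 63, 70])

# Validity coefficient = Pearson correlation between predictor and criterion
r = np.corrcoef(predictor, criterion)[0, 1]
print(f"validity coefficient r = {r:.2f}")
```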

The validity of the measure is also determined by its ability to predict performance on the criterion.

Criterion-related validity takes two forms. Concurrent validity: accuracy in identifying current status regarding skills and characteristics. Predictive validity: accuracy in forecasting future behaviour; it implicitly contains the concept of decision-making.

Warning! Sometimes a factor may affect the criterion such that it is no longer a valid measure. This is known as criterion contamination.

For example, the rater might be too lenient, or commit a halo error, i.e. rely on general impressions.

Therefore, we must make sure that the criterion is free from bias (prejudice), since bias definitely influences the correlation coefficient.

Common criterion measures: academic achievement, performance in specialised training, job performance, contrasted groups, psychiatric diagnoses.

Academic achievement: used for validating intelligence, multiple-aptitude and personality measures. Indices include school, college or university grades; achievement test scores; and special awards.

Performance in specialised training: used for specific aptitude measures. Indices include training outcomes for technical and academic courses.

Job performance: used for validating intelligence, special-aptitude and personality measures. Indices include jobs in industry, business, the armed services and government. Validation studies should describe the duties performed and the ways in which they are measured.

Contrasted groups: sometimes used for validating personality measures. Relevant when distinguishing the nature of occupations (e.g. social vs non-social: Public Relations Officer and Clerk).

Psychiatric diagnoses: used mainly for validating personality measures. Based on prolonged observation and case history.

In summary, the common criterion measures (academic achievement, performance in specialised training, job performance, contrasted groups, psychiatric diagnoses) are used in validating intelligence, aptitude and personality measures.

Ratings: suitable for almost any type of measure; very subjective in nature; often the only source available.

Ratings are given by teachers, lecturers, instructors, supervisors, officers, etc. Raters may be trained to avoid common errors such as halo error, ambiguity, error of central tendency and leniency.

We can always validate a new measure by correlating it with another valid (and, obviously, reliable!) test.

Some modern and popular criterion-prediction procedures now widely used are validity generalisation, meta-analysis and cross-validation.

Validity generalisation: Schmidt, Hunter et al. showed that the validity of tests measuring verbal, numerical and reasoning aptitudes can be generalised widely across occupations (these require common cognitive skills).

Meta-analysis: a method of reviewing the research literature; the statistical integration and analysis of previous and current findings on a topic; validation by correlation.

Cross-validation: refinement of the initial measure; application to another representative normative sample; recalculation of the validity coefficients. A lowering of the coefficient is expected once chance differences and sampling errors (spuriousness) are minimised.
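A minimal sketch of the cross-validation idea, using simulated data (all numbers are invented): regression weights derived in one sample are applied unchanged to a second normative sample, and the validity coefficient is recalculated there.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Two hypothetical predictors plus noise generate the criterion
    X = rng.normal(size=(n, 2))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n)
    return X, y

# Derive weights in the initial (derivation) sample
X1, y1 = sample(100)
design = np.column_stack([np.ones(len(y1)), X1])
w, *_ = np.linalg.lstsq(design, y1, rcond=None)

# Validity coefficient in the derivation sample
pred1 = w[0] + X1 @ w[1:]
r_initial = np.corrcoef(pred1, y1)[0, 1]

# Apply the same weights to a new sample and recalculate validity;
# shrinkage of the coefficient is typically (though not always) observed
X2, y2 = sample(100)
pred2 = w[0] + X2 @ w[1:]
r_cross = np.corrcoef(pred2, y2)[0, 1]
print(f"initial r = {r_initial:.2f}, cross-validated r = {r_cross:.2f}")
```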

Construct-identification validity. Construct validity is the sensitivity of the instrument to pick up minor variations in the concept being measured. Can an instrument (questionnaire) designed to measure anxiety pick up different levels of anxiety, or just its presence or absence?

Any data throwing light on the nature of the trait, and on the conditions affecting its development and manifestation, represent appropriate evidence for this validation. Example: I have designed a program to lower girls' Math phobia. The girls who complete my program should have lower scores on the Math Phobia Measure compared with their scores before the program, and compared with the scores of girls who have not completed the program.

Construct validity methods: correlational validity, factor analysis, convergent and discriminant validity.

Correlational validity involves correlating a new measure with similar previous measures of the same name. Warning! A high correlation may indicate duplication of measures.

Factor analysis (FA) is a multivariate statistical technique used to group multiple variables into a few factors. In doing FA you hope to find clusters of variables that can be identified as new hypothetical factors.
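A principal-factor sketch with simulated data (the variables and loadings are invented): six observed variables generated from two latent factors should yield two dominant eigenvalues of the correlation matrix, i.e. two retained factors under the Kaiser criterion.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Two independent latent factors
f1, f2 = rng.normal(size=(2, n))

# Six observed variables: three load on each factor, plus unique noise
X = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),  # e.g. vocabulary (hypothetical)
    f1 + 0.3 * rng.normal(size=n),  # e.g. analogies
    f1 + 0.3 * rng.normal(size=n),  # e.g. comprehension
    f2 + 0.3 * rng.normal(size=n),  # e.g. addition
    f2 + 0.3 * rng.normal(size=n),  # e.g. multiplication
    f2 + 0.3 * rng.normal(size=n),  # e.g. equations
])

R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues in descending order
n_factors = int(np.sum(eigvals > 1.0))  # Kaiser criterion: retain eigenvalues > 1
print(f"eigenvalues: {np.round(eigvals, 2)}, factors retained: {n_factors}")
```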

Convergent and discriminant validity: the idea is that a test should correlate highly with other similar tests (convergent) and poorly with tests that are very dissimilar (discriminant).

Example: a newly developed test of motor coordination should correlate highly with other tests of motor coordination. It should also have a low correlation with tests that measure attitudes.
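The pattern can be illustrated with simulated scores (all names hypothetical): a new motor-coordination test, an established test of the same construct, and an unrelated attitude scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

coordination = rng.normal(size=n)                   # latent motor-coordination ability
new_test = coordination + 0.4 * rng.normal(size=n)  # newly developed test
old_test = coordination + 0.4 * rng.normal(size=n)  # established similar test
attitude_scale = rng.normal(size=n)                 # unrelated construct

r_convergent = np.corrcoef(new_test, old_test)[0, 1]
r_discriminant = np.corrcoef(new_test, attitude_scale)[0, 1]
print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
```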

Indices and interpretation of validity: the validity coefficient (its magnitude, and factors affecting validity), the coefficient of determination, the standard error of estimation, and regression analysis (prediction).

Validity coefficient. Definition: the correlation coefficient between the criterion variable and the predictor variable(s).

Differential validity refers to differences in the magnitude of the correlation coefficients for different groups of test-takers.

Magnitude of the validity coefficient: treated in the same way as the Pearson correlation coefficient!

Factors affecting validity: nature of the group, sample heterogeneity, the criterion–predictor relationship, validity–reliability proportionality, criterion contamination, and moderator variables.

Nature of the group: consistency of the validity coefficient across subgroups which differ in any characteristic (e.g. age, gender, educational level, etc.).

Sample heterogeneity: a wider range of scores results in a higher validity coefficient; conversely, restricting the range lowers it (the range-restriction phenomenon).
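A simulated illustration (all numbers invented): computing the validity coefficient on a full applicant pool and then only on the top half of predictor scores shows the restriction effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

predictor = rng.normal(size=n)
criterion = predictor + rng.normal(size=n)  # true linear relation plus noise

# Validity coefficient in the full, heterogeneous pool
r_full = np.corrcoef(predictor, criterion)[0, 1]

# Restrict the range: keep only candidates above the predictor median
keep = predictor > np.median(predictor)
r_restricted = np.corrcoef(predictor[keep], criterion[keep])[0, 1]
print(f"full-range r = {r_full:.2f}, restricted-range r = {r_restricted:.2f}")
```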

Criterion–predictor relationship: there must be a linear relationship between predictor and criterion; otherwise the Pearson correlation coefficient would be of no use!

Validity–reliability proportionality: reliability has a limiting influence on validity; we simply cannot validate an unreliable measure!
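This limiting influence is often stated formally via the attenuation inequality (a standard psychometric result, not from the slides): with predictor reliability r_xx and criterion reliability r_yy,

```latex
r_{xy} \;\le\; \sqrt{r_{xx}\, r_{yy}},
\qquad
r_{\text{true}} \;=\; \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
```

so the observed validity coefficient can never exceed the square root of the product of the two reliabilities, and the second expression (the correction for attenuation) estimates what the correlation would be if both measures were perfectly reliable.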

Criterion contamination: get rid of bias by measuring the contaminating influences, then correct for them statistically by use of partial correlation.
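A sketch of the partial-correlation correction with simulated data (the contaminant here is a hypothetical rater-leniency score): the raw predictor–criterion correlation is inflated by the contaminant, and partialling it out removes the inflation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

leniency = rng.normal(size=n)              # measured contaminating influence (hypothetical)
predictor = leniency + rng.normal(size=n)  # both variables pick up the contaminant
criterion = leniency + rng.normal(size=n)

r_raw = np.corrcoef(predictor, criterion)[0, 1]

def residual(v, control):
    # Remove the linear influence of the control variable
    slope, intercept = np.polyfit(control, v, 1)
    return v - (slope * control + intercept)

# Partial correlation: correlate the residuals after controlling for leniency
r_partial = np.corrcoef(residual(predictor, leniency),
                        residual(criterion, leniency))[0, 1]
print(f"raw r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```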

Moderator variables: variables like age, gender and personality characteristics may help to predict performance for particular variables only, so keep them in mind!

Coefficient of determination: indicates the proportion of variance in the criterion variable explained by the predictor. E.g. if r = 0.9, then r² = 0.81, so 81% of the variation in the criterion is accounted for by the predictor.

Standard error of estimation (SE): treated just like the standard deviation. (True and predicted values for the criterion should differ by at most 1.96 SE at the 95% confidence level.)
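The standard error of estimation can be computed from the criterion's standard deviation and the validity coefficient as SE = SD_y · √(1 − r²); the numbers below are hypothetical.

```python
import math

sd_criterion = 10.0  # hypothetical SD of criterion scores
r = 0.8              # hypothetical validity coefficient

# Standard error of estimation: SE = SD_y * sqrt(1 - r^2)
se_est = sd_criterion * math.sqrt(1 - r**2)
band_95 = 1.96 * se_est  # half-width of the 95% prediction band

print(f"SE(est) = {se_est:.1f}, 95% band = predicted value +/- {band_95:.2f}")
```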

Regression analysis is mainly used to predict values of the criterion variable. If r is high, prediction is more accurate. Predicted values are obtained from the line of best fit.

Linear regression involves one criterion variable but may involve one predictor variable (simple regression) or more than one (multiple regression).
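Prediction from the line of best fit can be sketched with least squares; the paired scores below are invented (predictor = test score, criterion = performance rating).

```python
import numpy as np

# Hypothetical paired scores (criterion happens to equal 2*predictor + 1 here)
test_score = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
performance = np.array([21.0, 25.0, 29.0, 33.0, 37.0])

# Simple linear regression: fit criterion = a + b * predictor
b, a = np.polyfit(test_score, performance, 1)

# Predict the criterion for a new candidate with test score 15
predicted = a + b * 15.0
print(f"slope = {b:.2f}, intercept = {a:.2f}, prediction at 15 = {predicted:.1f}")
```

Multiple regression extends the same idea by stacking additional predictor columns into a design matrix and solving with `np.linalg.lstsq`.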

(Figures: simple linear regression; multiple linear regression.)

Reliability and validity: A valid test is always reliable (for a test to be valid, it must be reliable in the first place). A reliable test is not always valid. Validity is more important than reliability. To be useful, a measuring instrument (test, scale) must be both reasonably reliable and valid. Aim for validity first, and then try to make the test more reliable little by little, rather than the other way around.

Reliability and validity, IF/THEN:
- If the test is unreliable, then test validity is undermined.
- If it is reliable but not valid, then the test is not useful.
- If it is unreliable and invalid, then the test is definitely NOT useful!
- If it is reliable and valid, then the test can be used with good results.

Optimising reliability and validity:
- The more questions the better (number of test items).
- Ask questions several times in slightly different ways (homogeneity).
- Get as many people as you can into your program (sample size n).
- Get different kinds of people into your program (sample heterogeneity).
- Ensure a linear relationship between the test and the criterion (Pearson correlation coefficient).

Selecting and creating measures:
- Clearly define the construct(s) that you want to measure.
- Identify existing measures, particularly those with established reliability and validity.
- Determine whether those measures will work for your purpose, and identify any areas where you may need to create a new measure or add new questions.
- Create the additional questions/measures.
- Identify criteria that your measure should correlate with or predict, and develop procedures for assessing those criteria.
