Presentation on theme: "Developing a Hiring System Reliability of Measurement."— Presentation transcript:

1 Developing a Hiring System: Reliability of Measurement

2 Key Measurement Issues
- Measurement is imperfect.
- Reliability: how accurately do our measurements reflect the underlying attributes?
- Validity: how accurate are the inferences we draw from our measurements?
  - Refers to the uses we make of the measurements.

3 What is Reliability?
- The extent to which a measure is free of measurement error.
- Obtained score = True score + Random error + Constant error

4 What is Reliability?
- Reliability coefficient = the percentage of obtained-score variance that is due to true scores.
  - e.g., a performance measure with r_yy = .60 is 60% "accurate" in measuring differences in true performance.
- Different "types" of reliability reflect different sources of measurement error.
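To make the variance framing concrete, here is a minimal Python sketch (not from the slides; the true-score and error spreads are made up) that simulates obtained scores as true score plus random error and recovers the reliability coefficient as the true-score share of observed variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                               # hypothetical number of people measured

true_score = rng.normal(50, 10, n)       # true differences on the attribute
random_error = rng.normal(0, 8, n)       # unsystematic measurement error
obtained = true_score + random_error     # constant error omitted: it shifts all
                                         # scores equally and adds no variance

# Reliability coefficient = true-score variance / obtained-score variance
r_yy = true_score.var() / obtained.var()
print(round(r_yy, 2))                    # about 10**2 / (10**2 + 8**2) = 0.61
```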

5 Types of Reliability
- Test-retest reliability: assesses stability (over time/situations).
- Internal consistency reliability: assesses consistency of the measure's content.
- Parallel forms reliability: assesses equivalence of measures.
  - Inter-rater reliability is a special case.
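As a rough sketch of how two of these are typically estimated (made-up data, not part of the slides): test-retest reliability is the correlation between two administrations of the same measure, and internal consistency is often summarized with Cronbach's alpha:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                       # hypothetical sample size

# Test-retest: correlate scores from two administrations of the same test
time1 = rng.normal(100, 15, n)
time2 = time1 + rng.normal(0, 6, n)           # retest adds some random error
test_retest = np.corrcoef(time1, time2)[0, 1]

# Internal consistency: Cronbach's alpha over k items (rows = people)
person = rng.normal(3.0, 1.0, (n, 1))         # common trait level
items = person + rng.normal(0, 0.5, (n, 10))  # 10 items, each with item noise
k = items.shape[1]
sum_item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)

print(round(test_retest, 2), round(alpha, 2))
```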

6 Why Reliability is Critical
- Accuracy of decisions about individuals.
- A measure's reliability sets an upper limit on its validity:
  - Maximum r_xy = sqrt(r_xx * r_yy)
- Example:
  - Employment test with r_xx = .80
  - Performance ratings with r_yy = .47
  - Maximum r_xy = sqrt(.80 * .47) = .61
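The ceiling formula from this slide as a small Python helper; the second call uses made-up reliabilities just to show how a more reliable criterion raises the ceiling:

```python
from math import sqrt

def max_validity(r_xx: float, r_yy: float) -> float:
    """Upper bound on the predictor-criterion correlation r_xy."""
    return sqrt(r_xx * r_yy)

print(round(max_validity(0.80, 0.47), 2))  # 0.61 -- the slide's example
print(round(max_validity(0.80, 0.90), 2))  # 0.85 -- with a more reliable criterion
```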

7 Developing a Hiring System: Validity of Measurement

8 What is Validity?
- The accuracy of inferences drawn from scores on a measure.
- Example: an employer uses an honesty test to hire employees.
  - The inference is that high scorers will be less likely to steal.
  - Validation confirms this inference.

9 Validity vs. Reliability
- Reliability is a characteristic of the measure.
  - Error in measurement.
  - A measure either is or isn't reliable.
- Validity refers to the uses of the measure.
  - Error in the inferences drawn.
  - A measure may be valid for one purpose but not for another.

10 Validity and Job Relatedness
- Federal regulations require employers to document the job-relatedness of selection procedures that have adverse impact.
- Good practice also dictates that selection decisions be job-related.
- Validation is the typical way of documenting job-relatedness.

11 Methods of Validation
- Empirical: showing a statistical relationship between predictor scores and criterion scores.
  - i.e., showing that high-scoring applicants are better employees.
- Content: showing a logical relationship between predictor content and job content.
  - i.e., showing that the predictor measures the same knowledge or skills that are required on the job.

12 Methods of Validation
- Construct: developing a "theory" of why a predictor is job-relevant.
- Validity generalization: "borrowing" the results of empirical validation studies done on the same job in other organizations.

13 Empirical Validation
- Concurrent criterion-related validation
- Predictive criterion-related validation

14 Concurrent Validation Design
- Time Period 1: test current employees and measure their performance.
- Validity? The relationship between the two, assessed within the single time period.

15 Predictive Validation Design
- Time Period 1: test applicants; hire applicants.
- Time Period 2: obtain criterion measures.
- Validity? The relationship between Time 1 test scores and Time 2 criterion measures (see the sketch below).
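In either design, the validity coefficient is the correlation between predictor scores and the criterion. A minimal sketch with made-up numbers (the scores below are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical data: test scores at Time 1 and performance ratings at Time 2
# for the same ten people (a predictive design; a concurrent design would
# collect both in the same period).
test_scores = np.array([72, 85, 60, 90, 78, 66, 88, 74, 81, 69])
performance = np.array([3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.3, 3.9, 2.9])

# The empirical validity coefficient r_xy
r_xy = np.corrcoef(test_scores, performance)[0, 1]
print(round(r_xy, 2))
```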

16 Empirical Validation: Limitations

17 Content Validation
- The inference being tested is that the predictor samples actual job skills and knowledge.
  - Not that predictor scores predict job performance.
- Avoids the problems of empirical validation because no statistical relationship is tested.
  - Potentially useful for smaller employers.

18 Content Validation: Limitations

19 Construct Validation
- Making a persuasive argument that the hiring tool is job-relevant:
  1. Why the attribute is necessary
     - Job and organizational analysis.
  2. The tool measures the attribute
     - Existing data, usually provided by the developer of the tool.

20 Construct Validation Example
- Validating FOCUS as a measure of attention to detail (AD) for QC inspectors.
- Develop a rationale for the importance of AD.
- Defend FOCUS as a measure of AD:
  - Comparison of FOCUS scores with other AD tests.
  - Comparison of FOCUS with related tests.
  - Comparison of scores for people in jobs requiring high or low levels of AD.
  - Evidence of validity in similar jobs.

21 Construct Validation Example
- Validating an integrity (honesty) test.
- Develop a rationale for the importance of honesty.
- Defend the test as a measure of honesty:
  - Comparison of test scores with other honesty measures (reference checks, polygraphs, other honesty tests).
  - Comparison of test scores with related tests.
  - Comparison of scores for "honest" and "dishonest" people.
  - Evidence of validity in similar jobs.

22 Validity Generalization
- Logic: a test that is valid in one situation should be valid in equivalent situations.
- Fact: validities differ across situations. Why?

23 Validity Generalization
- Two possible explanations why validities differ across situations:
  1. Situations require different attributes, vs.
  2. "Statistical artifacts" -- differences in:
     - Sample sizes
     - Reliability of predictor and criterion measures
     - Criterion contamination/deficiency
     - Restriction of range
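Validity generalization work typically corrects observed validities for some of these artifacts before drawing conclusions. Below is a minimal sketch of two standard psychometric corrections, not something stated on the slides: disattenuation for criterion unreliability and Thorndike's Case II correction for range restriction (all numbers are made up):

```python
from math import sqrt

def correct_for_criterion_unreliability(r_obs: float, r_yy: float) -> float:
    """Disattenuate the observed validity for an unreliable criterion."""
    return r_obs / sqrt(r_yy)

def correct_for_range_restriction(r_obs: float, u: float) -> float:
    """Thorndike Case II; u = SD(applicant pool) / SD(restricted incumbent sample)."""
    return (r_obs * u) / sqrt(1 + r_obs**2 * (u**2 - 1))

r = 0.25                                    # observed validity in one small study
r = correct_for_range_restriction(r, 1.5)   # incumbents were range-restricted
r = correct_for_criterion_unreliability(r, 0.60)
print(round(r, 2))                          # corrected estimate, about 0.47
```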

24 VG Implications
- After correcting for statistical artifacts, validities are larger and more consistent than single studies suggest.
- Validities are generalizable to comparable situations.
- Tests that are valid for the majority are usually valid for minority groups.
- There is at least one valid test for all jobs.
- It is hard to show validity with small Ns.

25 Validation: Summary
- Criterion-related
  - Predictive
  - Concurrent
- Content
- Construct
- Validity generalization
- "Face validity"

