Outline
- Selection in HR
- Selection Process
- Review of Correlation
- Criteria for Selection Methods
- Using Selection Methods
- Utility Analysis
- Taylor-Russell Table
- Decision-Making in Selection
Selection in HR
- Selection is the HR term for hiring.
- Problem: we don't know the applicant's job performance until after we hire the applicant.
- Solution: we need to predict the applicant's job performance.
- Selection methods: the predictors of job performance used to make the selection decision.
  - Examples of selection methods: résumé evaluations, employment interviews, tests of various KSAs, ...
- Flow: Selection Methods (Predictors of Job Performance) → Predicted Job Performance of Each Applicant → Selection Decision (Positive or Negative: Hire or Don't Hire)
Selection in HR (more)
- Flow: Selection Methods (Predictors of Job Performance) → Predicted Job Performance of Each Applicant → Selection Decision (Positive or Negative: Hire or Don't Hire)
- We use the selection methods (which are measures of the applicant's qualifications) to predict the job performance of each applicant and make the selection decision.
- Positive selection decision: hire the applicant if we predict the applicant will have good job performance.
- Negative selection decision: don't hire the applicant if we predict the applicant will have poor job performance.
Selection Process
- Step 1, Measurement: measure each applicant's qualifications using the selection methods.
  - Use "good" selection methods: reliable and valid.
- Step 2, Decision Making: use the qualifications to predict job performance and decide which applicant to hire.
- Step 3, Evaluation: evaluate the selection process.
  - Continuous improvement of management processes.
- Source of figure: Fisher, Schoenfeldt, & Shaw (2006), Figure 7.1, p. 283.
Review of Correlation
- Correlation measures the degree of relationship between two variables.
- Example: What is the relationship (correlation) between the job interview and job performance? Do applicants who do better on the interview also do better on the job?
- A strong correlation means the job interview accurately predicts job performance: use the job interview as a selection method.
- A weak correlation means the job interview does not predict job performance: don't use the job interview as a selection method.
Review of Correlation (more)
- Correlation range: −1.0 to +1.0.
- The sign denotes the direction of the relationship between the two variables:
  - r > 0: positive relationship
  - r = 0: no relationship
  - r < 0: negative relationship
- The magnitude denotes the strength of the relationship:
  - r = 1.0: perfect prediction
- Example: correlation = r = 0.677
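The correlation coefficient discussed here can be computed directly from paired scores. A minimal Python sketch (the interview and performance numbers below are made-up illustrative data, not from the slides):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    dx = [xi - mean_x for xi in x]
    dy = [yi - mean_y for yi in y]
    cov = sum(a * b for a, b in zip(dx, dy))       # co-variation of the two variables
    var_x = sum(a * a for a in dx)
    var_y = sum(b * b for b in dy)
    return cov / (var_x * var_y) ** 0.5            # scaled to the range [-1, +1]

# Hypothetical data: interview ratings and later job performance ratings
interview = [62, 70, 55, 80, 90, 65]
performance = [3, 4, 2, 5, 6, 3]

r = pearson_r(interview, performance)  # close to +1: strong positive relationship
```

Applicants who scored higher on the interview also performed better on the job, so r comes out strongly positive; reversing one list would flip the sign.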
2 Criteria for Selection Methods
1. Reliability: consistency of measurement. Does the measuring tool give us the same measurement every time we measure the same thing?
   - Example: a reliable ruler vs. an unreliable ruler.
   - Example: a reliable programming skills test?
2. Validity: Does the measuring tool measure what we really want to measure?
   - Example: Is a ruler a valid measure of temperature? No.
   - Example: Is the programming skills test a valid predictor of job performance for computer programmers? Maybe.
Criterion 1: Reliability
- Reliability: consistency of measurement.
- Three methods of determining reliability:
  1. Test-retest reliability: measure the same thing twice.
     - Example: give the programming skills test to people (applicants or employees) twice and correlate their scores.
  2. Inter-rater reliability: see whether two raters agree in their ratings.
     - Example: have two interviewers witness the job interviews, have each interviewer rate each applicant, then correlate the ratings.
  3. Internal consistency reliability: "coefficient alpha."
     - Example: the programming skills test.
Criterion 1: Reliability (more)
- Example of test-retest reliability:
  - Delta Intelligence Test (DIT), with scores ranging from 0 to 50.
  - Give current employees the DIT twice: DIT1 and DIT2.
  - Put the DIT scores into an Excel spreadsheet or SPSS database.
  - Reliability coefficient: correlate DIT1 and DIT2.
Criterion 2: Validity
- Validity: Are we measuring what we want to measure?
- Three methods of determining validity:
  1. Content validity: judge whether the content of the selection method is a good match with the content of the job.
     - Example: Are the questions on the programming skills test asking about programming skills that are used on the job?
     - Example: The questions on the test are mostly about writing programs in COBOL, while the job uses C++ and Java. Based on the mismatch between the content of the test and the content of the job, we might judge the programming skills test to have poor validity for our purposes.
Criterion 2: Validity (more)
- Three methods of determining validity (more):
  2. Criterion-related concurrent validity:
     - Measure two things at the same time (concurrently):
       - The selection method (predictor of job performance). Example: the programming skills test.
       - Job performance. Example: job performance ratings of programmers.
     - Use our current employees as our sample (we can't observe the concurrent job performance of applicants). Example: use our current programmers and give them the test.
     - Correlate selection method scores with job performance ratings.
Criterion 2: Validity (more)
- Three methods of determining validity (more):
  3. Criterion-related predictive validity:
     - Measure two things at different times:
       - Measure now: the selection method (predictor of job performance). Example: the programming skills test.
       - Measure in the future: job performance. Example: job performance ratings of programmers.
     - Use the applicants we hire as our sample. Example: give applicants the test and keep the scores on file.
     - Correlate selection method scores with job performance ratings after the new hires have been on the job for a while.
Criterion 2: Validity (more)
- Example of criterion-related concurrent validity:
  - Delta Intelligence Test.
  - Give current employees the DIT.
  - Put the DIT scores (DIT2) into an Excel spreadsheet or SPSS database along with job performance ratings (Perf).
  - Validity coefficient: correlate DIT2 and Perf.
Using Selection Methods
- Terminology:
  - Selection cut-off score: the minimum passing score on a selection method.
  - Job performance cut-off score: the minimum acceptable level of job performance.
  - Selection ratio (SR) = % of applicants hired = # hired / # applicants.
  - Base rate of success (BRS) = % of applicants who would have acceptable job performance if they were hired = # with acceptable job performance / # applicants.
Using Selection Methods (more)
- Terminology (more):
  - True positive: the applicant is hired and turns out to have acceptable job performance. Correct decision: we hired a good employee.
  - False positive: the applicant is hired and turns out to have unacceptable job performance. Mistake: we hired a poorly performing employee (a dud).
  - False negative: the applicant is not hired, but would have had acceptable job performance if hired. Mistake: we missed hiring a good person.
  - True negative: the applicant is not hired, and had they been hired, would have had unacceptable job performance. Correct decision: we avoided hiring a dud.
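These four outcomes, plus SR and BRS, can be tallied from (predictor score, job performance) pairs. A sketch under assumed hypothetical data and cut-offs (the function name and numbers are illustrative, not from the slides):

```python
def tally(applicants, selection_cutoff, performance_cutoff):
    """Count the four selection outcomes from (score, performance) pairs.

    An applicant is hired if score >= selection_cutoff; performance is
    acceptable if it is >= performance_cutoff.
    """
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for score, perf in applicants:
        hired = score >= selection_cutoff
        good = perf >= performance_cutoff
        if hired and good:
            counts["TP"] += 1          # hired a good employee
        elif hired and not good:
            counts["FP"] += 1          # hired a dud
        elif not hired and good:
            counts["FN"] += 1          # missed a good person
        else:
            counts["TN"] += 1          # correctly avoided a dud
    n = len(applicants)
    sr = (counts["TP"] + counts["FP"]) / n    # selection ratio: fraction hired
    brs = (counts["TP"] + counts["FN"]) / n   # base rate: fraction who could do the job
    return counts, sr, brs

# Hypothetical (test score, performance rating) pairs
pool = [(25, 5), (18, 4), (30, 2), (15, 1), (28, 6), (21, 3)]
counts, sr, brs = tally(pool, selection_cutoff=22, performance_cutoff=3)
```

Raising `selection_cutoff` on the same data moves applicants from the hired column to the rejected column, which is exactly the false-positive vs. false-negative tradeoff discussed below.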
Using Selection Methods (more)
- Example: criterion-related concurrent validity.
- Employee performance rating scale: 7 = truly exceptional, 6 = excellent, 5 = very good, 4 = good, 3 = satisfactory, 2 = unsatisfactory, 1 = very unsatisfactory.
- Validity coefficient = r =
- Performance cut-off = 3; selection cut-off = 22.
- Quadrant counts from the scatterplot: false negatives = 29, true positives = 64, true negatives = 4, false positives = 3.
- SR = 67%
Using Selection Methods (more)
- Costs of making a false positive selection error (the cost of hiring a "dud"):
  - Recruiting and selection costs
  - Orientation and training costs
  - Performance appraisal costs
  - Costs associated with poor job performance
  - Termination costs
  - Costs of hiring a replacement (repeat recruiting & selection, orientation & training, etc.)
Using Selection Methods (more)
- Costs of making a false negative selection error (the cost of failing to hire an applicant who would have had good job performance):
  - Recruiting and selection costs
  - Opportunity costs: the cost of letting good talent slip through our fingers
  - Costs associated with needing a bigger pool of applicants, which increases recruiting and selection costs
Using Selection Methods (more)
- What can we do to manage the costs associated with both types of selection mistakes?
- Change our hiring standards:
  - Raise our hiring standards: raise the selection cut-off score. Example: raise the minimum passing score on the DIT.
  - Lower our hiring standards: lower the selection cut-off score.
- Change which selection methods we use:
  - Switch from a less valid selection method to a more valid selection method.
  - Switch from a more valid selection method to a less valid selection method.
Using Selection Methods (more)
- Example: raise the selection cut-off score from 22 to 26.
  - False positives (we hire fewer duds): 3 → 0 (costs down)
  - False negatives (we reject more good people): 29 → 64 (costs up)
  - SR (we hire a smaller percentage of the applicant pool, so we'll need a bigger pool): 67% → 29% (hiring costs up)
- Some costs go down, some go up.
- New quadrant counts: false negatives = 64, true positives = 29, true negatives = 7, false positives = 0; SR = 29%.
Using Selection Methods (more)
- Example: switch to a selection method that has higher validity than the DIT.
  - False positives: we hire fewer duds (costs down)
  - False negatives: we reject fewer good people (costs down)
- But is the more valid selection method more expensive?
- If a selection method has perfect validity (r = 1.0), then there are no selection errors.
- For comparison, the quadrant counts with the DIT: false negatives = 29, true positives = 64, true negatives = 4, false positives = 3; SR = 67%.
Utility Analysis
- We see that there are tradeoffs to be analyzed.
- If we raise our hiring standards (raise the selection cut-off score):
  - Fewer false positives: we hire fewer duds (costs go down)
  - More false negatives: more good talent slips away (costs go up)
  - The selection ratio goes down: we reject more applicants and so need a bigger pool of applicants (costs go up)
- If we switch to a more valid selection method:
  - Fewer false negatives: less good talent slips away (costs go down)
  - The more valid method might be more expensive (costs go up)
Utility Analysis (more)
- Goal of utility analysis: determine the value of the selection system, or of changes in the selection system.
- Utility analysis is a method of cost-benefit analysis; "value" is measured in money terms.
- How does the current selection system add value to the organization? How would changes to the selection system add value to the organization?
- Example: evaluate the effect of changing our hiring standards (e.g., raising the selection cut-off score).
- Example: evaluate the effect of switching selection methods (e.g., switching to a more valid selection method).
Utility Analysis (more)
- A complete utility analysis evaluates both the benefits and the costs (in money terms) of changing a selection procedure.
  - Mathematical formula: we'll leave the formula for HR majors and minors when they take MGMT 441 (Staffing).
- A partial utility analysis uses the Taylor-Russell table.
  - The table tells us what percentage of our hires will turn out to have acceptable job performance.
  - It doesn't use money as the unit of measurement and doesn't directly consider the costs.
Taylor-Russell Table
- The table tells us what percentage of our hires will turn out to have acceptable job performance.
- If the table gives an answer of ".67" in a particular situation, it means that 67% of the people hired in that situation will turn out to have acceptable job performance.
- This implies that 100% − 67% = 33% of the people we hire in that situation will turn out to have unacceptable job performance.
- 67% good hires (true positives); 33% poor hires (false positives).
Taylor-Russell Table (more)
- To use the table, we need to estimate three things:
  1. Base rate of success (BRS): What percentage of our applicant pool could do the job well?
     - Estimate how good a job of recruiting we've done: do we have a good pool of applicants, a bad pool, or something in between?
     - If everyone in the pool could do the job well (BRS = 100%), then we could randomly hire anyone in the pool.
  2. Level of validity (r): What's the validity coefficient for our selection method?
  3. Selection ratio (SR): What percentage of the applicant pool are we going to hire?
Taylor-Russell Table (more)
- BRS determines the top, middle, or bottom part of the table.
- r determines which row.
- SR determines which column.
- Source of table: Fisher, Schoenfeldt, & Shaw (2006), Table 7.2, p. 307.
Taylor-Russell Table (more)
- Example 1: BRS = 50%, r = 0.25, SR = 10% → answer = 67%
- Example 2: SR = 5% (other values as in Example 1) → answer = 70%
- Example 3: r = 0.50 (other values as in Example 1) → answer = 84%
- Example 4: answer = 88%
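A table lookup like this can be mimicked with a small mapping. The sketch below stores only the cells quoted in the examples on this slide (all at BRS = 50%); a real implementation would hold the complete published table, and the function name is an assumption for illustration:

```python
# Excerpt of the Taylor-Russell table for BRS = 50%,
# keyed by (validity r, selection ratio SR).
# Only the cells cited in the slide's examples are included here.
TAYLOR_RUSSELL_BRS50 = {
    (0.25, 0.10): 0.67,
    (0.25, 0.05): 0.70,
    (0.50, 0.10): 0.84,
}

def pct_successful_hires(r, sr):
    """Expected proportion of hires with acceptable performance (BRS = 50%)."""
    return TAYLOR_RUSSELL_BRS50[(r, sr)]
```

Reading across the entries shows the two levers at work: holding validity fixed, a stricter selection ratio raises the success rate (0.67 → 0.70), and holding the selection ratio fixed, higher validity raises it further (0.67 → 0.84).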
Decision-Making in Selection
- Most organizations use more than one selection method to make the selection decision.
- How do you combine information about an applicant from multiple selection methods to make the selection decision?
- Example: an organization uses three selection methods to hire a computer programmer:
  - Résumé evaluation: résumés are rated on a 100-point scale.
  - Programming skills test: test scores go up to 100%.
  - Interview: interviews are rated by interviewers on a 100-point scale.
Decision-Making in Selection (more)
- Additive (compensatory) model:
  - All applicants go through all selection methods.
  - Add together the scores on the selection methods to get a total score for each applicant. Example: total = résumé score + test score + interview score.
  - Establish a selection cut-off score for the total score.
  - Hire the applicant if the applicant's total score exceeds the cut-off.
  - Note that a high score on one selection method can compensate for a low score on another selection method.
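A sketch of the additive rule, using the three 100-point methods from the programmer example (the cut-off of 210 is a made-up illustrative value):

```python
def additive_hire(resume, test, interview, total_cutoff=210):
    """Compensatory model: hire if the summed score clears a single total cut-off."""
    return (resume + test + interview) >= total_cutoff

# A weak interview (55) is compensated by a strong resume (90) and test (85):
# total = 230, which clears the 210 cut-off.
decision = additive_hire(90, 85, 55)
```

The compensatory property is visible in the example: no individual score has to pass anything, only the sum does.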
Decision-Making in Selection (more)
- Multiple cut-off method:
  - All applicants go through all selection methods.
  - Establish a selection cut-off score for each selection method.
  - Hire the applicant if their score on every selection method exceeds the corresponding cut-off score. Example: pass the résumé evaluation and pass the test and pass the interview.
  - Wasteful? If an applicant fails the résumé evaluation, why waste time and money on the test and the interview?
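The multiple cut-off rule is a conjunction: every method must clear its own bar. A sketch with illustrative cut-off values (the per-method cut-offs below are assumptions, not from the slides):

```python
def multiple_cutoff_hire(scores, cutoffs):
    """Non-compensatory model: every method's score must clear its own cut-off."""
    return all(scores[method] >= cutoffs[method] for method in cutoffs)

cutoffs = {"resume": 60, "test": 70, "interview": 60}  # illustrative values

# Same applicant as the additive example: strong resume and test,
# but the weak interview (55) now causes rejection.
decision = multiple_cutoff_hire({"resume": 90, "test": 85, "interview": 55}, cutoffs)
```

Contrast with the additive model: the identical scores that were hired under the compensatory rule are rejected here, because a high test score cannot make up for a failed interview.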
Decision-Making in Selection (more)
- Multiple hurdle method:
  - Put the selection methods into a sequence; each selection method is a "hurdle."
  - Establish a selection cut-off score for each hurdle.
  - Only applicants who pass the first hurdle go on to the second hurdle; only applicants who then pass the second hurdle go on to the third hurdle; and so on.
  - Use cost to determine the sequence: put the cheapest hurdle first, etc.
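The multiple hurdle method is the same conjunction evaluated sequentially, stopping at the first failure so later (more expensive) methods are never administered. A sketch with the same illustrative cut-offs as above, ordered cheapest first:

```python
def multiple_hurdle_hire(applicant_scores, hurdles):
    """Sequential hurdles: reject at the first failed cut-off.

    `hurdles` is an ordered list of (method, cutoff) pairs, cheapest first.
    Returns (hired, list of methods actually administered).
    """
    administered = []
    for method, cutoff in hurdles:
        administered.append(method)
        if applicant_scores[method] < cutoff:
            return False, administered  # stop early; skip remaining hurdles
    return True, administered

hurdles = [("resume", 60), ("test", 70), ("interview", 60)]  # cheapest first (illustrative)

# This applicant fails the resume hurdle, so the test and
# interview are never run, saving their cost.
hired, used = multiple_hurdle_hire({"resume": 50, "test": 95, "interview": 90}, hurdles)
```

The returned `used` list makes the cost saving explicit: the rejected applicant consumed only the résumé evaluation, whereas the multiple cut-off method would have administered all three.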
Outline (review)
- Selection in HR
- Selection Process
- Review of Correlation
- Criteria for Selection Methods
- Using Selection Methods
- Utility Analysis
- Taylor-Russell Table
- Decision-Making in Selection