1 Applied Psychology in Human Resource Management, Seventh Edition, Cascio & Aguinis. PowerPoint slides developed by Ms. Elizabeth Freeman, University of South Carolina Upstate. Copyright © 2011 Pearson Education, Inc., publishing as Prentice Hall.

2 Chapter 14 Decision Making for Selection

3 Why do we care about selection decisions?

4 Selection decisions result in employment, occupations, vocations, careers, and income.

5 Selection decisions are critical
1. when applicants outnumber available jobs
2. when organizational success depends on who fills those jobs

6 Classical validity assessments of selection decisions focus on
1. the accuracy of decisions, or
2. prediction efficiency

7 Classical validity assessment statistics include
1. simple regression, or
2. multiple regression

8 Weaknesses of these statistics
Simple regression may not consider all relevant variables
Multiple regression may allow strong variables to offset weak variables

9 Some placement decisions cannot tolerate such weaknesses in validity
1. pilots
2. surgeons
3. judges

10 Decision theory addresses these weaknesses
1. Recognizes that predictions are imperfect
2. Accounts for the importance of outcomes & consequences
a. the individuals placed
b. organizational survival
c. payoffs – costs & benefits

11 Selection decisions exist due to imbalances between the numbers of people applying for jobs and the numbers of jobs available to be filled

12 Personnel selection involves decision makers. Decision makers must predict future job behavior, which requires measurement (behavioral data collection) and subsequent forecasts (predictions).

13 Traditional selection focused on technical psychometrics, or classical validity studies.

14 Decision-making theory focuses on outcomes & consequences
to the individuals selected
to organizations that depend on individual performance
in order to optimize decision utility (costs & benefits)

15 Selection decisions try to capitalize on individual differences. Goal = select those who possess the greatest amounts of the traits that predict job success.

16 Classic Decision-Making (Validation) Study
Conduct a job analysis
Assess the relationship between predictor and criterion
Repeat the process until validity is established

17 Validation of Classical Selection Decisions: assumes that linear models (linear combinations of predictors) are appropriate for job-success prediction

18 Efficiency of Linear Models
1. Linear decision-making models are considered robust.
2. In this sense, robust means they predict well even when their weights are only approximately correct.
3. Linear models work with
a. least-squares regression weights
b. subjective weights for variables
c. unit weights for variables
(A comparison of regression and unit weights is sketched below.)
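A minimal sketch (not from the text) of the robustness point above: on simulated data with a small calibration sample, a unit-weighted composite cross-validates about as well as least-squares regression weights. The sample sizes, number of predictors, and "true" weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, true_w=(0.5, 0.3, 0.2)):
    """Standardized predictor scores plus a criterion built from them with noise."""
    w = np.asarray(true_w)
    x = rng.standard_normal((n, len(w)))
    y = x @ w + rng.standard_normal(n)          # criterion = weighted predictors + error
    return x, y

x_cal, y_cal = simulate(40)        # small calibration sample (weights estimated here)
x_new, y_new = simulate(5000)      # large holdout sample (new applicants)

beta, *_ = np.linalg.lstsq(x_cal, y_cal, rcond=None)   # least-squares regression weights
unit = np.ones(x_cal.shape[1])                          # unit weights

def cross_validity(weights):
    """Correlation between the weighted composite and the criterion in the holdout sample."""
    return np.corrcoef(x_new @ weights, y_new)[0, 1]

print("regression-weighted composite r:", round(cross_validity(beta), 3))
print("unit-weighted composite r:      ", round(cross_validity(unit), 3))
```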

19 About Unit Weights – the Power of 1
Appropriate when
scaling items
populations change over time
predictors are combined into composites

20 About Unit Weights – the Power of 1
Optimal when
applied to new samples
the sample is smaller than the critical sample size, e.g.,
a. N = 40 with 2 predictors
b. N = 105 with 6 predictors
c. N = 194 with 10 predictors

21 About Unit Weights – the Power of 1
Other strengths
do not consume degrees of freedom
are estimated without error
cannot reverse the true relative weights of the variables

22 Suppressor Variables: unit weighting is simple, but what about suppressor or moderator variables?

23 Suppressor variables
are not related to the criterion
may be related to the predictors
act like moderators
are understood to exist, though their true effects are not easy to demonstrate
are believed to have modest effects

24 Remember the prediction goals: identify predictor variables that
explain maximum variance in the criterion
have little or no relationship to the other predictors
have a unique relationship to the criterion

25 Can you eliminate the personal aspects of selection decisions?

26 You cannot remove the personal element. You can quantify the process to increase selection-decision quality. You can combine decision data strategically.

27 Data-Combination Strategies
A. Data measurement
1. Mechanical
2. Judgmental
B. Data collection
1. Mechanical
2. Judgmental

28 Data Measurement
1. Mechanical (statistical)
a. individual measurements: ability tests, aptitude assessments, knowledge inventories
b. combine scores, predict outcomes
2. Judgmental (clinical)
a. includes subjective assessments based on impressions
b. combine scores, predict outcomes

29 Data Collection
1. Pure mechanical (statistical) – data are collected and combined by equations
a. scorable application blanks
b. biographical information blanks
c. test batteries

30 Data Collection
2. Pure judgmental (clinical) – data are collected and combined judgmentally
a. open-ended interview ratings
b. subjective impression ratings

31 Data Collection
3. Modified mechanical strategies
a. Behavior (trait) rating – interview information summarized on a standard form
b. Profile interpretation – results of objective inventories are interpreted without knowing or meeting the candidate
i. California Psychological Inventory
ii. Myers-Briggs Type Indicator

32 Data Collection
4. Clinical composite strategy – the most common
a. data collected both objectively & subjectively
b. data combined subjectively

33 Data Collection
5. Mechanical composite strategy
a. data collected both objectively & subjectively
b. data combined mechanically, for example, by multiple regression

34 With all of these data-collection and data-combination options, which is best?

35 Consider:
1. The accuracy of decisions may depend on accurate weightings
2. Accurate weightings are not possible subjectively (judgmentally)
3. Mechanical methods can keep incorporating new information, whereas humans are limited in how much information they can absorb

36 Result?
a. Collect data objectively
b. Collect data subjectively
c. Make final decisions on mechanically combined data

37 Are there alternative selection-decision prediction models?

38 Yes, at least three:
multiple regression
multiple cutoffs
multiple hurdles

39 Multiple-Regression Model
1. Assumptions
a. linearity
b. trait additivity
c. compensatory interaction among predictors
2. Larger samples are better
3. Strengths – can handle
a. nominal data
b. nonlinear relationships
c. linear & nonlinear interactions
d. multiple jobs with the same or different predictors
(A sketch of the compensatory property follows this slide.)
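A minimal sketch of the compensatory property, using hypothetical regression weights and standardized scores (none of the numbers come from the text): a strong score on one predictor can offset a weak score on another, so two quite different candidates receive the same predicted criterion score.

```python
# Hypothetical weights for two standardized predictors
# (e.g., a cognitive ability test and a structured interview).
INTERCEPT, B_ABILITY, B_INTERVIEW = 0.0, 0.4, 0.4

def predicted_performance(z_ability: float, z_interview: float) -> float:
    """Compensatory linear composite: y_hat = b0 + b1*x1 + b2*x2."""
    return INTERCEPT + B_ABILITY * z_ability + B_INTERVIEW * z_interview

# Candidate A: strong on ability, weak in the interview.
# Candidate B: average on both. Their predicted scores are identical,
# which is what 'compensatory interaction' means in practice.
print(predicted_performance(1.5, -0.5))   # 0.4
print(predicted_performance(0.5,  0.5))   # 0.4
```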

40 Multiple-Cutoff Approach
1. Assumptions
a. minimal proficiency is required on one or more variables
b. predictor-criterion relationships may be curvilinear
2. Setting a cutoff: a single cutoff can be set with the Angoff method or an expectancy chart
(A screening sketch follows this slide.)
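By contrast with the compensatory model, a multiple-cutoff screen is noncompensatory: an applicant must reach the minimum on every predictor. A minimal sketch with hypothetical predictors and cutoff values:

```python
# Hypothetical minimum scores for each predictor (illustrative values only).
CUTOFFS = {"ability_test": 24, "work_sample": 70, "vision_exam": 20}

def passes_multiple_cutoffs(scores: dict) -> bool:
    """Noncompensatory rule: falling below any one cutoff screens the applicant out."""
    return all(scores[name] >= minimum for name, minimum in CUTOFFS.items())

applicant = {"ability_test": 31, "work_sample": 68, "vision_exam": 25}
print(passes_multiple_cutoffs(applicant))   # False: work_sample is below its cutoff
```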

41 Multiple-Cutoff Approach
Cutoff considerations
1. the number of openings per position
2. the number of applicants
3. the expected proportion of successful applicants, given the historical performance of incumbents
4. the typical hiring rate

42 Multiple-Cutoff Approach
Guidelines
1. Decide whether cutoff scores are even sensible for the situation
2. Recognize that there may not be a single best cutoff
3. Begin with a job analysis
4. Consider the validity & job-relatedness of all assessments
5. Relate minimum cutoff scores to acceptable job performance
6. Use an adequate number of judges
7. Consider statistical & legal issues
8. Set cutoffs high enough to meet minimum performance standards

43 Multiple Cutoffs by the Angoff Method
1. Multiple judges rate each item with a barely (minimally) competent person in mind
2. Judges' ratings for each item are averaged
3. The item averages are summed to give the minimally acceptable (cutoff) score
Advantages
easy to administer
as reliable as other judgmental methods
intuitive appeal, since expert judges form the decisions
(A worked sketch follows this slide.)
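A worked sketch of the Angoff computation with made-up ratings: each judge estimates, item by item, the credit a barely competent person would earn; ratings are averaged across judges per item, and the item averages are summed to give the cutoff score.

```python
import numpy as np

# Rows = judges, columns = items. Each entry is a judge's estimate of the
# proportion of credit a minimally competent person would earn on that item.
# All values are hypothetical.
ratings = np.array([
    [0.6, 0.8, 0.5, 0.9],   # judge 1
    [0.7, 0.7, 0.4, 0.8],   # judge 2
    [0.5, 0.9, 0.6, 0.9],   # judge 3
])

item_means = ratings.mean(axis=0)   # average rating per item across judges
cutoff = item_means.sum()           # minimally acceptable total score
print("item averages:", item_means.round(2), "cutoff =", round(cutoff, 2))
```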

44 Multiple Cutoffs by Expectancy Charts
Relate past performance data to predictor scores
Analyze the number of successful employees expected at given selection ratios
Use raw hiring data to determine
the number of applicants
the number hired
the tenure of hires
the performance of hires
(A sketch of a simple expectancy chart follows this slide.)
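A sketch of a simple individual expectancy chart built from hypothetical historical data: past hires are grouped into predictor-score bands and the percentage rated successful is computed for each band, which is then used to judge where a cutoff makes sense.

```python
# Hypothetical records of past hires: (predictor score, was the hire successful?).
history = [(34, False), (52, True), (47, False), (61, True), (28, False),
           (73, True), (55, True), (40, False), (66, True), (81, True)]

bands = [(0, 40), (40, 60), (60, 101)]   # illustrative score bands

for low, high in bands:
    outcomes = [success for score, success in history if low <= score < high]
    pct = 100 * sum(outcomes) / len(outcomes) if outcomes else float("nan")
    print(f"scores {low:>2}-{high - 1:>3}: {pct:3.0f}% successful (n={len(outcomes)})")
```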

45 Multiple-Hurdle Approach
1. May incorporate multiple-regression and multiple-cutoff decisions at each stage
2. Adds the perspective of time: candidates may be accepted temporarily before being accepted permanently
3. Appropriate when training is long, complex, and expensive, e.g.,
air traffic controllers
surgeons
financial specialists
tenure-track professionals
(A sequential-hurdles sketch follows this slide.)
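A sketch of sequential hurdles with hypothetical stages and minimums: only applicants who clear hurdle k move on to hurdle k+1, so the costlier later stages are administered to progressively fewer people.

```python
# Each hurdle is (name, minimum score), ordered from cheapest to costliest
# to administer. Names and minimums are illustrative.
HURDLES = [("screening_test", 50), ("structured_interview", 3.5), ("flight_simulator", 80)]

def hurdles_cleared(scores: dict) -> list:
    """Return the hurdles cleared, stopping at the first failure."""
    cleared = []
    for name, minimum in HURDLES:
        if scores[name] < minimum:
            break                      # later hurdles are never administered
        cleared.append(name)
    return cleared

print(hurdles_cleared({"screening_test": 62, "structured_interview": 3.0, "flight_simulator": 90}))
# ['screening_test']  -> rejected at the interview stage
```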

46 From Classical Validity to Decision Theory
Classical validity – seeks to explain the highest proportion of criterion variance
Decision theory – emphasizes selection ratios, base rates, and the organizational impact of HR decisions

47 Decision-Theory Models
Selection ratio (SR) = hires ÷ applicants; its value may be influenced by hiring quotas
1. When the SR is 1:1, essentially every applicant is hired
2. As the SR falls well below 1, very few applicants are hired
3. On the surface a low SR supports selectivity, but it increases recruiting expenses
4. Flexible SRs allow satisfactory performance to be achieved
5. The challenge is the statistical logic behind these trade-offs

48 Decision-Theory Models
Selection ratio – hires to applicants
For example, flexible SRs should decrease erroneous acceptances (hiring candidates who fail) and decrease erroneous rejections (not hiring candidates who would have succeeded)

49 Decision-Theory Models
Selection ratio – hires to applicants
With flexible SRs, most organizations concentrate on erroneous acceptances and ignore the consequences of erroneous rejections

50 Decision-Theory Models
To understand the impact of erroneous acceptances and erroneous rejections, you must understand SRs and base rates

51 Decision-Theory Models
Base rate (BR) – the proportion of hires who turn out to be successful
1. If the BR is high (the job is easy to perform), new measures may not matter
2. If the BR is low (the job is difficult to perform), measures may be critical
3. Predictors should be added based on their ability to improve prediction
(The sketch below shows how the SR, the BR, and the four decision outcomes relate.)
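A minimal sketch with hypothetical follow-up counts showing how the selection ratio, the base rate, and the success ratio fall out of a simple cross-classification of hiring decisions against later success.

```python
# Hypothetical follow-up data on 200 applicants: the hiring decision at some
# predictor cutoff, crossed with whether the person was (or would have been)
# a successful performer.
correct_accepts   = 45    # accepted and successful
erroneous_accepts = 15    # accepted but unsuccessful
erroneous_rejects = 35    # rejected, but would have succeeded
correct_rejects   = 105   # rejected, and would have failed

n_applicants = correct_accepts + erroneous_accepts + erroneous_rejects + correct_rejects
n_hired = correct_accepts + erroneous_accepts

selection_ratio = n_hired / n_applicants                                 # hires / applicants
base_rate = (correct_accepts + erroneous_rejects) / n_applicants         # potential successes
success_ratio = correct_accepts / n_hired                                # successes among hires

print(f"SR = {selection_ratio:.2f}, BR = {base_rate:.2f}, success ratio = {success_ratio:.2f}")
```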

52 Decision-Theory Models: Utility Considerations
Erroneous acceptances are costly
a. risk to life
b. threat of lawsuits
c. injury to existing employees
Erroneous rejections are hard to quantify beyond out-of-pocket expenses
a. administrative expenses
b. legal defense

53 Decision-Theory Models: Utility Considerations
Potential decision-theory weaknesses
Some people may be treated unjustly (those falling just below a cutoff)
To minimize this, use relative (comparative) decisions and remember that we predict attributes, not the measurements themselves

54 How do you determine utility or costs to an organization?

55 Decision-Theory Utility Models (the language of business)
The Taylor-Russell model (TRM) combines
1. classical validity,
2. the selection ratio (openings to applicants), and
3. the base rate (the proportion of successful performers)
to document the resulting success ratio, which is
a. sensitive to contextual issues
b. sensitive to organizational issues
(A simulation of the Taylor-Russell logic follows this slide.)
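The Taylor-Russell tables report the expected success ratio for given combinations of validity, SR, and BR. The sketch below reproduces that logic by simulation rather than by table lookup; the parameter values are illustrative, not entries from the published tables.

```python
import numpy as np

rng = np.random.default_rng(42)

def success_ratio(validity: float, sr: float, br: float, n: int = 200_000) -> float:
    """Estimate P(successful | selected) for a bivariate-normal predictor and criterion."""
    cov = [[1.0, validity], [validity, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    x_cut = np.quantile(x, 1 - sr)    # hire the top `sr` proportion on the predictor
    y_cut = np.quantile(y, 1 - br)    # the top `br` proportion on the criterion is 'successful'
    selected = x >= x_cut
    return float(np.mean(y[selected] >= y_cut))

# With validity .40, SR .30, and BR .40, the success ratio among those hired
# rises well above the .40 base rate.
print(round(success_ratio(0.40, 0.30, 0.40), 2))
```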

56 Decision-Theory Utility Models (the language of business)
The Naylor-Shine model – more broadly applicable than the TRM
1. assumes a linear relationship between validity and utility
2. defines utility as the increase in average criterion score among those selected
3. assumes the new predictor is added to an existing selection battery
4. does not require classifying employees as successful or unsuccessful
5. may be used when performance cannot be expressed in dollars (e.g., predicting turnover)

57 Decision-Theory Utility Models (the language of business)
The Naylor-Shine model answers questions such as
a. What gain in average performance can be expected?
b. What will the mean criterion score of those selected be?
c. What SR should be used?
(A computational sketch follows this slide.)
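A computational sketch of the core Naylor-Shine quantity under the usual normality and linearity assumptions: the mean standardized criterion score of those selected equals the validity coefficient times the normal-curve ordinate at the predictor cutoff, divided by the selection ratio. The validity and SR values below are illustrative.

```python
from statistics import NormalDist

def mean_criterion_z_of_selected(validity: float, sr: float) -> float:
    """Naylor-Shine index for top-down selection: z_y = r_xy * ordinate(x_cut) / SR."""
    std_normal = NormalDist()
    x_cut = std_normal.inv_cdf(1 - sr)   # predictor cutoff that yields this selection ratio
    ordinate = std_normal.pdf(x_cut)     # height of the standard normal curve at the cutoff
    return validity * ordinate / sr

# With validity .40 and SR .30, those selected average roughly 0.46 SD above
# the applicant mean on the criterion.
print(round(mean_criterion_z_of_selected(0.40, 0.30), 2))
```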

58 Decision-Theory Utility Models (the language of business)
The Brogden-Cronbach-Gleser (BCG) model
1. assumes linearity only between test scores & actual performance
2. assumes normal distributions of test scores & performance
3. shows that even low-validity selection can pay off when the standard deviation of performance (in dollars) is high

59 Decision-Theory Utility Models (the language of business)
The BCG model also
1. separates recruitment & selection costs
2. allows alternative methods for estimating SDy (the dollar standard deviation of performance)
3. integrates selection with budgets
(A sketch of the basic BCG utility computation follows this slide.)
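A sketch of the basic Brogden-Cronbach-Gleser computation: the per-period gain equals the number selected times validity times SDy times the mean standardized predictor score of those selected, minus the cost of assessing all applicants. All dollar figures and parameter values here are made up for illustration.

```python
from statistics import NormalDist

def bcg_utility(n_selected: int, n_applicants: int, validity: float,
                sd_y_dollars: float, cost_per_applicant: float) -> float:
    """One-period utility gain, in dollars, from top-down selection on the predictor."""
    sr = n_selected / n_applicants
    std_normal = NormalDist()
    # Mean standardized predictor score of those selected, assuming normal scores.
    mean_zx = std_normal.pdf(std_normal.inv_cdf(1 - sr)) / sr
    gain = n_selected * validity * sd_y_dollars * mean_zx
    costs = n_applicants * cost_per_applicant
    return gain - costs

# Illustrative inputs: hire 10 of 100 applicants, validity .40,
# SDy = $12,000, and $300 to assess each applicant.
print(round(bcg_utility(10, 100, 0.40, 12_000, 300)))
```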

60 Decision-Theory Utility Models (the language of business)
BCG model weaknesses
does not consider the time value of money
ignores the concept of risk
ignores the effect of taxation on payoffs
(A generic present-value adjustment is sketched below.)
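One common response to the first weakness is to discount multi-year utility gains to present value. The sketch below is a generic net-present-value adjustment, not the textbook's specific formulation, and every number in it is illustrative.

```python
def discounted_utility(annual_gain: float, years: int, discount_rate: float,
                       upfront_cost: float) -> float:
    """Net present value of a constant annual utility gain over a fixed horizon."""
    pv_gains = sum(annual_gain / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv_gains - upfront_cost

# E.g., an $84,000 gross annual gain over 5 years at a 10% discount rate,
# against a $30,000 upfront selection cost.
print(round(discounted_utility(84_000, 5, 0.10, 30_000)))
```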

61 Decision-Theory Utility Models (the language of business)
BCG model consideration: selection costs are long-term investments, not short-term expenses
a. developmental in scope
b. productivity increases
c. labor-cost reductions
d. reductions in the total number of employees required
e. financial returns

62 Decision-Theory Utility Models (the language of business)
BCG model challenges
1. Top-scoring candidates may refuse offers
2. Expected performance may differ from actual performance
3. General economic conditions change
4. Managerial resistance – managers
a. may not trust the motivation behind the analysis
b. may not believe the source
c. may prefer to take individual responsibility for decisions
d. may not understand the context

63 The Strategic Context of Selection
Strategies range from profit maximization to cost minimization
Utility analyses assume selection systems have fixed validity & standard deviations
1. the assumption may be false
2. the assumption may be true
3. the risks associated with either case need to be considered

64 Summary
Classical validity
1. measurement & prediction
2. multiple regression
3. compensatory combination of variables
Decision-theory utility models
1. Taylor-Russell
2. Naylor-Shine
3. Brogden-Cronbach-Gleser
4. useful for planning

65 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America. Copyright © 2011 Pearson Education, Inc. publishing as Prentice Hall 14-65

