
1 Evaluating Candidates and Identifying the Short List
Sherri Irvin, Presidential Research Professor of Philosophy and WGS

2 Screening: systematic with clear criteria
Use a template for each applicant with clearly defined criteria as specified in the job ad: required qualifications (first screening), then preferred qualifications (second screening).
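As an illustration only, here is a minimal sketch (in Python) of what such a per-applicant screening template could look like. The criterion names are hypothetical placeholders and are not from the presentation; the real criteria must be taken verbatim from the job ad.

```python
# Minimal sketch of a per-applicant screening template.
# The criteria below are hypothetical placeholders; use the required and
# preferred qualifications exactly as specified in the job ad.
from dataclasses import dataclass, field

REQUIRED = ["PhD in hand by start date", "Evidence of teaching effectiveness"]        # placeholder criteria
PREFERRED = ["Research agenda in the advertised area", "Experience mentoring diverse students"]  # placeholder criteria

@dataclass
class ScreeningRecord:
    applicant: str
    required_met: dict = field(default_factory=dict)      # criterion -> True/False
    preferred_scores: dict = field(default_factory=dict)  # criterion -> 0-3 rating
    notes: str = ""

    def passes_first_screen(self) -> bool:
        # First screening: every required qualification must be met.
        return all(self.required_met.get(c, False) for c in REQUIRED)

    def second_screen_score(self) -> int:
        # Second screening: sum of ratings on the preferred qualifications.
        return sum(self.preferred_scores.get(c, 0) for c in PREFERRED)
```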

3 Obstacles to identifying excellent candidates
Implicit bias: tendency to underrate the credentials of women, candidates of color, people with disabilities, and other members of underrepresented groups.
Rater drift: tendency for evaluators’ standards to shift over time, so similar credentials are rated differently.
Overemphasis on “fit”: tendency to discount the achievements of people whose methods, topics, or social identities are marginalized in the field.
Matthew Effect: tendency of further advantages to be heaped on those who have experienced early advantages, thereby inflating their credentials.

4 Recent research shows that race, gender & related factors play a major role in hiring

5 Experiment on race and hiring
Resumes with white-sounding names received 50% more callbacks than those with African-American-sounding names. Bertrand, Marianne, and Sendhil Mullainathan. “Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination.” The American Economic Review 94, no. 4 (2004).

6 Experiment on motherhood/gender and hiring
Resumes of mothers received significantly lower scores for competence and organizational commitment, along with lower salary and hiring recommendations (1.8x less). Resumes of non-mothers received 2.1x the callbacks of mothers. Fathers, by contrast, received an advantage. Correll, Shelley J., and Stephen Benard. “Getting a job: Is there a motherhood penalty?” American Journal of Sociology 112, no. 5 (2007).

7 Effect of screened auditions on success of female musicians
Use of screens during auditions accounts for 1/3 of the increase in the number of female musicians in orchestras. Goldin, Claudia, and Cecilia Rouse. Orchestrating impartiality: The impact of “blind” auditions on female musicians. No. w5903. National Bureau of Economic Research, 1997.

8 Experiment on gender and hiring in academic psychology
Men received a positive hiring recommendation more than 70% of the time; women received a positive hiring recommendation only 55% of the time. The gender of the reviewer did not matter: women and men discriminated equally based on the gender of the candidate. Steinpreis, Rhea E., Katie A. Anders, and Dawn Ritzke. “The impact of gender on the review of the curricula vitae of job applicants and tenure candidates: A national empirical study.” Sex Roles 41, no. 7-8 (1999).

9 Experiment on hiring of student lab managers in university science labs
The female applicant was rated as less competent and less hireable than an identical male applicant. Faculty offered her less mentoring and proposed a 14% lower salary. The gender and age of the reviewer did not matter: women and men discriminated equally based on the gender of the candidate. Moss-Racusin, Corinne A., John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman. "Science faculty’s subtle gender biases favor male students." Proceedings of the National Academy of Sciences 109, no. 41 (2012).

10 Experiment on race and gender in finalist pools for academic jobs
“When there was only one woman or minority candidate in a pool of four finalists, their odds of being hired were statistically zero.” The odds increased dramatically with two women or two minority candidates. Johnson, Stefanie K., David R. Hekman, and Elsa T. Chan. “If there’s only one woman in your candidate pool, there’s statistically no chance she’ll be hired.” Harvard Business Review, April 26, 2016.

11 Implicit bias is exacerbated by
Evaluator factors: cognitive load (too much on your mind), stress, hurry, fatigue, hunger, thirst, and the belief that one is unbiased (probably false). Evaluation task factors: vague criteria, lack of structure, and lack of accountability.

12 Strategies

13 Strategies: evaluator conditions
Improve evaluator conditions: lighten the workload, provide drinks and snacks, and make sure dossiers are easy to access and review.

14 Strategies: counteracting bias
Have a member of the diversity committee (DC) available as a consultant on the search. The DC member can flag dossiers with potential bias triggers for careful attention: social identity (gender, race, LGBTQ identity, disability) where disclosed in the dossier, and marginalized methods or topics. Rationale: implicit bias sometimes functions by leading us not to notice achievement.

15 Strategies: counteracting bias
Consider anonymizing materials where feasible. This must be done carefully (pronouns in letters, etc.); you can ask candidates to anonymize some materials, and the DC member can assist or verify. It is not a cure-all: the nature of the research or organization memberships sometimes suggests social identity. Be aware that submitted materials may themselves be affected by bias, particularly letters of recommendation and student evaluations of teaching.

16 Strategies: accountability
Create clear criteria and well-structured ratings templates. Formalize your assessment where possible (e.g., with a points system, as sketched below; if you deviate from it, be clear about why). Justify the short list by appeal to the criteria. This strategy helps with implicit bias, rater drift & overemphasis on “fit”.
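As an illustration only, a minimal sketch (in Python) of the kind of points system meant here; the criteria, weights, and 0-5 rating scale are hypothetical placeholders, not part of the original presentation.

```python
# Minimal sketch of a points-based ratings template.
# Criteria and weights are hypothetical; replace them with the criteria
# published in the job ad.
CRITERIA_WEIGHTS = {
    "research quality": 3,
    "teaching effectiveness": 2,
    "mentoring and service": 1,
}

def total_points(ratings: dict) -> int:
    # Weighted sum of the committee's 0-5 ratings on each criterion.
    return sum(weight * ratings.get(criterion, 0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

def shortlist(candidates: dict, k: int = 3) -> list:
    # Rank candidates by total points and return the top k.
    # If the committee deviates from this ranking, record why in writing,
    # by appeal to the published criteria.
    ranked = sorted(candidates, key=lambda name: total_points(candidates[name]),
                    reverse=True)
    return ranked[:k]
```

For instance, shortlist({"A": {"research quality": 4}, "B": {"research quality": 5}}, k=1) returns ["B"]. The point of formalizing is not to outsource judgment but to make any deviation from the criteria visible and documented.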

17 Getting to the final list

18 Initial interviews Consider forgoing initial (long-list) interviews
The interview situation is not representative of actual job tasks. Submitted materials are the most reliable source of information, and the vividness of the interview swamps that more reliable info. Time and energy are better spent looking more carefully at candidate materials.

19 Initial interviews If you decide to interview:
Provide questions in advance and ask the same questions of all candidates. Provide consistent interviewing conditions; interviewer behavior affects performance. Rank-order candidates prior to the interview, and if you change your ranking afterward, be clear about why: what new info did you get that is relevant to the criteria?

20 Final thoughts about the Matthew Effect
Access to opportunities is facilitated by privilege (race, gender, economic class, etc.). These opportunities tend to snowball: careful mentorship in undergrad → top-ranked graduate school → invitation to co-author → cushy post-doc → three publications.

21 Experiment on which student queries receive responses
Queries from women and students of color are far more likely to be ignored, and requests for meetings more likely denied. The effect is worse in disciplines where pay is higher. Discrimination was just as bad in fields with higher ratios of women and faculty of color. Milkman, Katherine L., Modupe Akinola, and Dolly Chugh. "What happens before? A field experiment exploring how pay and representation differentially shape bias on the pathway into organizations." Journal of Applied Psychology 100, no. 6 (2015): 1678.

22 Final thoughts about the Matthew Effect
As you design criteria, look for ways of assessing merit that do not simply codify the results of privilege. One publication that challenges traditional assumptions, published during a 4/4 teaching load, vs. three conventional publications published during a teaching-free post-doc…

