
1 Response Processes in SJT Performance: The Role of General and Specific Knowledge
James A. Grand, Matthew T. Allen, Kenneth Pearlman
27th Annual Conference of the Society for Industrial & Organizational Psychology, April 27, 2012
The views, opinions, and/or findings contained in this presentation are solely those of the authors and should not be construed as an official Department of the Army or Department of Defense position, policy, or decision, unless so designated by other documentation.

2 Response Processes in SJT Performance
– Turning a critical eye towards SJT construct validity & its assumptions
– What do SJTs measure?
  » An alternative take on an old question
  » A response process model for SJT performance
– Predictions of the response process model
  » Empirical support?
– Implications for interpretation & validity of SJTs

3 The authors would like to thank the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) for allowing the use of their data for this research. More information about the study can be found in:
Knapp, D. J., McCloy, R. A., & Heffner, T. S. (Eds.) (2004). Validation of measures designed to maximize 21st-century Army NCO performance (TR 1145). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

4 Established evidence of criterion-related validity between SJTs and job performance (cf. Chan & Schmitt, 2002; Clevenger et al., 2001; McDaniel et al., 2001)
– Estimates in the low .20s (corrected validities near the mid-.30s; Chan & Schmitt, 2005)
Agreement on construct validity is less certain...
– First-order constructs: multiple, distinguishable dimensions; specific a priori subscales
  » Oswald et al. (2004): career orientation, perseverance, multicultural appreciation
– Second-order constructs: singular, high-level dimension; broad focal target
  » Lievens et al. (2005): interpersonal skills
  » Mumford et al. (2008): team role knowledge
– "Practical intelligence": tacit knowledge or "common sense"; everyday reasoning
  » Sternberg et al. (2002): "practical know-how"
  » Chan & Schmitt (2005): contextual knowledge
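The relationship between the observed validities (low .20s) and the corrected validities (mid-.30s) cited above follows from the standard psychometric correction for criterion unreliability. A minimal sketch; the criterion reliability of .52 below is a hypothetical illustration, not a figure from the cited studies:

```python
import math

# Correction for attenuation due to criterion unreliability:
# corrected r = observed r / sqrt(criterion reliability)
observed_r = 0.22             # observed SJT-performance validity (low .20s)
criterion_reliability = 0.52  # hypothetical reliability of performance ratings

corrected_r = observed_r / math.sqrt(criterion_reliability)
print(round(corrected_r, 2))  # 0.31
```

With these illustrative inputs, an observed validity in the low .20s rises to roughly .30, consistent in spirit with the corrected estimates reported by Chan & Schmitt (2005).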

5 Virtually all perspectives approach and treat SJT measurement in a manner consistent with Classical Test Theory: X = T + E (Observed Score = True Score + Error)
But SJTs are NOT tests! (at least in the traditional sense of the word)
– Low-fidelity simulations (Motowidlo et al., 1990)
– Measurement methods capable of capturing a variety of constructs (Chan & Schmitt, 2005; McDaniel & Nguyen, 2001)
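The CTT decomposition X = T + E can be made concrete with a small simulation: reliability is the share of observed-score variance attributable to true scores, var(T) / var(X). The variance components below are hypothetical, chosen only to illustrate the identity:

```python
import random
from statistics import variance

random.seed(0)
true_var, error_var = 1.0, 0.5  # hypothetical variance components
true_scores = [random.gauss(0, true_var ** 0.5) for _ in range(10_000)]
observed = [t + random.gauss(0, error_var ** 0.5) for t in true_scores]

# Reliability under CTT: var(T) / var(X), here about 1.0 / 1.5
reliability = variance(true_scores) / variance(observed)
print(f"estimated reliability: {reliability:.2f}")
```

The estimate lands near the theoretical value of 1.0 / 1.5 ≈ .67, which is the sense in which CTT treats any observed score as a noisy read on a single static true score.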

6 "SJT performance clearly involves cognitive processes. [...] Addressing basic questions about these underlying cognitive processes and eventually understanding them could provide the key to explicating constructs measured by SJTs" (Chan & Schmitt, 2005)
"So far, there does not exist any theory about how people answer SJTs or about what makes some SJT items more or less difficult than others." (Lievens & Sackett, 2007, p. 1044)

7 Rather than conceptualizing SJTs as though they measure a static construct or "true score," this model treats SJTs as capturing the sophistication of a respondent's reasoning process. By their nature, SJTs capture the similarity between a respondent's reasoning and the reasoning implied by the keyed responses.

8 Generic dual-process accounts of human reasoning, judgment, and decision-making (Evans, 2008)
System 1
– Implicit, intuitive, and automatic reasoning
– Decisions guided by general heuristics, which are informed by domain experiences
– High-capacity, low-effort processing
System 2
– Systematic, rational, and analytic reasoning
– Decisions guided by controlled, rule-based evaluations and conscious reflection
– Low-capacity, high-effort processing

9 Dual-process accounts have been applied in a variety of perceptual, reasoning, and decision-making tasks (see Evans, 2008)
– Extensions of the dual-process model serve as the foundation for much of the judgment & decision-making literature (e.g., Gigerenzer et al., 1999; Kahneman & Frederick, 2002, 2005)
Central tenets of dual-process models: because of limits on our cognitive capacity and information processing...
– System 1 reasoning is the primary determinant of judgment/decision-making in most situations
– System 2 reasoning is typically engaged to evaluate the quality of decisions or in attempts to consciously contrast alternatives

10 Two predictions based on the dual-process account relative to SJT performance:
– Beliefs about the general effectiveness of various behaviors, dispositions, or approaches serve as a baseline heuristic for reasoning across many situations (cf. Motowidlo et al., 2006; Motowidlo & Beier, 2010)
  » "It is good to be thorough and conscientious in one's work."
– Domain experience/knowledge leads to the development of more conditional, refined, and nuanced heuristics (Hunt, 1994; Klein, 1998; Phillips et al., 2004)
  » "It is good to be thorough and conscientious in one's work, but you can generally skimp on Task X and still do just fine."
– Thus, generalized heuristics/beliefs/temperaments become less predictive of SJT performance as experience increases
As job experience increases, the predictive validity of domain-general heuristics on SJT performance will decrease
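This first prediction implies a negative experience × heuristic interaction in a moderated regression of SJT scores. A simulation sketch of that pattern; every coefficient and variable here is hypothetical, built only to show how the prediction would surface in an interaction term, not to reproduce the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
experience = rng.uniform(0, 10, n)  # years of job experience (hypothetical)
heuristic = rng.normal(0, 1, n)     # endorsement of a domain-general heuristic

# Build in the predicted pattern: the heuristic's effect on SJT
# performance shrinks as experience grows (negative interaction)
sjt = 0.5 * heuristic - 0.08 * experience * heuristic + rng.normal(0, 1, n)

# OLS with an interaction term: sjt ~ heuristic + experience + heuristic:experience
X = np.column_stack([np.ones(n), heuristic, experience, heuristic * experience])
beta, *_ = np.linalg.lstsq(X, sjt, rcond=None)
print("interaction coefficient is negative:", beta[3] < 0)
```

A significant negative interaction of this form is exactly what the regression summaries on the results slides report: the heuristic-SJT relationship is stronger for less experienced respondents.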

11 Two predictions based on the dual-process account relative to SJT performance:
– It is common for respondents to identify/rate the most and least effective/preferred SJT response options (McDaniel & Nguyen, 2001)
– Identifying the most effective option should engage System 1 reasoning
  » Select the most reasonable option based on an intuitive heuristic; less effortful processing
– Identifying the least/less effective option should engage System 2 reasoning
  » "Play out"/evaluate the consequences of the remaining options; more effortful processing
– Thus, identifying the least/less effective option is more g-loaded than identifying the most/more effective option
Cognitive ability will be more strongly related to assessment of less preferred SJT options than more preferred options
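This second prediction amounts to a difference between two correlations that share a variable: r(g, least-effective score) should exceed r(g, most-effective score). A simulation sketch of the predicted pattern; the loadings of 0.20 and 0.45 are hypothetical illustrations, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000
g = rng.normal(0, 1, n)  # general cognitive ability

# Hypothetical pattern: "least effective" judgments (System 2) load
# more heavily on g than "most effective" judgments (System 1)
most_score = 0.20 * g + rng.normal(0, 1, n)
least_score = 0.45 * g + rng.normal(0, 1, n)

r_most = np.corrcoef(g, most_score)[0, 1]
r_least = np.corrcoef(g, least_score)[0, 1]
print(f"r(g, most) = {r_most:.2f}, r(g, least) = {r_least:.2f}")
```

In an actual analysis the two correlations are dependent (both involve g from the same sample), so a formal comparison would use a test for dependent correlations rather than a naive difference.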

12 Concurrent validation study on predictors of current and future expected job performance of Army NCOs (n = 1,838) (Knapp et al., 2004)
– Primarily interested in predicting leadership performance/potential
– Sample: (sample composition table from the original slide is not captured in this transcript)

13 Domain-general heuristic measures
– Differential attractiveness: individuals who more strongly endorse a trait/quality perceive behaviors which reflect that trait/quality as more effective (Motowidlo et al., 2006; Motowidlo & Beier, 2010)
– Temperament inventories
  » Assessment of Individual Motivation (AIM): multidimensional, 38-item forced-choice measure (α ≈ .60, all scales)
  » Biographical Information Questionnaire (BIQ): multidimensional, 156-item self-report biodata questionnaire (α ≈ .70, all scales)
General cognitive aptitude (ASVAB)
40-item SJT on leadership/interpersonal skills (Knapp et al., 2002)
– 5 response alternatives; SMEs rated all options
– Respondents chose the most & least effective options
  » Responses recoded to SME ratings
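The "responses recoded to SME ratings" step can be sketched in code. Both the option ratings and the combination rule below are hypothetical, since the slide does not specify the exact scoring key; the point is only the general idea of crediting agreement with SME judgments:

```python
# Hypothetical SME mean effectiveness ratings for one SJT item's options
sme_ratings = {"A": 4.2, "B": 2.1, "C": 3.5, "D": 1.4, "E": 3.0}

def score_item(most: str, least: str) -> float:
    """Credit agreement with SMEs: the rating of the option picked as
    'most effective' plus the reverse-keyed rating of the option picked
    as 'least effective' (an illustrative rule, not the study's key)."""
    top = max(sme_ratings.values())
    return sme_ratings[most] + (top - sme_ratings[least])

print(round(score_item(most="A", least="D"), 1))  # 7.0
```

Under any such recoding, a respondent scores highly by picking options SMEs rated effective as "most" and options SMEs rated ineffective as "least", which is what operationalizes similarity to expert reasoning.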

14 As job experience increases, the predictive validity of domain-general heuristics on SJT performance will decrease
Regression summary:
– Main effect of temperament
– Main effect of experience
– Significant interaction: relationship stronger for less experienced respondents
Results consistent across all scales & SJT scores

15 As job experience increases, the predictive validity of domain-general heuristics on SJT performance will decrease
AIM scales: Leadership, Physical Conditioning, Agreeableness, Work Orientation, Adjustment, Dependability
BIQ scales: Interpersonal Skill, Leadership, Openness, Tolerance for Ambiguity, Social Maturity, Social Perceptiveness
(interaction plots from the original slide are not captured in this transcript)

16 Cognitive ability will be more strongly related to assessment of less preferred SJT options than more preferred options
(results figure from the original slide is not captured in this transcript)

17 Most research on SJT measurement, development, and validity has been largely atheoretical (but see Motowidlo & Beier, 2010)
– The dual-process account appears to be a reasonable response process model
– Currently working on a more explicit empirical examination (see also Foldes et al., 2010)
What does having a response process model buy us?
– SJT construct validity: constructs vs. reasoning
  » Could label it "practical intelligence," but even that depends on...
– Interpretation of SJT performance
  » Who is selected as the "experts" holds significant importance
  » Extent to which respondents reason/process information in a manner similar to "experts"
– Response elicitation affects SJT interpretation
  » Most likely option/ratings: more heavily influenced by heuristic reasoning
  » Least likely option/ratings: more heavily influenced by cognitive reasoning

18 Response Processes in SJT Performance: The Role of General and Specific Knowledge
James A. Grand, Matthew T. Allen, Kenneth Pearlman
27th Annual Conference of the Society for Industrial & Organizational Psychology, April 27, 2012

