Optimal Experimental Design in Experiments With Samples of Stimuli
Jacob Westfall (University of Colorado Boulder)
David A. Kenny (University of Connecticut)
Charles M. Judd (University of Colorado Boulder)
Studies involving participants responding to stimuli (hypothetical data matrix: subjects crossed with stimuli)
Just in the domain of implicit prejudice and stereotyping:
- IAT (Greenwald et al.)
- Affective Priming (Fazio et al.)
- Shooter task (Correll et al.)
- Affect Misattribution Procedure (Payne et al.)
- Go/No-Go task (Nosek et al.)
- Primed Lexical Decision task (Wittenbrink et al.)
- Many non-paradigmatic studies
Hard questions
- "How many stimuli should I use?"
- "How similar or variable should the stimuli be?"
- "When should I counterbalance the assignment of stimuli to conditions?"
- "Is it better to have all participants respond to the same set of stimuli, or should each participant receive different stimuli?"
- "Should participants make multiple responses to each stimulus, or should every response by a participant be to a unique stimulus?"
Stimuli as a source of random variation
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1).
Power analysis in crossed designs
Power is determined by several parameters:
- 1 effect size (Cohen's d)
- 2 sample sizes: p = # of participants, q = # of stimuli
- A set of Variance Partitioning Coefficients (VPCs)
VPCs describe what proportion of the random variation in the data comes from which sources. Different designs depend on different VPCs.
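To make the role of these parameters concrete, here is a minimal Python sketch (our own illustration, not the authors' power app) that approximates power for one common case, the stimuli-within-condition design, under a standard crossed mixed model. The standard-error formula — V_P×C/p + 4·V_S/q + 4·V_E/(p·q), with VPCs expressed as proportions of total variance and V_E absorbing the participant-by-stimulus and residual variance — follows from averaging each random term over the units it varies across in this design; a normal approximation replaces the noncentral t, so the numbers are rough and will not exactly match the paper or the app.

```python
from math import sqrt, erf

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_stimuli_within(d, p, q, v_s, v_pc, v_e, alpha_z=1.96):
    """Approximate power for a stimuli-within-condition design.

    d    : Cohen's d (mean condition difference / total SD)
    p, q : numbers of participants and stimuli (q split evenly over 2 conditions)
    v_s  : stimulus-intercept VPC
    v_pc : participant-slope (participant-by-condition) VPC
    v_e  : residual VPC (includes participant-by-stimulus variance)
    The participant-intercept VPC cancels out of the within-participant
    condition contrast, so it does not appear here.
    """
    # Variance of the estimated condition difference (in total-SD^2 units):
    # participant slopes average over p, the two independent stimulus sets
    # average over q/2 each, and the residual over p*q/2 per condition.
    var_diff = v_pc / p + 4.0 * v_s / q + 4.0 * v_e / (p * q)
    ncp = d / sqrt(var_diff)  # expected z statistic
    # Two-sided rejection probability under the normal approximation
    return normal_cdf(ncp - alpha_z) + normal_cdf(-ncp - alpha_z)
```

For example, with d = 0.5, p = 20, q = 16 and illustrative VPCs of .1 (stimulus), .1 (participant slope), and .8 (residual), the sketch gives power of roughly .70. Doubling q raises it; raising p alone eventually stops helping, because the 4·V_S/q term does not shrink with p.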
Definitions of VPCs
- V_P: Participant variance (variance in participant intercepts)
- V_S: Stimulus variance (variance in stimulus intercepts)
- V_P×C: Participant-by-Condition variance (variance in participant slopes)
- V_S×C: Stimulus-by-Condition variance (variance in stimulus slopes)
- V_P×S: Participant-by-Stimulus variance (variance in participant-by-stimulus intercepts)
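Since VPCs are simply the estimated variance components rescaled to proportions of the total random variation, they are easy to compute once a mixed model has supplied the component estimates. A minimal sketch (component names and values are illustrative, not from the talk; note that with one response per participant-stimulus pair, residual error is confounded with V_P×S):

```python
def vpcs(components):
    """Rescale raw variance components to Variance Partitioning
    Coefficients: each source's share of the total random variation."""
    total = sum(components.values())
    return {name: var / total for name, var in components.items()}
```

For example, `vpcs({"P": 1.0, "S": 0.5, "PxC": 0.5, "SxC": 0.5, "PxS": 2.5})` returns proportions summing to 1, with the participant-by-stimulus term carrying half the variation.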
Four common experimental designs
Stimuli-within-Condition design vs. Participants-within-Condition design
Fully Crossed design vs. Counterbalanced design
For power = 0.80, need q ≈ 50 stimuli
For power = 0.80, need p ≈ 20 participants
Maximum attainable power
In crossed designs, power asymptotes at a maximum theoretically attainable value that depends on:
- Effect size
- Number of stimuli
- Stimulus variability
Under realistic assumptions, maximum attainable power can be quite low!
When q = 16, max power = .84
Minimum number of stimuli to use? A reasonable rule of thumb: Use at least 16 stimuli per condition! (preferably more)
Implications of maximum attainable power Think hard about your experimental stimuli before you begin collecting data! – Once data collection begins, maximum attainable power is pretty much determined.
Conclusion There is a growing awareness and appreciation in experimental psychology of the importance of running adequately powered studies. – (Asendorpf et al., 2013; Bakker, Dijk, & Wicherts, 2012; Button et al., 2013; Ioannidis, 2008; Schimmack, 2012) Discussions of how to maximize power almost always focus simply on recruiting as many participants as possible. We hope that the present research begins the discussion of how stimuli ought to be sampled in order to maximize statistical power.
The end
Power app: JakeWestfall.org/power/
Manuscript reference: Westfall, J., Kenny, D. A., & Judd, C. M. (under review). Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli. Journal of Experimental Psychology: General.