Psychological Science's Preoccupation with the Powerful: A Quantitative Review of Experimental Designs, Attribution of Results, and Effect Sizes in Social Power Research

Presentation transcript:

Psychological Science's Preoccupation with the Powerful: A Quantitative Review of Experimental Designs, Attribution of Results, and Effect Sizes in Social Power Research

Christilene du Plessis (1), Michael Schaerer (2), Andy J. Yap (2), Stefan Thau (2)
(1) Rotterdam School of Management, Erasmus University; (2) INSEAD

Overview

This large-scale review of 293 studies finds that the way social power has been studied has introduced a bias into the literature. Research assumes that powerfulness is the driving causal force behind power's far-reaching effects, at the expense of understanding powerlessness. Powerfulness likely attracts more interest because those in high-power positions are more salient and their decisions have a profound impact on others, which has made studying the powerful more appealing. However, powerlessness is equally, if not more, important: the majority of people are in low-power positions, powerfulness and powerlessness always co-occur (Emerson, 1962), and even those in high-power positions often feel powerless.

We use an innovative approach to meta-analytic research by taking study design as the independent variable. We distinguish between 3-cell designs (studies that include high power, low power, and control conditions) and 2-cell designs (studies that include any two of these conditions). This first ever quantitative review of power allows us to uncover critical gaps in the literature by answering three questions.

Question 1: What is the baseline of high power?

2-cell designs are used most often; low power and control conditions are used interchangeably as the baseline. 99% of studies include a high power condition, 84% include a low power condition, and only a third (34%) include a control condition. Only 17% of studies had an experimental design with three cells. Grouping studies by paper, only 26% of papers included at least one study with a 3-cell design, and only 5% used 3-cell designs exclusively.

Question 2: What happens when high power is only compared to low power?

Effects tend to be attributed to high power. A content analysis of the discussion sections of studies lacking a control condition, coded by two of the authors (interrater reliability = .92), showed that effects were attributed to high power in more than half of the cases (Figure 1: Relative attribution of the effects). The coding was replicated on MTurk: again, the majority of studies made directional attributions, mostly in favor of powerfulness (42% high power; 13% low power).

The validity of the power construct may also be jeopardized. It is widely assumed that power operates linearly from high to low power (e.g., Guinote, 2007; Keltner et al., 2003; Magee & Smith, 2013). However, this assumption is not always appropriate.

Illustrative Experiment

Motivation: High power has been shown to increase objectification. Implicitly, it has been assumed that low power decreases objectification.
Design: 259 MTurk participants described their relationship with either a subordinate (high power), a peer (control condition), or a superior (low power).
DV: Tendency to objectify the interaction partner (Gruenfeld et al., 2008).
Results: Participants in the low power condition were more likely to objectify their partner than participants in the control condition (p < .001), and just as likely to do so as participants in the high power condition (p = .88) (Figure 2: Effect of power(lessness) on objectification).
Discussion: Using 3-cell designs can lead to different conclusions.
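To make concrete what the 3-cell comparison buys analytically, here is a minimal re-analysis sketch. It is not the authors' analysis script: the scores, group means, and sample sizes are simulated for illustration, and Welch t-tests plus Cohen's d are assumed as the comparison method.

```python
# Hypothetical sketch of the two comparisons a 3-cell design makes possible.
# All data below are simulated for illustration; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated objectification scores for the three conditions (values are invented).
scores = {
    "high_power": rng.normal(loc=4.2, scale=1.0, size=90),
    "control":    rng.normal(loc=3.5, scale=1.0, size=85),
    "low_power":  rng.normal(loc=4.2, scale=1.0, size=84),
}

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Pairwise comparisons that require the control condition to be present.
for label, (a, b) in {
    "low power vs. control":    (scores["low_power"], scores["control"]),
    "low power vs. high power": (scores["low_power"], scores["high_power"]),
}.items():
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch t-test
    print(f"{label}: t = {t:.2f}, p = {p:.3f}, d = {cohens_d(a, b):.2f}")
```

With only a 2-cell design (high power vs. low power), the first comparison would be impossible, which is exactly the gap the experiment illustrates.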
Question 3: What happens when high power is only compared to a control condition?

There are three reasons why comparing high power to only a control condition may limit our understanding of social power: (1) the effect may be curvilinear; (2) the power construct may be more complex than assumed (e.g., it may consist of multiple constructs); and (3) the manipulation may be confounded.

In addition, the effect size of high power may be inflated. The a priori omission of low power can lead to effect size inflation because: (1) a failure to detect confounds may result in the confound bolstering the observed effect; or (2) patterns in which low power is the generative force and the high power effect is small (e.g., HP = C < LP) go undetected, and a null effect of high power (e.g., HP = C) would lead to abandonment of a high-power hypothesis.

Meta-analysis of published effect sizes
Motivation: Is there evidence that the effect size of high power is inflated?
IV: Did the study include a high power, low power, and/or control condition?
DV: Effect size of the study's DVs (Cohen's d).
Control variables: Manipulation type, DV type, experimental design (manipulation checks, mediators, moderators), and experimental setting (subject nationality, location, collection method).
Model: Robust variance estimation (RVE) model with nested effect sizes.
Results: The effect size for studies comparing high power to a control condition was significantly larger when no low power condition was included (d = .51, SE = .03, m = 47, k = 93) than when it was present (d = .37, SE = .03, m = 51, k = 83), β = -.13, SE = .05, 95% CI [-.22; -.03], p = .009. This pattern did not change when the control variables were added, β = -.17, SE = .05, 95% CI [-.26; -.08], p < .001 (Table 3: Observed effect size as a function of experimental design).

Implications

For future social power research: Powerfulness should not be assumed to be the causal driver by default. Consider methods for determining the contribution of high and low power to an effect (e.g., comparing subjective and objective outcome measures; within-subject designs). Use manipulations that allow for the identification of more complex relationships such as U-shaped patterns (e.g., decision weights).

For experimental design: Complement 2-cell design studies with at least one 3-cell design study (only 31% of papers in our sample did this). Include process measures, test boundary conditions, rule out alternative explanations, or acknowledge the limitations of the study. Avoid making inferences about the powerless when a low power condition has not been included.

For psychology broadly: Other research areas (e.g., diversity, status, or accountability) may benefit from using experimental design as an independent variable in quantitative reviews and meta-analyses, as sketched below.
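As a closing illustration of that approach, the sketch below runs a simple inverse-variance weighted meta-regression with design type as the moderator. It is a deliberately simplified stand-in for the RVE model with nested effect sizes reported under Question 3: effect sizes are treated as independent (no clustering within papers) and the data are fabricated for illustration.

```python
# Minimal illustration of using experimental design as a moderator in a meta-regression.
# Simplified stand-in for the poster's RVE model: effect sizes are treated as
# independent and the inputs are fabricated, not the review's actual data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

k = 176                                               # number of effect sizes (illustrative)
has_low_power = rng.integers(0, 2, size=k)            # 1 = low power condition included
true_d = np.where(has_low_power == 1, 0.37, 0.51)     # pattern reported in the poster
se = rng.uniform(0.10, 0.30, size=k)                  # per-effect-size standard errors
d = rng.normal(loc=true_d, scale=se)                  # observed effect sizes

# Weighted least squares meta-regression: d ~ intercept + has_low_power,
# weighting each effect size by its inverse variance.
X = sm.add_constant(has_low_power)
model = sm.WLS(d, X, weights=1.0 / se**2).fit()
print(model.summary())  # the has_low_power coefficient estimates the design effect
```

A full analysis would additionally cluster effect sizes within papers and adjust the standard errors accordingly, which is what the robust variance estimation model reported above does.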