
1 Introduction to Meta-Analysis Joseph Stevens, Ph.D., University of Oregon (541) 346-2445, stevensj@uoregon.edu © Stevens 2006

2 What is Meta-Analysis (MA)?
- Term coined by Gene Glass in his 1976 AERA Presidential address
- An alternative to the traditional literature review
- Allows the reviewer to quantitatively combine and analyze the results from multiple studies

3 What is Meta-Analysis (MA)?
- The traditional literature review is based on the reviewer's analysis and synthesis of study themes or conclusions
- MA collects the essential empirical results from multiple studies and draws conclusions about the "overall" effect across studies, no matter what the original study conclusions were
- Thus a MA becomes a research study on research studies, hence the term "meta"

4 Growth and Development of MA
- MA has developed substantially in both methods and applications (Larry Hedges, Ingram Olkin, John Hunter, and Frank Schmidt)
- A literature review should be as systematic as primary research, and study characteristics and design should provide a context for interpreting study results and conclusions (Glass)
- MA is now widely used in many disciplines (e.g., education, social sciences, medicine)

5 Conducting a Meta-Analysis
- The researcher first collects studies on a particular topic
- Information about the studies is then collated and coded
- The results of each study are translated into a common metric, the study effect size
- Analysis is then conducted to summarize effect size across studies or to analyze relationships between covariates and effect size
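The final analysis step on this slide can be sketched in a few lines. The sketch below assumes inverse-variance weighting under a fixed-effect model (a standard pooling approach, though the slide does not name one), and the effect sizes and variances are purely illustrative:

```python
# Minimal fixed-effect pooling sketch: combine study effect sizes with
# inverse-variance weights. The pooling model and all numbers below are
# assumptions for illustration; the slide does not specify them.
import math

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Hypothetical effect sizes (d) and sampling variances from five studies
d_values = [0.30, 0.55, 0.10, 0.45, 0.25]
variances = [0.04, 0.06, 0.03, 0.05, 0.04]

mean_d, se = pooled_effect(d_values, variances)
print(f"pooled d = {mean_d:.3f}, SE = {se:.3f}")
```

Studies with smaller sampling variance (usually larger samples) get more weight, which is why the same effect sizes can pool to different summaries depending on the studies' precision.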

6 Effects of MA
- An important consequence of the development of MA is the way it has changed our thinking about research
- Increased focus on a number of important issues in science, including publication biases
- How to understand and summarize statistical results
- Importance of effect size and statistical power

7 Effect Size in MA
- Effect size makes meta-analysis possible
  - It is the "dependent variable"
  - It standardizes findings across studies so that they can be directly compared
- Any standardized index can be an "effect size" (e.g., standardized mean difference, correlation coefficient, odds-ratio) as long as:
  - It is comparable across studies
  - It represents the magnitude and direction of the relationship of interest
  - It is independent of sample size
- Different meta-analyses may use different effect size indices

8 Which Studies to Review?
- Should be as inclusive as possible
  - Need to find all studies
  - Include unpublished studies
- Apples and oranges
  - A priori inclusion and exclusion criteria
  - Revision of criteria as the MA proceeds
  - More than one sample of studies for different purposes

9 Which Studies?
- Significant findings are more likely to be published than nonsignificant findings (the "file drawer" problem)
- Critical to try to identify and retrieve all studies that meet your eligibility criteria
- Potential sources for identification of documents:
  - computerized bibliographic databases
  - authors working in the research domain
  - conference programs
  - dissertations
  - review articles
  - reference lists
  - hand searching relevant journals
  - government reports, bibliographies, clearinghouses

10 Strengths of Meta-Analysis
- Imposes a discipline on the process of summing up research findings
- Represents findings in a more differentiated and sophisticated manner than conventional reviews
- Capable of finding relationships across studies that are obscured in other approaches
- Protects against over-interpreting differences across studies
- Can handle a large number of studies (which would overwhelm traditional approaches to review)

11 Weaknesses of Meta-Analysis
- Requires a good deal of effort
- Mechanical aspects don't lend themselves to capturing more qualitative distinctions between studies
- "Apples and oranges": comparability of studies is often in the "eye of the beholder"
- Most meta-analyses include "blemished" studies
- Selection bias poses a continual threat:
  - negative and null finding studies that you were unable to find
  - outcomes for which there were negative or null findings that were not reported
- Analysis of between-study differences is fundamentally correlational

12 Examples of Different Types of Effect Sizes
- Standardized mean difference (continuous outcome): group contrast research with treatment groups or naturally occurring groups
- Odds-ratio (dichotomous outcome): group contrast research with treatment groups or naturally occurring groups
- Correlation coefficient: association-between-variables research

13 The Standardized Mean Difference
- Represents a standardized group comparison on a continuous outcome measure
- Uses the pooled standard deviation (some situations use the control group standard deviation)
- Commonly called "Cohen's d" or occasionally "Hedges' g"
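The pooled-standard-deviation version of this statistic can be sketched as follows; the two groups' scores are hypothetical:

```python
# Cohen's d: standardized mean difference using the pooled standard
# deviation, as described on the slide. Group data are illustrative.
import math
import statistics

def cohens_d(group1, group2):
    """Standardized mean difference with the pooled (n-1) standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

treatment = [12, 14, 15, 13, 16]
control = [10, 11, 12, 11, 13]
print(round(cohens_d(treatment, control), 3))
```

Dividing the raw mean difference by a standard deviation is what puts studies that used different outcome scales onto the same metric.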

14 The Correlation Coefficient
- Represents the strength of association between two continuous measures
- Generally reported directly as "r" (the Pearson product-moment coefficient)
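A minimal sketch of the Pearson product-moment coefficient, computed from first principles on hypothetical paired measures:

```python
# Pearson product-moment correlation r, computed directly from its
# definition (covariance over the product of standard deviations).
# The paired data below are hypothetical.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measures: study hours and test scores
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 60, 68]
print(round(pearson_r(hours, scores), 3))  # prints 0.953
```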

15 Odds-Ratios
- The odds-ratio is based on a 2 by 2 contingency table such as the one below (the slide's example table did not survive transcription; the schematic layout is):

                Success   Failure
    Treatment      a         b
    Control        c         d

- The odds-ratio is the odds of success in the treatment group relative to the odds of success in the control group
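With cell counts labeled a–d as in the schematic table, the definition on the slide translates directly into code; the counts below are hypothetical:

```python
def odds_ratio(a, b, c, d):
    """Odds-ratio from a 2x2 contingency table:
                success  failure
    treatment     a        b
    control       c        d
    = (a/b) / (c/d), the treatment odds over the control odds.
    """
    return (a / b) / (c / d)

# Hypothetical counts: treatment 40 successes / 20 failures,
# control 25 successes / 50 failures
print(odds_ratio(40, 20, 25, 50))  # prints 4.0
```

An odds-ratio of 1.0 means the odds of success are the same in both groups; here the treatment group's odds of success are four times the control group's.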

16 Converting Results into a Common Metric
- Can convert p-values, t, F, etc. into the standardized effect size metric being used in the meta-analysis (e.g., d, r)
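The slide does not give the conversion formulas, but two standard ones (for an independent-samples t statistic) can be sketched as follows; the reported result used in the example is hypothetical:

```python
# Standard conversions from a reported t statistic to effect sizes:
#   d = t * sqrt(1/n1 + 1/n2)        (independent-groups design)
#   r = sqrt(t^2 / (t^2 + df))
# The t value and sample sizes below are hypothetical.
import math

def d_from_t(t, n1, n2):
    """Standardized mean difference from an independent-samples t statistic."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def r_from_t(t, df):
    """Correlation effect size from a t statistic and its degrees of freedom."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Hypothetical reported result: t(58) = 2.50 with n1 = n2 = 30
t, n1, n2 = 2.50, 30, 30
print(round(d_from_t(t, n1, n2), 3))        # d
print(round(r_from_t(t, n1 + n2 - 2), 3))   # r
```

For an F statistic with one numerator degree of freedom, t = sqrt(F), so the same functions apply.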

17 Interpreting Effect Size Results
- Cohen's "rules-of-thumb"
  - Standardized mean difference effect size: small = 0.20, medium = 0.50, large = 0.80
  - Correlation coefficient: small = 0.10, medium = 0.25, large = 0.40
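The thresholds above can be wrapped in a small lookup; the function name and the "negligible" label for effects below the smallest cutoff are my additions, not Cohen's:

```python
def cohen_label(effect, index="d"):
    """Classify an effect size by Cohen's rules-of-thumb.
    Thresholds from the slide: d: 0.20 / 0.50 / 0.80; r: 0.10 / 0.25 / 0.40.
    The "negligible" label for sub-threshold effects is an added convention.
    """
    cuts = {"d": (0.20, 0.50, 0.80), "r": (0.10, 0.25, 0.40)}
    small, medium, large = cuts[index]
    e = abs(effect)  # direction does not affect magnitude labels
    if e >= large:
        return "large"
    if e >= medium:
        return "medium"
    if e >= small:
        return "small"
    return "negligible"

print(cohen_label(0.55))        # prints "medium" (d scale)
print(cohen_label(0.12, "r"))   # prints "small" (r scale)
```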

18 Interpreting Effect Size Results
- Rules-of-thumb do not take into account the context of the intervention
  - a "small" effect may be highly meaningful for an intervention that requires few resources and imposes little on the participants
  - small effects may be more meaningful for serious and fairly intractable problems
- Cohen's rules-of-thumb do, however, correspond to the distribution of effects across meta-analyses found by Lipsey and Wilson (1993)

19 Interpreting Effect Size Results Findings must be interpreted within the bounds of the methodological quality of the research base synthesized. Studies often cannot simply be grouped into “good” and “bad” studies. Some methodological weaknesses may bias the overall findings, others may merely add “noise” to the distribution.

20 Traditional narrative reviews; vote-counting

