Presentation transcript:

Practical Meta-Analysis -- D. B. Wilson 1
Interpreting Effect Size Results
Cohen's "Rules-of-Thumb":
- standardized mean difference effect size: small = 0.20, medium = 0.50, large = 0.80
- correlation coefficient: small = 0.10, medium = 0.25, large = 0.40
- odds ratio: small = 1.50, medium = 2.50, large = 4.30
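Applied in code, these benchmarks amount to a simple threshold lookup. A minimal Python sketch (the function and table names are illustrative, not from the slides):

```python
# Cohen-style benchmarks from the slide above: the small/medium/large
# cutpoints for each effect size metric.
BENCHMARKS = {
    "smd": (0.20, 0.50, 0.80),  # standardized mean difference
    "r": (0.10, 0.25, 0.40),    # correlation coefficient
    "or": (1.50, 2.50, 4.30),   # odds ratio
}

def label_effect_size(value: float, metric: str) -> str:
    """Classify an effect size against Cohen's rules-of-thumb,
    labeling by the highest benchmark the value reaches."""
    small, medium, large = BENCHMARKS[metric]
    if value >= large:
        return "large"
    if value >= medium:
        return "medium"
    if value >= small:
        return "small"
    return "negligible"

print(label_effect_size(0.45, "smd"))  # -> "small"
```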

Practical Meta-Analysis -- D. B. Wilson 2
Interpreting Effect Size Results
- These benchmarks do not take into account the context of the intervention.
- They do, however, correspond to the distribution of effects across meta-analyses found by Lipsey and Wilson (1993).

Practical Meta-Analysis -- D. B. Wilson 3
Interpreting Effect Size Results
Rules-of-thumb do not take into account the context of the intervention:
- A "small" effect may be highly meaningful for an intervention that requires few resources and imposes little on the participants.
- A small effect may be meaningful if the intervention is delivered to an entire population (e.g., prevention programs for school children).
- Small effects may be more meaningful for serious and fairly intractable problems.
Cohen's rules-of-thumb do, however, correspond to the distribution of effects across meta-analyses found by Lipsey and Wilson (1993).

Practical Meta-Analysis -- D. B. Wilson 4
Translation of Effect Sizes
- Original metric
- Success rates (Rosenthal and Rubin's BESD)
  - Proportion of "successes" in the treatment and comparison groups, assuming an overall success rate of 50%
  - Can be adapted to alternative overall success rates
- Example using the sex offender data
  - Assuming a comparison-group recidivism rate of 15%, the effect size of 0.45 for the cognitive-behavioral treatments translates into a recidivism rate of 7% for the treatment group
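The slides do not state which conversion produces the 7% figure; one standard route that reproduces it is the normal-deviate (probit) translation, which shifts the comparison group's z-score by the standardized mean difference. A minimal sketch, assuming scipy is available and using an illustrative function name:

```python
from scipy.stats import norm

def translate_smd_to_rate(d: float, comparison_rate: float) -> float:
    """Translate a standardized mean difference into a treatment-group
    event rate, given the comparison-group rate, via the probit method.

    Here the "event" is a failure (e.g., recidivism), so a positive d
    (treatment benefit) lowers the treatment-group rate.
    """
    z_comparison = norm.ppf(1 - comparison_rate)  # z for avoiding the event
    return 1 - norm.cdf(z_comparison + d)

# Slide example: comparison recidivism of 15%, d = 0.45
print(round(translate_smd_to_rate(0.45, 0.15), 3))  # ~0.069, i.e., about 7%
```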

Practical Meta-Analysis -- D. B. Wilson 5
Translation of Effect Sizes
- An odds ratio can be translated back into proportions, but you need to "fix" either the treatment proportion or the control proportion.
- Example: an odds ratio of 1.42 translates into a treatment success rate of 59% relative to a success rate of 50% for the control group.
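The arithmetic behind this translation is short: fix the control proportion, scale its odds by the odds ratio, and convert the resulting odds back to a proportion. A minimal sketch with an illustrative function name:

```python
def or_to_treatment_rate(odds_ratio: float, control_rate: float) -> float:
    """Convert an odds ratio into a treatment-group success rate,
    holding the control-group rate fixed.

    treatment odds = OR * control odds; rate = odds / (1 + odds)
    """
    control_odds = control_rate / (1 - control_rate)
    treatment_odds = odds_ratio * control_odds
    return treatment_odds / (1 + treatment_odds)

# Slide example: OR = 1.42 with a 50% control success rate
print(round(or_to_treatment_rate(1.42, 0.50), 3))  # ~0.587, i.e., about 59%
```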

Practical Meta-Analysis -- D. B. Wilson 6
Methodological Adequacy of Research Base
- Findings must be interpreted within the bounds of the methodological quality of the research base synthesized.
- Studies often cannot simply be grouped into "good" and "bad" studies.
- Some methodological weaknesses may bias the overall findings; others may merely add "noise" to the distribution.

Practical Meta-Analysis -- D. B. Wilson 7
Confounding of Study Features
- Relative comparisons of effect sizes across studies are inherently correlational!
- Important study features are often confounded, obscuring the interpretive meaning of observed differences.
- If the confounding is not severe and you have a sufficient number of studies, you can model "out" the influence of method features to clarify substantive differences (see the sketch below).
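One way to "model out" method features is inverse-variance weighted meta-regression: regress the effect sizes on both the method features and the substantive moderators, then read the substantive coefficient net of method differences. A minimal sketch, assuming statsmodels and entirely made-up data and variable names:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: one effect size per study, its sampling variance,
# a method feature (1 = randomized design), and a substantive moderator
# (1 = cognitive-behavioral treatment). All values are invented.
es = np.array([0.10, 0.35, 0.42, 0.20, 0.55, 0.30])
var = np.array([0.02, 0.03, 0.01, 0.04, 0.02, 0.03])
randomized = np.array([0, 1, 1, 0, 1, 0])
cbt = np.array([0, 0, 1, 0, 1, 1])

# Inverse-variance weighted regression: the coefficient on `cbt` is the
# substantive difference after adjusting for the method feature.
X = sm.add_constant(np.column_stack([randomized, cbt]))
model = sm.WLS(es, X, weights=1.0 / var).fit()
print(model.params)  # [intercept, randomized, cbt]
```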

Practical Meta-Analysis -- D. B. Wilson 8
Final Comments
- Meta-analysis is a replicable and defensible method of synthesizing findings across studies.
- Meta-analysis often points out gaps in the research literature, providing a solid foundation for the next generation of research on that topic.
- Meta-analysis illustrates the importance of replication.
- Meta-analysis facilitates generalization of the knowledge gained through individual evaluations.

Practical Meta-Analysis -- D. B. Wilson 9 Application of Meta-Analysis to Your Own Research Areas