Statistical Models in Meta-Analysis: T_i ~ N(θ_i, σ_i²), θ_i ~ N(μ, τ²)

Presentation transcript:

Statistical Models in Meta-Analysis: T_i ~ N(θ_i, σ_i²), θ_i ~ N(μ, τ²)

Start with a Question!
Male mating history and female fecundity in the Lepidoptera: do male virgins make better partners? (Torres-Vila and Jennions 2005)
Image: Agrotis segetum (Wikipedia)

Metric: Compare Virgin vs. Experienced Males in Number of Eggs Produced
Hedges' D, converted from raw data or correlations.

Data Visualization: Why?

Assumptions: No Effect of Sample Size

Assumption: No Bias in the Pool of Studies

Visualize the Data: Forest Plot
library(metafor)   # provides rma() and forest()
gm_mod <- rma(Hedges.D, Var.D, data=lep, method="FE")
forest(gm_mod, cex=1.3, slab=lep$Species)
text(6, 27, "Hedges' D [95% CI]", pos=2, cex=1.3)

Order by Effect Size (order="obs")

Order by Precision (order="prec")
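As a minimal sketch (reusing the gm_mod object fit above), both orderings are passed straight to metafor's forest() method:
forest(gm_mod, order="obs", cex=1.3, slab=lep$Species)    # rows sorted by observed effect size
forest(gm_mod, order="prec", cex=1.3, slab=lep$Species)   # rows sorted by precision (small sampling variance first)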

Groupings to Investigate Hypotheses

Data Visualization: Why?
See the data!
Find pathological behaviors.
Understand the influence of individual studies on results.
Evaluate sources of heterogeneity.

A First Stab at a Model: Fixed Effects
Starting model: there is only one grand mean, everything else is error.
T_i = θ_i + e_i, where θ_i = μ and e_i ~ N(0, σ_i²)

A First Stab at a Model: Fixed Effects
Starting model: there is only one grand mean, everything else is error.
T_i ~ N(θ_i, σ_i²), where θ_i = μ

There’s a Grand Mean. Everything else is noise.

Meta-Analytic Mean: Estimation
Study weight: w_i = 1 / σ_i²
Grand mean: μ = Σ w_i T_i / Σ w_i
A weighted mean, normalized over all weights: w_i / Σ w_i gives you the individual study weight.
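A minimal R sketch of these two quantities (not from the original slides), using the lep data frame and the Hedges.D and Var.D columns from the earlier code:
w  <- 1 / lep$Var.D                    # fixed-effect study weights, w_i = 1 / sigma_i^2
mu <- sum(w * lep$Hedges.D) / sum(w)   # weighted grand mean
w / sum(w)                             # each study's share of the total weight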

Is Our Effect Different from 0?
Study weight: w_i = 1 / σ_i²
Variance of the mean: s_μ² = 1 / Σ w_i
t-test: (μ − μ_0) / s_μ with K−1 DF
Confidence interval: μ ± s_μ t_{α/2, K−1}
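Continuing that sketch, the variance, t-test, and confidence interval from this slide can be computed by hand:
k      <- nrow(lep)                             # number of studies, K
s_mu   <- sqrt(1 / sum(w))                      # standard error of the grand mean
t_stat <- (mu - 0) / s_mu                       # test against mu_0 = 0, K-1 df
2 * pt(-abs(t_stat), df = k - 1)                # two-sided p-value
mu + c(-1, 1) * qt(0.975, df = k - 1) * s_mu    # 95% confidence interval
Note that rma() reports a z-test and a z-based interval by default, so its p-value and CI will differ slightly from this t-based version.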

Fitting a Fixed Effect Model in R
> rma(Hedges.D, Var.D, data=lep, method="FE")
Fixed-Effects Model (k = 25)
Test for Heterogeneity: Q(df = 24) = , p-val =
Model Results:
estimate   se   zval   pval   ci.lb   ci.ub   <

Why Is a Grand Mean Not Enough?
Maybe there's a different mean for each study, drawn from a distribution of effects?
Maybe other factors lead to additional variation between studies?
Maybe both!
If so, we expect more heterogeneity between studies than simple residual error would predict.

Assessing Heterogeneity
Q_t = Σ w_i (T_i − μ)²
Q_t follows a χ² distribution with K−1 DF.
Test for Heterogeneity: Q(df = 24) = , p-val =

How Much Variation Is Between Studies?
I² = 100% × (Q_t − (K−1)) / Q_t
The percent of heterogeneity due to between-study variation.
If there is no heterogeneity, I² = 0.
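A sketch of both statistics, continuing from the weights w, mean mu, and study count k computed above:
Q <- sum(w * (lep$Hedges.D - mu)^2)          # total heterogeneity, Q_t
pchisq(Q, df = k - 1, lower.tail = FALSE)    # chi-square test with K-1 df
100 * max(0, (Q - (k - 1)) / Q)              # I^2, percent between-study heterogeneity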

Solutions to Heterogeneity
Allow variation in θ: Random Effects Model
Model drivers of heterogeneity: Fixed Effects (with moderators)
Model drivers & allow variation in θ: Mixed Effects Model

The Random Effects Approach
Each study has an effect size, drawn from a distribution.
We partition within- and between-study variation.
We can 'borrow' information from each study to help us estimate the effect size for each study.

A First Stab at a Model: Fixed Effects
Starting model: there is only one grand mean, everything else is error.
T_i ~ N(θ_i, σ_i²), where θ_i = μ

The Random Effects Model
Each study's mean is drawn from a distribution, everything else is error.
T_i ~ N(θ_i, σ_i²), θ_i ~ N(μ, τ²)

The Random Effects Model
Each study's mean is drawn from a distribution, everything else is error.
T_i = θ_i + e_i, where θ_i ~ N(μ, τ²) and e_i ~ N(0, σ_i²)

There is a Grand Mean.
Study means are drawn from a distribution around it.
Also, there's additional error…

What is Different?
Fixed effect study weight: w_i = 1 / σ_i²
Random effect study weight: w_i* = 1 / (σ_i² + τ²)
Estimate μ and s_μ in exactly the same way as before!

What is τ²?
There are different approximations.
We will use the DerSimonian-Laird method.
It varies with model structure.
Can be biased! (use ML)

What is τ²?
DerSimonian-Laird estimator: τ² = (Q_t − (K−1)) / (Σ w_i − Σ w_i² / Σ w_i)
Q_t, w_i, etc. are all from the fixed-effect model.
This formula varies with model structure.
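A sketch of the DerSimonian-Laird estimate and the resulting random-effects weights and mean, reusing Q, w, and k from the fixed-effect calculations above (illustrative, not the original slides' code):
C_dl  <- sum(w) - sum(w^2) / sum(w)             # scaling term built from the fixed-effect weights
tau2  <- max(0, (Q - (k - 1)) / C_dl)           # DerSimonian-Laird estimate of tau^2
w_re  <- 1 / (lep$Var.D + tau2)                 # random-effects study weights
mu_re <- sum(w_re * lep$Hedges.D) / sum(w_re)   # random-effects grand mean
sqrt(1 / sum(w_re))                             # its standard error
These should match the tau^2 and estimate reported by rma(..., method="DL") below.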

Fitting a Random Effect Model in R
> rma(Hedges.D, Var.D, data=lep, method="DL")
Random-Effects Model (k = 25; tau^2 estimator: DL)
tau^2 (estimated amount of total heterogeneity):  (SE = )
tau (square root of estimated tau^2 value):
I^2 (total heterogeneity / total variability): 47.47%
H^2 (total variability / sampling variability): 1.90
Test for Heterogeneity: Q(df = 24) = , p-val =
Model Results:
estimate   se   zval   pval   ci.lb   ci.ub   <

What if Variation Between Groups Drives Excess Heterogeneity?
Moths (Heterocera) vs. Butterflies (Rhopalocera)

A Fixed Effect Group Model
Starting model: there are group means, everything else is error.
T_i ~ N(θ_im, σ_i²), where θ_im = μ_m

Estimation of Fixed Effect Models with Groups
Just add group structure to what we have already done.

Estimation of Fixed Effect Models with Groups
Study weight: w_i = 1 / σ_i²
Group mean: μ_m = Σ_{i∈m} w_i T_i / Σ_{i∈m} w_i
A weighted mean normalized over all weights in group m!
Group variance: s²_m = 1 / Σ_{i∈m} w_i
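As an illustration (assuming Suborder is the grouping column, as in the moderator model fit later), the group means and variances can be computed with the same weights:
W_m   <- tapply(w, lep$Suborder, sum)                        # total weight in each group
mu_m  <- tapply(w * lep$Hedges.D, lep$Suborder, sum) / W_m   # weighted mean per group
1 / W_m                                                      # variance of each group mean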

So… Does Group Structure Matter?
Q_t = Q_m + Q_e
We can now partition total variation into that driven by the model versus residual heterogeneity.
Kinda like ANOVA, no?

Within-Group Residual Heterogeneity
Q_e,m = Σ_{i∈m} w_i (T_i − μ_m)², the weighted sum of squared residuals from a group's mean.
K_m − 1 degrees of freedom, where K_m is the sample size within the group.
Tells you whether there is excessive residual heterogeneity within a group.

Total Residual Heterogeneity
Q_e = Σ_m Q_e,m = Σ w_i (T_i − μ_m)², the weighted sum of squared residuals from the group means.
n − M degrees of freedom, where M is the number of groups.
Is there still excessive heterogeneity in the data?

Modeled Heterogeneity
Q_m = Σ_m W_m (μ_m − μ)², the weighted sum of squared deviations of the group means from the grand mean.
W_m = sum of the weights in group m.
M − 1 degrees of freedom for a χ² test.
Does our model explain some heterogeneity?
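A sketch of the partition, continuing from the group means above; the last line checks that Q_t = Q_m + Q_e:
Qe <- sum(w * (lep$Hedges.D - mu_m[as.character(lep$Suborder)])^2)   # residual heterogeneity, n - M df
Qm <- sum(W_m * (mu_m - mu)^2)                                       # modeled heterogeneity, M - 1 df
pchisq(Qm, df = length(W_m) - 1, lower.tail = FALSE)                 # does the grouping explain heterogeneity?
all.equal(Q, Qm + Qe)                                                # Q_t = Q_m + Q_e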

Put it to the Test!
> rma(Hedges.D ~ Suborder, Var.D, data=lep, method="FE")
Fixed-Effects with Moderators Model (k = 25)
Test for Residual Heterogeneity: QE(df = 23) = , p-val =
Test of Moderators (coefficient(s) 2): QM(df = 1) = , p-val =

Put it to the Test!
> rma(Hedges.D ~ Suborder, Var.D, data=lep, method="FE")
...
Model Results:
            estimate   se   zval   pval   ci.lb   ci.ub
intrcpt     <
SuborderR
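One optional follow-up, an assumption about workflow rather than something shown in the original slides: dropping the intercept from the moderator formula should report each suborder's meta-analytic mean directly, instead of a contrast against the reference level.
rma(Hedges.D ~ Suborder - 1, Var.D, data=lep, method="FE")   # one coefficient per suborder (hypothetical usage)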