1 Modeling Study Outcomes. Consider the $k$ studies as coming from a population of effects we want to understand. One way to model effects in meta-analysis is to use random effects (think of ANOVA or HLM models). If instead we are interested only in the set of studies we have in hand, so that all the "levels" of interest are right here, we have a fixed-effects model. With that model we also assume all studies come from a single population effect.

2 Modeling Study Outcomes. Let us call the effect sizes $T_i$, for $k$ independent studies $i = 1$ to $k$, so we will have $T_1, T_2, \ldots, T_k$. We begin with a model for each effect, just as in primary research.

3 Modeling Study Outcomes. In meta-analysis, we model the study outcome $T_i$. The simplest model is the random-effects model. For studies $i = 1$ to $k$,

$$T_i = \theta_i + e_i,$$

where $T_i$ is the observed study outcome, $\theta_i$ is the population parameter for study $i$, and $e_i$ is the residual deviation due to sampling error. We assume that the parameters (the $\theta_i$) vary, as do the sample values $T_i$.
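To make the model concrete, here is a minimal simulation sketch (mine, not from the slides; the mean effect, between-study variance, and sampling variances are all hypothetical values):

```python
import numpy as np

rng = np.random.default_rng(1)

k = 19                           # number of studies
mu, tau2 = 0.1, 0.03             # hypothetical mean and variance of the true effects
v = rng.uniform(0.01, 0.10, k)   # hypothetical sampling variances V_i

theta = rng.normal(mu, np.sqrt(tau2), k)  # true study effects theta_i
T = rng.normal(theta, np.sqrt(v))         # observed outcomes T_i = theta_i + e_i
print(np.round(T, 2))
```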

4 Graphic representation of random-effects model. This figure is from Viechtbauer (2007), Zeitschrift für Psychologie. Smaller $n$'s lead to more sampling variation (and wider CIs); the reverse is true for larger studies like this one.

5 Graphic representation of random-effects model. Our problem is that we must work backwards: we begin with the sample data, our set of effects, but we do not know what parameters generated these effects. However, we do know each effect's sampling variation, based on $V_i$ and the CIs.

6 Graphic representation of random-effects model. We will plot the confidence intervals using the variance formula for our effect size (let's say these are $d$ values, estimating $\delta$s).

[Schematic plot: four o's with confidence-interval bars along a $d$ axis.]

Each o represents a $d$ value; this one (say, $d_4$) has a CI that begins at about 0.17.

7 Graphic representation of random-effects model. Depending on how wide the CIs are, we may "see" more or less variation in the true $\delta$ values.

[Schematic plot: the same effects with narrow confidence-interval bars along the $d$ axis.]

These narrow CIs make the effects look farther apart (more variable), because these studies are very precise.

8 Graphic representation of random-effects model. The same effects look different with different CIs drawn around them.

[Schematic plot: the same effects with wide confidence-interval bars along the $d$ axis.]

These wide CIs make it seem that the true effects probably do not vary greatly, or, if they do, that the studies are too imprecise to detect it.

9 Fixed-Effects (FE) Model. If all population parameters are equal ($\theta_i = \theta$), we have the fixed-effects model:

$$T_i = \theta + e_i \quad \text{for } i = 1 \text{ to } k,$$

where $T_i$ is the observed study outcome, $\theta$ is the single population parameter, and $e_i$ is the residual deviation due to sampling error. All studies are modeled as having the same effect $\theta$.

10 Graphic representation of fixed-effects model. The "distribution" in the first panel is now a single value. In this case the distributions below would all shift to be in line with the single $\theta$ value, and the effects would be closer together.

[Schematic plot: the effects shown as x's rather than o's, clustered along the $d$ axis.]

11 Variances under fixed-effects model. In these models the $e_i$ represent sampling error, so the variance of $T_i$ is

$$V(T_i) = V(\theta_i) + V(e_i).$$

If $\theta$ is a constant, then $V(T_i) = V(e_i) = V_i$ ($V_i$ will be our symbol for the FE variance). Under the fixed-effects model, all variation is conceived as being due to sampling error. This is why all the distributions lined up in the last slide.

12 Variances under random-effects (RE) model. If the variance of $T_i$ is $V(T_i) = V(\theta_i) + V(e_i)$, and the $\theta_i$ do vary, then we say $V(\theta_i) = \sigma^2_\theta$, and then

$$V(T_i) = \sigma^2_\theta + V_i.$$

This is the RE variance. Under the random-effects model, variation comes from sampling error AND true differences in effects.
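A quick numeric check of this decomposition (a sketch using hypothetical values for $\sigma^2_\theta$ and a common $V_i$):

```python
import numpy as np

rng = np.random.default_rng(2)
tau2, v_i = 0.03, 0.05          # hypothetical sigma^2_theta and a common V_i

theta = rng.normal(0.1, np.sqrt(tau2), 200_000)  # simulated true effects
T = rng.normal(theta, np.sqrt(v_i))              # simulated observed effects
print(T.var())                  # ~0.08 = tau2 + v_i, the RE variance
```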

13 Fixed-Effects Model. More specific fixed-effects models are, for correlations, $r_i = \rho + e_i$, and for effect sizes, $d_i = \delta + e_i$. $V(e_i)$, or $V_i$, is estimated as above, e.g., for $r_i$,

$$V_i = \mathrm{Var}(r_i) = \frac{(1 - r_i^2)^2}{n_i - 1},$$

and for $d_i$,

$$V_i = \frac{n_i^E + n_i^C}{n_i^E \, n_i^C} + \frac{d_i^2}{2(n_i^E + n_i^C)}.$$
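A short sketch of these two variance formulas in code (the function names are mine):

```python
def var_r(r, n):
    """Large-sample variance of a correlation r from n observations."""
    return (1 - r**2) ** 2 / (n - 1)

def var_d(d, n_e, n_c):
    """Large-sample variance of a standardized mean difference d,
    with experimental and control group sizes n_e and n_c."""
    return (n_e + n_c) / (n_e * n_c) + d**2 / (2 * (n_e + n_c))

print(var_r(0.30, 100))     # ~0.00837
print(var_d(0.50, 50, 50))  # 0.04125
```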

14 In the RE case each population has a different effect, and we estimate the amount of uncertainty (variation) due to those differences, or we try to predict it. So: under the simple fixed-effects model, we estimate a "common" effect; under the simple random-effects model, we estimate an "average" effect.

15 Estimating Common or Average Effects. One goal in most meta-analyses is to examine overall, or typical, effects, and to test them, as in

$$H_0: \theta = 0.$$

Here $\theta$ could represent any effect size, such as $\rho$ or $\delta$. We can write the hypothesis using more specific symbols, e.g., $H_0: \delta = 0$.

16 Estimating Common or Average Effects. Under the random-effects model, we test

$$H_0: \theta_\cdot = 0, \quad \text{e.g., } H_0: \rho_\cdot = 0,$$

that is, that the average of the $\theta_i$ values is zero (e.g., that the average of the population correlations is zero). We will learn how to do this test shortly.
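As a preview of that test, here is a sketch (mine, not the deck's code) of the usual z test of an inverse-variance weighted mean against zero; under the FE model the weights are $w_i = 1/V_i$, and under the RE model one would instead use $w_i^* = 1/(V_i + \sigma^2_\theta)$:

```python
import numpy as np
from scipy.stats import norm

def weighted_mean_test(T, v):
    """z test of H0: average effect = 0, with inverse-variance weights."""
    T, w = np.asarray(T), 1 / np.asarray(v)
    T_bar = np.sum(w * T) / np.sum(w)  # weighted mean effect
    se = np.sqrt(1 / np.sum(w))        # standard error of the weighted mean
    z = T_bar / se
    p = 2 * norm.sf(abs(z))            # two-sided p-value
    return T_bar, se, z, p
```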

17 Testing Consistency or Homogeneity. Another hypothesis we typically test in meta-analysis is that all studies arise from one population:

$$H_0: \theta_1 = \cdots = \theta_k = \theta, \quad \text{or} \quad H_0: \sigma^2_\theta = 0.$$

This is also the test of whether the fixed-effects model is appropriate.

18 Testing Consistency or Homogeneity. The test statistic is

$$Q = \sum_{i=1}^{k} w_i (T_i - \bar{T}_\cdot)^2,$$

where $w_i = 1/V_i$ is an inverse-variance weight from the FE model and $\bar{T}_\cdot$ is the weighted mean effect.

19 Testing Consistency or Homogeneity. Parts of the Q statistic may look familiar: each $(T_i - \bar{T}_\cdot)^2$ is a squared deviation from the mean, so Q is a weighted variance. Under $H_0$, $Q \sim \chi^2$ with $k - 1$ df.
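In code, Q is just a weighted sum of squared deviations from the weighted mean, referred to a chi-square with $k - 1$ df (a sketch, not the deck's software):

```python
import numpy as np
from scipy.stats import chi2

def q_test(T, v):
    """Homogeneity test: Q = sum of w_i (T_i - T_bar)^2 with w_i = 1/V_i."""
    T, w = np.asarray(T), 1 / np.asarray(v)
    T_bar = np.sum(w * T) / np.sum(w)  # FE weighted mean
    Q = np.sum(w * (T - T_bar) ** 2)
    p = chi2.sf(Q, df=len(T) - 1)      # upper-tail p from chi-square(k-1)
    return Q, p
```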

20 Testing Consistency or Homogeneity. A large Q means our results do not all agree; we may say the results are "heterogeneous" or "inconsistent." Some researchers prefer not to test Q but simply to assume that the effects vary, and to estimate their true variance.

21 Testing Consistency or Homogeneity. We can see that the Q statistic can get large for two reasons: (1) a $T_i$ is far from the mean $\bar{T}_\cdot$; (2) a $V_i$ is small, so the weight $w_i = 1/V_i$ is large (which happens when the sample is big).

22 [Forest plot: each study's effect size T with its lower (LLIM) and upper (ULIM) confidence limits.]

This is the teacher expectancy data. The plot shows confidence intervals for the effect sizes, which are all $d$ values, computed from each $T_i$ and its variance $V_i$. We can see one interval that appears to be larger than the others; its effect size is above 1 SD! Also there is a fair amount of spread: there is no place where a vertical line would cross all of the intervals (a quick test for homogeneity).

23 This output shows SPSS GLM output for Q and the FE mean. Do NOT use the F test or the SEs from this output!

24 Q is a chi-square with 18 df ($k - 1 = 18$). Q is significant at the .01 level ($p < .01$). This means that the FE mean of 0.06 shown above should NOT be used to represent the overall result for these data; we need a random-effects mean, which we will discuss later. This result is not surprising given the spread in the CI plot. We will return to the teacher expectancy data later.

25 Other Indices: The Birge Ratio. The ratio of a chi-square to its degrees of freedom can provide a scale-free index of variability. Birge used the value Q/df, which we will call B. Since the df is the expected value of a chi-square, when the chi-square shows only random variation (and thus is not much larger than its df), B is close to 1.

26 The Birge Ratio. The ratio B is therefore larger than 1 when the results of a set of studies are heterogeneous (i.e., more varied than we would expect from sampling error alone). So we can compute $B_{Total} = Q_{Total}/(k - 1)$, and we can use $B_{Total}$ to compare data sets of different sizes to see whether one is more heterogeneous.

27 Other Indices: I squared. The Q test has been used in one other way to get an index of heterogeneity. It is something like a percentage and is called $I^2$. We compute

$$I^2 = 100 \times \frac{Q_{Total} - (k-1)}{Q_{Total}} = 100 \times \left[1 - \frac{k-1}{Q}\right].$$

If Q is much larger than its degrees of freedom, the numerator $Q_{Total} - (k-1)$ will be large. If $Q < k - 1$, there is little variation, and we set the value of $I^2$ to zero.
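Both indices are simple functions of Q and k; plugging in the teacher expectancy values from the next slide reproduces them (a sketch):

```python
def birge_ratio(Q, k):
    """B = Q / (k - 1); near 1 when only sampling variation is present."""
    return Q / (k - 1)

def i_squared(Q, k):
    """I^2 = 100 [Q - (k-1)] / Q, set to zero when Q < k - 1."""
    return max(0.0, 100 * (Q - (k - 1)) / Q)

print(birge_ratio(35.83, 19))  # ~1.99
print(i_squared(35.83, 19))    # ~49.8
```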

28 Q, the Birge ratio, and $I^2$ for the teacher expectancy data:

Q_Total    (k-1)    B_Total = Q_Total/(k-1)    I^2
35.83      18       1.99                       49.8%

This reflects a significant, but only moderate, amount of variation.

29 This output shows SPSS output for the Q and RE mean for these data. Some things here we have not yet discussed.

Fixed-effects Homogeneity Test (Q)                                     35.83
P-value for Homogeneity test (P)                                       .0074
Birge's ratio, ratio of Q/(k-1)                                        1.99
I-squared, ratio 100 [Q-(k-1)]/Q                                       .4976
Variance Component based on Homogeneity Test (QVAR)                    .0259
Variance Component based on S^2 and v-bar (SVAR)                       .0804
RE Lower Conf Limit for T_Dot (L_T_DOT)                                -.0410
Weighted random-effects average of effect size based on QVAR (T_DOT)  .1143
RE Upper Conf Limit for T_Dot (U_T_DOT)                                .2696
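The variance component "based on the homogeneity test" (QVAR) is typically the method-of-moments (DerSimonian-Laird) estimator; assuming that is what this macro computes, here is a sketch of QVAR and the QVAR-weighted RE mean with its 95% CI:

```python
import numpy as np

def dl_random_effects(T, v):
    """Method-of-moments variance component and RE weighted mean with 95% CI."""
    T, v = np.asarray(T), np.asarray(v)
    w = 1 / v
    k = len(T)
    T_bar = np.sum(w * T) / np.sum(w)            # FE weighted mean
    Q = np.sum(w * (T - T_bar) ** 2)             # homogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)           # variance component (QVAR)
    w_star = 1 / (v + tau2)                      # RE weights
    T_dot = np.sum(w_star * T) / np.sum(w_star)  # RE average effect (T_DOT)
    se = np.sqrt(1 / np.sum(w_star))
    return tau2, T_dot, (T_dot - 1.96 * se, T_dot + 1.96 * se)
```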