Fixed-Effects and Random-Effects Models


1 Fixed-Effects and Random-Effects Models
Created for the third edition of the Users' Guides to the Medical Literature.

2 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

3 Objective To explore in greater detail the differences between fixed-effects and random-effects models used in meta-analyses. This Education Guide presents an advanced topic in systematic reviews. More information on systematic reviews and meta-analyses can be found in the Education Guides “Appraising Evidence From a Systematic Review and Meta-analysis” and “Understanding and Applying the Results of a Systematic Review and Meta-analysis,” also available on JAMAevidence.

4 Five A’s of EBM: Ask, Acquire, Appraise, Apply, Act (a cycle centered on the Patient)
EBM, evidence-based medicine. This slide returns to the evidence cycle first explained in the Education Guide “An Approach to Evidence-Based Medicine.” This Education Guide focuses on the “Appraise” step in the evidence cycle, as we discuss how to appraise evidence from a meta-analysis that uses either a fixed-effects or a random-effects model.

5 Models for Combining Data
In a meta-analysis, results from 2 or more primary studies can be combined statistically using a fixed-effects or a random-effects model You can consider the differences between the 2 models by looking at Underlying assumptions Statistical considerations How choice of model affects results

6 However ... This is a controversial area within the field of meta-analysis. Even expert statisticians may disagree with the characterizations on the subsequent slides. Our approach is largely consistent with that of the Cochrane Collaboration.

7 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

8 Analogy You enroll 50 teachers in a study of a new math curriculum
For each teacher, you randomize the classes; half of the classes receive the old curriculum and half receive the new one You then evaluate the effectiveness of the curricula in optimizing student test scores What is this experiment trying to answer? There is more than 1 possibility, with more than 1 underlying assumption

9 Two Possible Scenarios
Among these 50 teachers and no others, what is the effect of the 2 curricula on student examination scores? Assumption: effect of new vs old curriculum is the same in all teachers Among all teachers who might ever teach this course, of whom these 50 are a random sample, what is the impact of the 2 curricula on examination scores? Assumption: effect of new vs old curriculum differs among teachers

10 Differences Between Scenarios
In terms of the questions: are we interested in the effect of the curricula in these 50 teachers or the effect in all teachers? In terms of assumptions: the relative effect of the old and new curricula is the same in each of these 50 teachers vs different across teachers

11 Completing the Analogy
Substitute “studies” for teachers and “therapies” for curricula and you have the questions and assumptions for fixed-effects (question 1) and random-effects (question 2) models

12 Fixed-Effects Methods
It is important to know that there are different statistical approaches for fixed-effects models Inverse variance Mantel-Haenszel method Peto odds ratio There is no consensus on which approach is best Rarely, the choice of method may yield noticeable differences in results For the inverse variance method, studies are combined and weighted by the inverse of their variance (see the sketch below). This method runs into problems when studies are small or have low event rates. Better applications of a fixed-effects model in this case are the Mantel-Haenszel method or the Peto odds ratio.
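To make the inverse variance method concrete, here is a minimal sketch of fixed-effect pooling in Python. The study estimates and standard errors are illustrative values, not data from any real meta-analysis, and the sketch assumes effects are pooled on the log odds ratio scale.

```python
import math

# Hypothetical log odds ratios and standard errors for 3 studies (illustrative only).
log_ors = [-0.35, -0.10, -0.25]
ses = [0.15, 0.20, 0.12]

# Inverse-variance weights: each study is weighted by 1 / (its variance).
weights = [1.0 / se ** 2 for se in ses]

# The fixed-effect pooled estimate is the weighted mean of the study estimates.
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# The variance of the pooled estimate is 1 / (sum of weights); 95% CI on the log scale.
se_pooled = math.sqrt(1.0 / sum(weights))
low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print("Pooled OR:", round(math.exp(pooled), 3))
print("95% CI:", round(math.exp(low), 3), "to", round(math.exp(high), 3))
```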

13 Random-Effects Methods
Multiple methods also exist for random-effects models; they differ in how they estimate between-study variability The most commonly used is the DerSimonian and Laird method (see the sketch below) Random-effects methods also can weight studies using either the inverse variance method or the Mantel-Haenszel method
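As an illustration of the DerSimonian and Laird approach, the sketch below estimates the between-study variance (tau²) from Cochran Q and then re-weights the studies. The numbers are hypothetical, and the method shown is the classic moment estimator; real meta-analysis software offers several alternatives.

```python
import math

# Hypothetical per-study log odds ratios and standard errors (illustrative only).
y = [-0.35, -0.10, -0.25, 0.05]
se = [0.15, 0.20, 0.12, 0.25]

w = [1.0 / s ** 2 for s in se]  # fixed-effect (inverse-variance) weights
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Cochran Q: weighted sum of squared deviations from the fixed-effect estimate.
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1

# DerSimonian-Laird moment estimate of the between-study variance (tau^2), truncated at 0.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance, then pool as before.
w_re = [1.0 / (s ** 2 + tau2) for s in se]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))

print("tau^2:", round(tau2, 4))
print("Random-effects pooled OR:", round(math.exp(pooled_re), 3))
print("95% CI:", round(math.exp(pooled_re - 1.96 * se_re), 3),
      "to", round(math.exp(pooled_re + 1.96 * se_re), 3))
```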

14 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

15 What Is a Fixed-Effects Model?
A fixed-effects model assumes that there is a single true value underlying all results of studies included in the meta-analysis If all studies that address the same question were infinitely large and completely free of bias, they would yield identical estimates of effect Observed estimates of effect differ from one another only because of random error The fixed-effects model assumes that any differences in patients enrolled, the way the intervention was administered, and the way the outcome was measured have no (or minimal) impact on the magnitude of effect.

16 What Is a Fixed-Effects Model?
A fixed-effects model does not consider between-study variability in results; the error term comes only from within-study variation Called the within-study variance This model aims to estimate this common-truth effect and the uncertainty about it

17 What Is a Random-Effects Model?
A random-effects model assumes that the studies included are a random sample of a population of studies that address the question posed in the meta-analysis Because there are inevitably differences in the patients, interventions, and outcomes among studies, each study estimates a different underlying true effect These effects are assumed to follow a normal distribution

18 What Is a Random-Effects Model?
The pooled estimate in a random-effects model is not a single effect of the intervention Rather, it is the mean effect across the different populations, interventions, and methods of outcome evaluation This model takes into account both within-study variability and between-study variability

19 Comparison of Models Fixed-Effects Models Random-Effects Models
Conceptual considerations
Fixed-effects models: estimate the effect in this sample of studies; assume effects are the same in all studies
Random-effects models: estimate the effect in a population of studies from which the available studies are a random sample; assume effects differ across studies and the pooled estimate is the mean effect
Statistical considerations
Fixed-effects models: variance is derived only from within-study variance
Random-effects models: variance is derived from both within-study and between-study variances
Practical considerations
Fixed-effects models: narrower CI; large studies have much more weight than small studies
Random-effects models: wider CI; large studies have more weight than small studies, but the gradient is smaller than in fixed-effects models

20 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

21 Differences in Results
Sometimes, results are similar from study to study For statistical pooling, this means that between-study variability can be fully explained by chance Between-study variance is estimated to be 0 Corresponds to an I2 of 0% Under these circumstances, fixed-effects and random-effects models will give identical results The I2 statistic is an estimate of heterogeneity. I2 can be calculated from Cochran Q according to the formula: I2 = 100% × (Cochran Q − degrees of freedom) / Cochran Q (see the sketch below). Any negative values of I2 are considered equal to 0, so that the range of I2 values is 0% to 100%, indicating no heterogeneity to high heterogeneity, respectively.
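A minimal sketch of that calculation, assuming Cochran Q and the degrees of freedom (number of studies minus 1) are already known; the numeric values are purely illustrative.

```python
def i_squared(cochran_q, degrees_of_freedom):
    """I^2 as a percentage, from Cochran Q and degrees of freedom (k - 1 studies)."""
    if cochran_q <= 0:
        return 0.0
    # Negative values are truncated at 0, so I^2 ranges from 0% to 100%.
    return max(0.0, 100.0 * (cochran_q - degrees_of_freedom) / cochran_q)

# Illustrative values: Q = 8.2 across 4 studies (df = 3) gives roughly 63% heterogeneity.
print(round(i_squared(8.2, 3), 1))
print(round(i_squared(2.0, 3), 1))  # Q below df: I^2 is truncated to 0
```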

22 Differences in Results
In approximately 40% of Cochrane meta-analyses of binary outcomes of RCTs, results are sufficiently similar across trials that variability can be explained by chance and I2 is 0 This situation occurs in a smaller percentage of meta-analyses of epidemiologic studies RCTs, randomized clinical trials.

23 Differences in Results
In another 40% or so of meta-analyses of RCTs, the estimated between-study variance is not 0 but not large Both fixed-effects and random-effects models provide quite similar results In the final 20%, the between-study variability is large, and fixed-effects and random-effects models yield disparate results that may have important implications

24 Effect of Model Choice on Precision
Because estimation of variance under the random-effects model includes between-study variability, when results vary across studies, the CI of the combined estimate will be wider The random-effects model generally produces a more conservative assessment of the precision of the summary estimate than the fixed-effects model
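The sketch below illustrates why the CI widens: adding an assumed between-study variance (tau²) to each study's variance lowers every weight, so the pooled variance grows. All values are hypothetical.

```python
import math

# Illustrative standard errors for 4 studies of similar size.
se = [0.20, 0.22, 0.21, 0.20]
tau2 = 0.06  # assumed between-study variance when results vary across studies

se_fixed = math.sqrt(1.0 / sum(1.0 / s ** 2 for s in se))
se_random = math.sqrt(1.0 / sum(1.0 / (s ** 2 + tau2) for s in se))

# The random-effects CI half-width (1.96 * SE) is wider whenever tau^2 > 0.
print("Fixed-effects 95% CI half-width: ", round(1.96 * se_fixed, 3))
print("Random-effects 95% CI half-width:", round(1.96 * se_random, 3))
```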

25 Hypothetical Example of Significant Variability
A, Random CI Wider Than Fixed CI Figure A shows 4 studies of equal sample size (you can tell because the width of the CI is the same in all 4 studies). There is a large amount of variability among the studies. As a result, the CI is much narrower for the fixed-effects model than for the random-effects model.

26 Hypothetical Example of Significant Variability
B, Hypothetical Example of Minimal Variability: Random CI Is Similar to Fixed CI Figure B illustrates a situation in which the results do not vary much among studies (ie, low heterogeneity), thus CIs of the 2 models become similar or the same.

27 Effect of Model Choice on Point Estimate
In both models, larger studies have larger weight A random-effects model gives smaller studies proportionally greater weight in the summary estimate Direction and magnitude of the summary estimate are therefore influenced relatively more by smaller studies A random-effects model thus generates summary estimates closer to the null result than fixed-effects estimates if smaller study results are closer to the null result than those from larger studies (see the sketch below) Studies with more events or more precise results may also have larger weight in both fixed-effects and random-effects models.
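To show how the relative weights shift, here is a sketch comparing fixed-effects and random-effects weights for one large trial and three small trials. The standard errors and the between-study variance are assumed values chosen only to illustrate the point.

```python
# Illustrative: one large trial (small SE) and three small trials (large SEs).
se = [0.05, 0.40, 0.45, 0.50]
tau2 = 0.09  # assumed between-study variance for this sketch

w_fixed = [1.0 / s ** 2 for s in se]
w_random = [1.0 / (s ** 2 + tau2) for s in se]

def relative_weights(weights):
    """Each study's share of the total weight, as a percentage."""
    total = sum(weights)
    return [round(100.0 * w / total, 1) for w in weights]

print("Fixed-effects relative weights (%): ", relative_weights(w_fixed))
print("Random-effects relative weights (%):", relative_weights(w_random))
# The large trial dominates the fixed-effects analysis (roughly 96% of the weight) but
# keeps only about half the weight once tau^2 is added to every study's variance.
```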

28 Effect of Model Choice on Point Estimate
If smaller studies are farther from the null result than larger studies, a random-effects model will produce larger estimates of beneficial or harmful effects than will a fixed-effects model Summary estimate derived from the random-effects model may be more susceptible to overestimates from small studies

29 Hypothetical Example of Significant Variability and Small Studies That Have Different Estimates Than Large Studies Random CI Wider Than Fixed CI and Point Estimate of Random Closer to Small Studies This figure shows the effect of small studies on the summary estimate using both the fixed-effects and the random-effects models.

30 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

31 Four Guidelines to Follow
Which model to believe? Statisticians and clinical trialists can be passionate about fixed-effects and random-effects models, and viewpoints differ The following 4 guidelines may help you decide which model to believe when results differ between the 2 models

32 Guideline 1 If there is little variability among studies, fixed-effects and random-effects point estimates and CIs will differ little

33 Guideline 2 Uncertainty about the accuracy/applicability of a point estimate increases with increasing variability in study results The random-effects model captures this uncertainty with wider CIs The random-effects model is also conceptually appealing: we are interested not just in the available studies but in applying their results to a wider population, and it is likely that true effects differ across populations and thus across studies

34 Guideline 3 Fixed-effects model is preferable when one study is much larger and more trustworthy than one or more smaller studies that address the same question and yield quite different results

35 Guideline 4 Fixed-effects model also may be preferable when the number of studies included in a meta-analysis is very small (< 5), leading to concern about inaccurate estimation of between-study variance

36 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

37 Example 1 You are a surgeon evaluating a patient presenting with a localized renal tumor You have 2 treatment options: partial or radical nephrectomy You are interested in knowing the relative impact of each of the 2 procedures on cancer-specific mortality A systematic review and meta-analysis compared the 2 interventions

38 Meta-analysis Comparing Partial and Radical Nephrectomy on Cancer-Specific Mortality
Kim SP, Thompson RH, Boorjian SA, et al. Comparative effectiveness for survival and renal function of partial and radical nephrectomy for localized renal tumors: a systematic review and meta-analysis. J Urol. 2012;188(1):51-57.

39 Example 1 The study authors presented results using both models
Under the fixed-effects model, results are statistically significant in favor of partial nephrectomy HR, 0.71; 95% CI, ; P < .01 However, using the random-effects model, results are no longer significant HR, 0.79; 95% CI, ; P = .17 HR, hazard ratio.

40 Example 1 This analysis was associated with substantial heterogeneity
The extreme differences in results substantially reduce confidence in the summary estimate of effect This reduced confidence is reflected in the wider CI of the random-effects model, which in this instance is more appropriate

41 Example 2 You are evaluating a patient presenting with myocardial infarction and recall that intravenous magnesium has been used in this setting You find a systematic review and meta-analysis that evaluated the effect of magnesium on mortality for patients with myocardial infarction

42 Meta-analysis Comparing Magnesium to Control Therapy in Patients With Acute Myocardial Infarction
Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database Syst Rev. 2007;(2):CD

43 Example 2 The meta-analysis included 22 trials, and the analysis was associated with moderate heterogeneity (I2 = 64%) Most of these trials were relatively small, but 2 were not Many of the small trials found an apparently significant reduction in mortality The 2 largest trials, however, found no benefit The 2 large trials are the Magnesium in Coronaries (MAGIC) trial, with about 6000 patients, and the Fourth International Study of Infarct Survival (ISIS-4) trial, with more than 58,000 patients.

44 Example 2 In this case, we are inclined to believe the results of the 2 large studies The random-effects model's results are therefore misleading The figure also shows the relative weight of each trial For example, the largest trial (ISIS-4) has a relative weight of almost 75% under the fixed-effects model, but only 18% under the random-effects model

45 Outline Introduction Fixed-effects vs random-effects models: an analogy Models for combining data Practical considerations When results differ between the 2 models Examples of differences in point estimates and confidence intervals Conclusion

46 Conclusion There is no right answer as to which model is best
With the knowledge you have of the differences between the 2 models, you can make your own choice. It may make little difference which model data analysts choose, but understanding the implications of their choice will help you make sense of situations in which there is large variability in study results.

47 Terms of Use: Users Guides to the Medical Literature Education Guides
PowerPoint Usage Guidelines JAMAevidence users may display, download, or print out PowerPoint slides and images associated with the site for personal and educational use only. Educational use refers to classroom teaching, lectures, presentations, rounds, and other instructional activities, such as displaying, linking to, downloading, printing, and making and distributing multiple copies of said isolated materials in both print and electronic format. Users will only display, distribute, or otherwise make such PowerPoint slides and images from the applicable JAMAevidence materials available to students or other persons attending in-person presentations, lectures, rounds, or other similar instructional activities presented or given by User. Commercial use of the PowerPoint slides and images is not permitted under this agreement. Users may modify the content of downloaded PowerPoint slides only for educational (non-commercial) use; however, the source and attribution may not be modified. Users may not otherwise copy, print, transmit, rent, lend, sell, or modify any images from JAMAevidence or modify or remove any proprietary notices contained therein, or create derivative works based on materials therefrom. They also may not disseminate any portion of the applicable JAMAevidence site subscribed to hereunder through electronic means except as outlined above, including mail lists or electronic bulletin boards.

48 Created by Gordon Guyatt, MD, Kate Pezalla, MA, and Annette Flanagin, RN, MA

