Methods for synthesizing evidence of the effects of healthcare interventions in systematic reviews of complex interventions (including statistical approaches)


1 Methods for synthesizing evidence of the effects of healthcare interventions in systematic reviews of complex interventions (including statistical approaches) Joanne McKenzie, School of Public Health and Preventive Medicine 19th Cochrane Colloquium Madrid 2011

2 Google’s take on complexity …
Methods for synthesizing evidence of the effects of healthcare interventions October 2011

3 Acknowledgements Sue Brennan, Monash University, Australia
Sophie Hill, La Trobe University, Australia

4 Outline
Components of complexity in a systematic review
Summary to synthesis (pros and cons)
Available methods
Outcome categorisation
Conclusions and questions raised

5 Splitting versus lumping
Intervention: narrow form (e.g. audit cycles) vs any form (e.g. any QI intervention)
Setting: one setting (e.g. primary care) vs all settings
Condition: one condition (e.g. diabetes) vs any condition
Outcome: consistent outcomes (e.g. test ordering) vs diverse outcomes
Study design: one design (e.g. RCT) vs multiple designs (e.g. RCT, ITS, CBA)
Outcomes: largely as a result of the condition and setting, we may have consistency in the outcomes reported (e.g. primary care + diabetes + practitioner process outcome = test ordering of HbA1c). Greater diversity in setting and condition leads to greater diversity in outcomes, and inconsistency in outcomes across studies.
Study designs: systematic reviews of complex interventions often include designs beyond randomised trials because randomisation is often not feasible (e.g. mass media interventions). The interventions are often targeted at clusters (e.g. organisations, practices, practitioners) while measurement is undertaken at the level of the patients, so correlation within clusters needs to be accounted for; this frequently does not occur. Non-randomised studies are often at a higher risk of bias. These features all lead to heterogeneous data.

6 Summary versus synthesis
Summary: text; tabular.
Synthesis:
Vote counting: text; tabular; harvest plots.
Summary of effect estimates: descriptive statistics; box and whisker plots.
Meta-analysis: meta-analysis; prediction intervals; forest plots.
Exploring heterogeneity: sub-group analysis; meta-regression; graphical approaches.
This slide depicts possible ways of considering the difference between summary and synthesis, and includes some of the methods that might be used to summarise and synthesize effects from complex reviews. It is not a comprehensive list of the methods available. Reviewers often assume a dichotomy of narrative summary vs meta-analysis, but there is a range of options, and different synthesis approaches are underutilised in systematic reviews of complex interventions. When data are not presented in an interpretable way, readers and decision makers will use ad hoc approaches to make sense of them; they may end up using a method that you ultimately would not want them to. Some of the pros and cons of these methods are considered in this talk.

7 Summary: pros and cons
Results of studies summarised:
in the text of a publication, without the use of a synthesis method;
in tables, providing a structured method for presenting data (e.g. by comparison, outcome (professional performance such as adherence to recommended practice; patient outcomes), study design, potential effect modifiers).
Pros. Text: provides an assembly of the available research meeting the inclusion/exclusion criteria. Tables: more likely to report all results of all outcomes (i.e. less likely to selectively include results); results are available for others to synthesize.
Cons. Text: results are summarised, not synthesized; little structure in reporting the results may lead to selective reporting (privileging some findings above others); interpretation of the results is difficult or impossible. Tables: results are summarised, not synthesized; an overwhelming amount of information that is difficult for a reader to interpret (often multiple outcomes per study).


9 Systematic review investigating the effects of patient-held medical records for patients with chronic disease (Ko H et al. Patient-held medical records for patients with chronic disease: a systematic review. Qual Saf Health Care 2010;19(5):e41). This example illustrates the dichotomy that occurs between meta-analysis and narrative summary: “… a statistical meta-analysis was not appropriate, so we undertook a narrative synthesis.”

10 Provides an example of a narrative summary.
For each study, the number of statistically significant effects is tabulated; intervention effects and confidence intervals are reported only for statistically significant outcomes. This is not an efficient approach to presenting data and, importantly, as a reader the results are difficult to interpret. Does this fairly represent the evidence base?

11 Quality improvement education for clinicians [Boonyasai JAMA 2007]
Intervention, any form: QI education for trainees (10), QI education for nontrainees (5), ‘other’ interventions with an educational component (5), education within a QI collaborative (19)
Setting, any setting: ambulatory (22), inpatient/nursing home (10), mixed clinical (3), educational (4)
Condition, any condition/aspect of care: preventive care, diabetes, asthma, hypertension, HIV, renal failure, palliative care, coronary heart failure, stroke, pain management, falls prevention, neonatal infection, anticipatory guidance, compensation, wait times
Outcome, any outcome: attitude (6), knowledge (10), skill/behaviour (6), process (27), patient (18)
Study design, multiple designs: RCT (8), nonrandomised trial (14), pre/post or time series (17)
An illustration of complexity across all domains: multiple forms of the intervention, and multiple settings, conditions, and study designs. It also provides an example of how the reviewers dealt with heterogeneity in outcomes: as a way of categorising the effects, the authors defined five outcome categories and placed each outcome measure in a trial into one of them (example shown on the next slide). The first four outcome categories represent measures where inference is intended at the level of the practitioner; in the last, inference is intended at the level of the patient.
[Intervention: all interventions target practicing clinicians except ‘QI education for trainees’. Interventions must involve teaching ‘QI theory’. There was large variation in the ‘intensity’ of the education, from informal weekly discussions of QI over 10 weeks to a 4-year program integrated into a medical curriculum.]
[Setting was general; only dentists and clinical laboratory workers were excluded.]
[Study designs: unrestrictive criteria regarding the type of study design; in addition to RCTs, non-randomised studies were included (pre/post evaluations with no control group, time series, non-randomised controlled studies).]

12 27/39 studies measured process outcomes (3 example studies), spanning 15 clinical areas [Boonyasai JAMA 2007]
Example process outcomes from the results table for three studies (results not shown).
Hanson et al 2005: Setting: 9 US nursing homes. Learners: physicians, nurses, physician assistants, pharmacists, rehabilitation staff, nursing assistants, nurse practitioners. Study: uncontrolled pre/post. Outcomes: measured by chart review.
Margolis et al 2004: Setting: 22 intervention and 22 control US paediatric practices. Learners: paediatricians, family practitioners, nurses and administrators. Study: RCT time series. Outcomes: measured by chart review and family interview.
Rosenthal et al 2005: Study: RCT pre/post. Outcomes: measured by family interview. In this study, process outcomes concerned delivery of anticipatory guidance: the psychological preparation of a person to help relieve the fear and anxiety of an event expected to be stressful, e.g. preparing a child for surgery by explaining what will happen and what it will feel like, or preparing parents for the normal growth and development of their child.

13 Outcome categorisation (EPOC)
Example from a CQI review:
Healthcare professional performance (binary, continuous), e.g. adherence to recommended practice
Patient outcomes (binary, continuous), e.g. pain, quality of life, function, mortality; patient experience of care, patient evaluation of care co-ordination; length of stay
Other outcomes, e.g. resource use
In Cochrane, outcome categorisation is also used. This slide provides an example from an Effective Practice and Organisation of Care (EPOC) review assessing the effects of continuous quality improvement on professional practice and healthcare outcomes, and is representative of the general EPOC approach to outcome categorisation. Outcome categorisation is a helpful way to start simplifying some of the complexity around outcomes, and is a necessary step for synthesizing the effects: grouping like with like is a way of dealing with outcome heterogeneity. It forms the focus of the quantitative synthesis, helps with the conduct of the systematic review and the presentation of the data, and is helpful for the reader and decision maker. [Brennan Cochrane Database Syst Rev 2009]

14 Outcome categorisation (CCRG)
Table 4.2: Outcomes of importance to consumers, communication and participation: a new taxonomy
(a) Consumer: knowledge and understanding; communication; involvement in care; evaluation of care; support; skills acquisition; health status and well-being; health behaviour; treatment outcomes
(b) Healthcare provider: consultation processes
(c) Health service delivery: service delivery level; related to research; societal or governmental
This outcome categorisation was developed in the Consumers and Communication Review Group (CCRG). The consumer group is the most detailed because most of the interventions captured by the reviews coordinated by the group principally intend to change an aspect of a consumer’s health; it spans nine broad outcome categories, from knowledge through to treatment outcomes. The healthcare provider group covers the main categories of outcomes for health professionals. The health service delivery group includes categories of outcomes at the service delivery level, those related to research, and those at a societal or governmental level. [Hill Wiley-Blackwell 2011]

15 Synthesis: vote counting (pros and cons)
“Is there any evidence of an effect?” The number of studies showing harm is compared with the number showing benefit (regardless of statistical significance or size of the results); a sign test can be used to assess the statistical significance of evidence of an effect in either direction.
Pros: provides a method for synthesizing effects when standard meta-analytic methods are difficult to apply (e.g. variances of effect estimates not available).
Cons: provides no information on the magnitude of effects (e.g. equal importance is given to risk differences of 5% and 50%); no account is taken of differential weighting across the studies; problems arise when statistical significance is used to define the number of positive and negative studies (unit of analysis errors, underpowered studies).
Notes: this is a significance-testing approach. Variances of effect estimates may be unavailable when, for example, dispersion statistics are not reported or there are unit of analysis errors. Some have also used this approach, without outcome categorisation, when there is no consistency in outcome measures across the studies. Ioannidis et al (Reasons or excuses for avoiding meta-analysis in forest plots. BMJ 2008;336(7658):1413-5) provide a good example of the problem with vote counting based on statistical significance: “… if an intervention is effective but two studies are done with 40% power each, the chance of both of them getting a significant result is only 16%.”
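The mechanics of vote counting with a sign test can be sketched in a few lines of Python. The study directions below are hypothetical, and the sign test is an exact two-sided binomial test against p = 0.5 (under the null, each study is equally likely to favour benefit or harm):

```python
from math import comb

def sign_test_p(n_benefit, n_harm):
    """Two-sided exact sign test: under H0 (no effect), each study is
    equally likely to favour benefit or harm, i.e. Binomial(n, 0.5)."""
    n = n_benefit + n_harm
    k = min(n_benefit, n_harm)
    # P(X <= k) under Binomial(n, 0.5), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical review: direction of effect in 12 studies (+1 benefit, -1 harm),
# ignoring effect size and precision -- the core weakness of vote counting.
directions = [+1, +1, +1, -1, +1, +1, -1, +1, +1, +1, -1, +1]
n_benefit = sum(d > 0 for d in directions)
n_harm = sum(d < 0 for d in directions)
p = sign_test_p(n_benefit, n_harm)
print(n_benefit, n_harm, round(p, 3))  # -> 9 3 0.146
```

The Ioannidis quote above also follows directly from the same logic: two independent studies with 40% power each have only a 0.4 × 0.4 = 0.16 chance of both reaching significance, so counting significant results penalises underpowered literatures.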

17 Effectiveness of teaching quality improvement to clinicians
The review included 39 studies. In this example, vote counting is based on direction of effect and statistical significance. Data synthesis: outcomes were classified as beneficial effects, no effects, or detrimental effects based on whether differences from baseline (in uncontrolled studies) or from controls were statistically significant at the 0.05 level. If the authors did not calculate statistical significance, an effect was counted as beneficial or detrimental if the absolute change or difference in an outcome measure was greater than 10%. A two-tailed chi-squared test was used to assess differences in proportions. [Boonyasai JAMA 2007]
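The classification rule just described can be sketched as a small decision function. The function name and the example values below are hypothetical; positive effects are taken to favour the intervention:

```python
def classify(effect, p_value=None, threshold=0.10):
    """Classify one outcome as 'beneficial', 'detrimental', or 'no effect',
    following the style of rule described above (inputs are hypothetical).
    `effect` is the change from baseline or difference from control,
    with positive values favouring the intervention."""
    if p_value is not None:
        if p_value <= 0.05:
            return "beneficial" if effect > 0 else "detrimental"
        return "no effect"
    # No significance test reported: fall back on a >10% absolute change.
    if abs(effect) > threshold:
        return "beneficial" if effect > 0 else "detrimental"
    return "no effect"

print(classify(0.15, 0.03))   # significant improvement -> beneficial
print(classify(0.15, None))   # no p-value, but >10% absolute change -> beneficial
print(classify(0.08, None))   # no p-value, change below the cut-off -> no effect
```

Note how the rule mixes two different criteria (a significance test and an arbitrary 10% cut-off), which is part of why vote counting of this kind is hard to interpret.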

18 Harvest plots [Crowther Res Syn Meth 2011]
A visual plot of vote counting results (a hypothesis-testing approach) used to display the “distribution of evidence”. A harvest plot groups studies based on whether they demonstrate a positive, negative, or no effect. The plot can be visually weighted and annotated to highlight study characteristics, e.g. risk of bias domains (such as allocation concealment), proximal vs distal outcomes, or study design. The systematic review shown investigated the effect of nutritional interventions in bone marrow transplantation (Crowther, 2009, Bone Marrow Transplant). In this example, taller bars represent greater study quality, and the categories “Decreased infections”, “No change in infections”, and “Increased infections” are based on statistical significance and direction of effect (i.e. significantly decreased, no statistically significant effect, significantly increased).
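The grouping behind a harvest plot can be sketched without any plotting library. The studies, categories, and quality scores below are hypothetical; bar height (number of `#` characters) stands in for study quality:

```python
# Hypothetical studies: (name, vote-count category, quality score 1-3).
studies = [
    ("A", "decreased", 3), ("B", "decreased", 1), ("C", "no change", 2),
    ("D", "no change", 3), ("E", "increased", 1), ("F", "decreased", 2),
]

categories = ["decreased", "no change", "increased"]
groups = {c: [(n, q) for n, c2, q in studies if c2 == c] for c in categories}

# A text rendering of a harvest plot: one row of bars per category,
# bar height proportional to study quality (taller = higher quality).
for cat in categories:
    bars = " ".join(f"{name}:{'#' * q}" for name, q in groups[cat])
    print(f"{cat:>10} | {bars}")
```

In a real harvest plot the bars are also annotated (e.g. shading for study design, a suffix for risk of bias), so the display carries more information than a bare vote count.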

19 Synthesis: summary of effect estimates (pros and cons)
“What is the range and distribution of effects?” The ‘median-of-medians’ approach (EPOC): one outcome is chosen per outcome category (with a selection process independent of the result and its statistical significance), and the effect size associated with this outcome is used to ‘characterise’ the outcome of the study.
Pros: provides a method for synthesizing results when it is difficult to undertake a meta-analysis (e.g. missing variances of effects, unit of analysis errors); provides information on the magnitude and range of effects (IQR, range).
Cons: does not weight effects, so small studies are as influential as large studies; does not use all available data for a particular outcome category.
Notes: the outcome could be chosen through a set of pre-specified options, e.g. the primary outcome, the outcome used in the sample size calculation (if no primary outcome is specified), or the outcome with the median effect estimate. Alternatively, a panel could examine each study’s set of outcomes and rank their importance, independent of the results, using pre-specified criteria. Arguments proposed for using this approach include: i) heterogeneity: reviews of complex interventions exhibit greater heterogeneity of effect estimates due to differences in interventions, settings, conditions, outcomes, and study designs, so describing the range and distribution of effects is a preferable alternative (Grimshaw Qual Saf Health Care 2003); ii) practical issues: primary studies frequently have common methodological problems (e.g. unit of analysis errors) or do not report the information necessary for meta-analysis.
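A minimal sketch of the median-of-medians idea, using hypothetical effect estimates: each study is characterised by the median of its own effects within an outcome category, and the per-study values are then summarised across studies with the Python statistics module:

```python
from statistics import median, quantiles

# Hypothetical data: each study reports several effect estimates (e.g. risk
# differences, in percentage points) within one outcome category.
study_effects = {
    "study 1": [2.0, 4.5, 5.0, 7.5, 12.0],
    "study 2": [-1.0, 0.5, 3.0],
    "study 3": [6.0, 8.0, 9.5, 10.0],
    "study 4": [1.5],
}

# Step 1: characterise each study by the median of its own effects
# (selection is independent of the results' size or significance).
per_study = {s: median(e) for s, e in study_effects.items()}

# Step 2: summarise across studies with the median, IQR, and range.
effects = sorted(per_study.values())
q1, q2, q3 = quantiles(effects, n=4)
print(per_study)
print(f"median {q2}, IQR {q1} to {q3}, range {effects[0]} to {effects[-1]}")
```

Note that every study contributes equally in step 2, regardless of size or precision, which is exactly the unweighted-effects limitation listed above.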

21 Printed educational materials: effects on professional practice and health care outcomes
An example from an EPOC review investigating the effects of printed educational materials on professional practice. Five process outcomes were reported in this study; these were ranked, and the median effect (a difference between groups of 0.5%) was used to ‘characterise’ the effects in this study. [Farmer Cochrane Database Syst Rev 2008]

22 Audit and feedback: effects on professional practice and health care outcomes
Effects across studies can then be displayed graphically. Commonly, box and whisker plots are employed, displaying medians, IQRs, ranges, and outlying effects. In this example, the distribution of effects is compared across different categorisations of the interventions (an investigation of heterogeneity). [Jamtvedt Cochrane Database Syst Rev 2006]

23 Synthesis: meta-analysis (pros and cons)
“What is the average intervention effect?” (random-effects meta-analysis). Prediction intervals can be calculated to complement the information from a random-effects meta-analysis: “What is the potential effect of the intervention in an individual study?”
Pros: provides a combined estimate of the average intervention effect (random effects) and the certainty in this estimate (95% CI); weights estimates of effect, so small studies are (generally) less influential than large studies; prediction intervals can be calculated, which is helpful when there is unexplained heterogeneity; forest plots display study effect estimates and CIs, can display the pooled effect, and are familiar.
Cons: requires variances of the effects; it has been argued that a meta-analytic estimate (an average effect) may be of little value when there is heterogeneity, particularly if there is inconsistency in the direction of effect.
Notes: as for the ‘median-of-medians’ approach, one outcome could be chosen per outcome category. Variances of the effects may be less well reported for non-randomised study designs; in addition, correlation within studies has often not been taken into account, so standard errors are too small. Arguments against meta-analysing are that there is likely to be too much heterogeneity (interventions, conditions, settings, outcomes, designs) and that an average effect might be misleading and of limited value to decision makers (e.g. Grimshaw Qual Saf Health Care 2003); an average effect may ‘smooth’ (or mask) important differences. A point to note: when there is a large amount of heterogeneity and the distribution of weights becomes more similar (effectively the arithmetic mean), the ‘median-of-medians’ approach may provide an estimate similar to the meta-analytic estimate; it might be interesting to investigate this in a methodological study.
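A random-effects meta-analysis with a prediction interval can be sketched with the DerSimonian-Laird estimator. The effects and variances below are hypothetical, and a normal quantile is used throughout for simplicity (a t quantile with k - 2 degrees of freedom is often recommended for the prediction interval):

```python
from math import sqrt
from statistics import NormalDist

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis (DerSimonian-Laird) with an approximate
    95% prediction interval. Illustrative sketch; data are hypothetical."""
    w = [1 / v for v in variances]                     # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                 # between-study variance
    w_star = [1 / (v + tau2) for v in variances]       # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1 / sum(w_star))
    z = NormalDist().inv_cdf(0.975)
    ci = (mu - z * se, mu + z * se)                    # CI for the average effect
    # Prediction interval for the effect in a new individual study: it adds
    # the between-study variance, so it is wider than the CI.
    pse = sqrt(tau2 + se ** 2)
    pi = (mu - z * pse, mu + z * pse)
    return mu, ci, pi

effects = [0.10, 0.30, 0.25, -0.05, 0.40]   # hypothetical risk differences
variances = [0.01, 0.02, 0.015, 0.01, 0.03]
mu, ci, pi = dersimonian_laird(effects, variances)
print(round(mu, 3), [round(x, 3) for x in ci], [round(x, 3) for x in pi])
```

The prediction interval is always at least as wide as the confidence interval, which is what makes it useful as an honest statement of what might happen in a single new setting when heterogeneity is present.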

25 Exploring heterogeneity
Sub-group analysis, meta-regression, and other statistical approaches: “What factors modify the size of the intervention effect?” These can be used to investigate which components (‘active ingredients’) of multifaceted interventions may modify effects.
Pros: provides hypotheses regarding what (set of) factors might be necessary for the intervention to be effective.
Cons: an observational analysis, which may suffer from confounding bias and aggregation bias (the ecological fallacy); overfitting and spurious claims of association; investigating intervention components when there are many is difficult (e.g. assumptions of additivity, correlation between combinations of components); requires variances of effects and measurement of the factors; technical issues arise with baseline compliance.
Notes: an observational analysis warrants weaker interpretation compared with effects observed in randomised trials. Often a small number of studies combined with multiple factors leads to overfitting and spurious claims of association. In practice, meta-regression investigating which components modify the effects of an intervention is difficult when there are many components: interaction terms need to be fitted (additive effects cannot be assumed), there are too few studies, and combinations of components are highly correlated (e.g. some components always occur together). Practical limitations also arise when there is no measure of the variance of the effect, or no data on the factor. Reviewers are often interested in examining the relationship between effect sizes based on an adherence outcome and a baseline measure of adherence; this suffers from technical problems (regression to the mean).
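For a single study-level covariate, a fixed-effect meta-regression reduces to inverse-variance weighted least squares. The sketch below uses hypothetical studies in which the effect shrinks as baseline compliance rises, exactly the kind of association that regression to the mean can also produce:

```python
from math import sqrt

def meta_regression(effects, variances, covariate):
    """Weighted least squares of effect size on a single study-level
    covariate, weighting by inverse variance (a simple fixed-effect
    meta-regression; illustrative only, the data below are hypothetical)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, covariate)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    sxx = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, covariate))
    sxy = sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, covariate, effects))
    slope = sxy / sxx
    se_slope = sqrt(1 / sxx)   # standard error under the fixed-effect model
    return slope, se_slope

# Does baseline compliance (%) modify the effect (risk difference, in
# percentage points)? Hypothetical values for five studies.
effects = [12.0, 9.0, 6.0, 4.0, 2.0]
variances = [4.0, 3.0, 5.0, 2.0, 4.0]
baseline = [20.0, 35.0, 50.0, 65.0, 80.0]
slope, se = meta_regression(effects, variances, baseline)
print(round(slope, 3), round(se, 3))
```

The negative slope here "confirms" the theory that effects are smaller when baseline compliance is high, but with five studies and an observational comparison this is a hypothesis-generating result at best, for the reasons listed in the cons above.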

27 Audit and feedback: effects on professional practice and health care outcomes
An example bubble plot investigating the association between baseline compliance (“Baseline compliance with the targeted behaviours for dichotomous outcomes was treated as a continuous variable ranging from zero to 100%, based on the mean value of pre-intervention level of compliance in the audit and feedback group and control group.”) and the adjusted risk ratio of compliance with a desired practice. The bubbles indicate the precision of each effect estimate (here, the number of healthcare professionals who participated). Baseline compliance is a frequently investigated factor; the theory is that the effect of the intervention may be smaller when compliance at baseline is already high. This analysis suffers from technical issues (regression to the mean), so an association may be observed even when there is truly none. [Jamtvedt Cochrane Database Syst Rev 2006]

28 Re-analysis of the audit and feedback review
Control theory is utilised in this example: it suggests that behaviour change is most likely if feedback is accompanied by comparison with a behavioural target and by action plans. The authors used theory (behaviour change) to categorise intervention components (feedback, performance target, action plan) and investigated whether these components modified the effects. Interventions were coded for the three techniques, and results in the table are presented as log odds ratios. Interpretation: for example, if the intervention included additional behavioural change techniques (beyond feedback, performance target, and action plan), the odds of the effect were 91% higher (95% CI: 31% to 177%). [Gardner Soc Sci Med 2010]
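The interpretation quoted above is just a log odds ratio mapped to a percentage change in odds, pct = (exp(b) - 1) × 100, applied to the coefficient and both CI limits. The coefficient and standard error below are back-calculated to reproduce the quoted figures, not taken from the paper:

```python
from math import exp

def log_or_to_pct(b, se, z=1.96):
    """Convert a log odds ratio and its 95% CI to percentage change in odds.
    The inputs used below are hypothetical (back-calculated from the quoted
    '91% higher, 95% CI 31% to 177%' interpretation)."""
    lo, hi = b - z * se, b + z * se
    return tuple(round((exp(x) - 1) * 100) for x in (b, lo, hi))

print(log_or_to_pct(0.645, 0.191))  # -> (91, 31, 177)
```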

29 Conclusions
Diversity of interventions, settings, conditions, outcomes, and study designs complicates the synthesis of evidence. A range of ‘synthesis’ approaches is available; some are clearly better than others. Limitations in quantitative synthesis should be acknowledged, but it may be preferable to “qualitative interpretation of results, or hidden quasi-quantitative analysis …” [Ioannidis BMJ 2008]. Before deciding not to synthesize data, review authors should consider what readers and decision makers might do instead (e.g. select favourable effects, or count up the number of favourable or statistically significant results).
Ioannidis presents an example of quasi-quantitative analysis where “… the reviewers of interventions to promote physical activity in children and adolescents ‘used scores to indicate effectiveness—that is, whether there was no difference in effect between control and intervention group (0 score), a positive or negative trend (+ or −), or a significant difference (P<0.05) in favour of the intervention or control group (++ or −−, respectively) … If at least two thirds (66.6%) of the relevant studies were reported to have significant results in the same direction then we considered the overall results to be consistent.’” [Van Sluijs et al. Effectiveness of interventions to promote physical activity in children and adolescents: systematic review of controlled trials. BMJ 2007;335:703]

33 Questions raised
Will New Zealand win the 2011 Rugby World Cup?
Are arguments for not undertaking a meta-analysis based on too much clinical and methodological heterogeneity consistent with the ‘median-of-medians’ approach? Arguing against meta-analysis because of too much heterogeneity (interventions, conditions, settings, outcomes, designs), and then calculating and reporting a median effect (on which conclusions may be based), seems contradictory: the focus on a median effect is still combining apples and oranges.
Are there other statistical approaches that may make better use of the available data? E.g. meta-regression methods that adjust for correlated effects within studies (Hedges Res Syn Meth 2010).
The measures of effect used in complex reviews typically adjust for baseline imbalance (e.g. adjusted RR, adjusted RD, adjusted OR). Do these estimators achieve the desired effect? For example, RD_adj = (r_int,1 − r_ctrl,1) − (r_int,0 − r_ctrl,0), where subscript 1 denotes post-intervention and subscript 0 denotes baseline. The rationale is that baseline imbalance is more likely in non-randomised studies, and these measures will adjust for it; it is not clear that they do so appropriately.
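The adjusted risk difference discussed above is a difference-in-differences: the post-intervention risk difference between arms minus the baseline risk difference. A minimal sketch with hypothetical compliance proportions:

```python
def adjusted_rd(r_int_1, r_ctrl_1, r_int_0, r_ctrl_0):
    """Difference-in-differences risk difference:
    RD_adj = (r_int,1 - r_ctrl,1) - (r_int,0 - r_ctrl,0),
    where 1 denotes post-intervention and 0 denotes baseline."""
    return (r_int_1 - r_ctrl_1) - (r_int_0 - r_ctrl_0)

# Hypothetical compliance proportions: the intervention arm started 5
# percentage points behind the control arm, so the crude post-intervention
# RD of 0.05 becomes an adjusted RD of 0.10.
print(round(adjusted_rd(r_int_1=0.70, r_ctrl_1=0.65,
                        r_int_0=0.50, r_ctrl_0=0.55), 2))  # -> 0.1
```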

34 References
Boonyasai et al. Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA 2007;298(9).
Brennan et al. Continuous quality improvement: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2009.
Crowther M, Avenell A, MacLennan G, Mowatt G. A further use for the Harvest plot: a novel method for the presentation of data synthesis. Res Syn Meth 2011.
Farmer et al. Printed educational materials: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2008(3).
Gardner B et al. Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med 2010;70(10).
Hedges et al. Robust variance estimation in meta-regression with dependent effect size estimates. Res Syn Meth 2010;1(1):39-65.
Hill et al. Identifying outcomes of importance to consumers, communication and participation. In Hill S (ed). The Knowledgeable Patient: Communication and Participation in Health. Wiley-Blackwell 2011.
Ioannidis JP et al. Reasons or excuses for avoiding meta-analysis in forest plots. BMJ 2008;336(7658):1413-5.
Jamtvedt et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006(2).

