
1 EBM Conference (Day 2)

2 Funding Bias "He Who Pays Calls the Tune" Some Facts (& Myths) –Is industry research more likely to be published? No –Is industry research of comparatively poor quality? No (?) –Does funding influence outcomes & recommendations? Yes

3 Funding Influence Industry funding gives an OR of 4-5.3 that: –study outcomes favor the drug being studied –the drug will be recommended as treatment of choice Resources –Lexchin et al., BMJ 2003; 326: 1167-70 –Als-Nielsen et al., JAMA 2003; 290(7): 921-8 –Melander et al., BMJ 2003; 326: 1171-73
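The odds ratio quoted above compares the odds of a drug-favorable conclusion in industry-funded studies with the odds in studies with other funding. A minimal sketch of the arithmetic, using hypothetical counts (not data from the cited papers):

```python
# Hypothetical 2x2 table of study conclusions (illustrative counts only):
#                   favors drug   does not favor
# industry-funded       a=40           b=10
# other-funded          c=20           d=20
a, b, c, d = 40, 10, 20, 20

odds_industry = a / b              # odds of a favorable conclusion if industry-funded
odds_other = c / d                 # odds of a favorable conclusion otherwise
odds_ratio = odds_industry / odds_other

print(odds_ratio)                  # 4.0: industry funding associated with 4x the odds
```

With these made-up counts the OR is 4.0, i.e. at the low end of the 4-5.3 range reported in the cited meta-analyses.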

4 How does it Happen? 1) "Pick your battles." Fund studies likely to be +ve. 2) "Pick your enemies." Comparators should be a poor drug, too low a dose, poorly absorbed, etc. 3) "Give them what they want." Methods look good on the surface, but problems with intention-to-treat (ITT) analysis, selective reporting & others. 4) "Let sleeping dogs lie." Don't publish trials with bad outcomes; and get good mileage out of good results.

5 Selective Publication and Reporting - Circles: trials done by industry (red = favorable outcome, blue = no better than placebo) - Green diamonds: publications from a single trial - Yellow boxes: publications pooling more than one trial - Three trials find their way into 15 publications (5 each)

6 Not Uncommon Rare cross-referencing Changing authors & definitions Publication rate from single trials: - if trial +ve = 90% - if trial –ve = 29%

7 Bottom Line Don’t Blame Industry Entirely –The authors of those papers are Doctors! Be Skeptical (Don’t “buy” in to advertising) Always “Cheque” Funding Source Then, Check Methods, Including ITT Then, Check that Recommendation Matches Outcome & Treatment Effect

8 Systematic Reviews

9 Objectives 1) Recognize the different types of literature syntheses 2) Apply the User Guide principles – Discuss major threats to validity – Understand heterogeneity & confidence intervals

10 Syntheses Articles Summarize results of many studies or present understanding of condition(s) Main Types –Reviews –Systematic Reviews –Meta-analyses

11 Systematic Review: Process Ask: a defined question (population, intervention/exposure, outcome, methodology) Acquire (relevance & quality) –Conduct literature search (with defined info sources, restrictions, review abstracts, etc) –Inclusion & exclusion (exclude by title/abstract, repeat for remaining full articles, assess agreement on remainder) Appraise (abstract data on participants, interventions/comparators, results, method quality; then assess agreement on validity assessment) Analyze (determine method of pooling, pool (?), decide on missing data, explore heterogeneity (sensitivity & subgroup analysis), explore publication bias)

12 Possible Conclusions of Systematic Reviews Determining –Evidence of Benefit –Evidence of Harm –Evidence of no effect –No evidence of effect

13 Getting through: Systematic reviews

14 Validity Summary Are the results Valid –Did the reviews explicitly address a sensible question? –Was the search for relevant studies detailed and exhaustive? –Were the included studies of high quality? –Were the assessments of study relevance and quality reproducible?

15 Did the reviews explicitly address a sensible question? Is the review too narrow or too broad? –Patients / Populations –Intervention –Comparison –Outcomes Is the underlying biology or sociology such that, across the range of interventions and outcomes included, the effect should be similar? E.g. too broad: impact of treatments for all cancers. E.g. too narrow: impact of 81 mg ASA on incidence of thrombotic stroke in males ages 50-70.

16 Did the reviews explicitly address a sensible question? A good question will allow you to check –pooled results, to see if the effect was similar across studies, and –across a range of patients, exposures and outcomes. If so, the findings can be broadly applied.

17 Was the search for relevant studies detailed and exhaustive? Search Strategies –Bibliographic databases (Medline, EMBASE, etc) –Trials Databases (Cochrane Central Register of Controlled Trials, etc) –Citation Tracking (Science citation index, etc) –Unpublished Studies (Key researchers, theses, industry trials, etc). Should describe Search Strategy with Keywords, sources, years, etc Was Publication Bias* Considered? * When only certain studies are published because of findings or statistical significance of their results
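One common way to probe for the publication bias defined above is Egger's regression test for funnel-plot asymmetry. The sketch below uses hypothetical effect sizes and standard errors (not data from any trial cited on these slides); the idea is that if small negative studies are missing from the literature, the intercept of the regression drifts away from zero.

```python
# Hypothetical per-study effects (e.g. log odds ratios) and standard errors.
# Illustrative numbers only, chosen to show the mechanics of the test.
effects = [0.50, 0.42, 0.65, 0.30, 0.80]
ses     = [0.10, 0.15, 0.25, 0.12, 0.40]

# Egger's test: regress the standardized effect (effect/SE) on precision (1/SE).
# An intercept far from zero suggests funnel-plot asymmetry, one signature of
# publication bias (small unfavorable trials missing from the literature).
y = [e / s for e, s in zip(effects, ses)]
x = [1 / s for s in ses]
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
sxx = sum((xi - x_bar) ** 2 for xi in x)
slope = sxy / sxx
intercept = y_bar - slope * x_bar

print(round(intercept, 2))   # ~1.14 for this toy data: asymmetric funnel
```

In practice the intercept is tested against its standard error; this sketch only shows the point estimate.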

18 Were the included studies of high quality? Key Questions –Clear relevance & methodological criteria –All included studies assessed by those criteria Should have Standard Checklists and/or Sentinel Criteria E.g. Sentinel Criteria –Therapy – Randomized & AC; –Dx – Representative Patients & Reasonable Gold Standard

19 Were the assessments of study relevance and quality reproducible? Was an explicit approach used to extract data from the primary studies? –Should have all significant details of research design, population, intervention, outcome, results and missing information presented. Was the selection carried out through a "double-blind" process? –Two or more reviewers (select & appraise), look for agreement beyond chance, separate selection from data abstraction.
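"Agreement beyond chance" between two reviewers is typically quantified with Cohen's kappa. A minimal sketch with hypothetical include/exclude decisions (illustrative counts, not from any real review):

```python
# Hypothetical 2x2 table of two reviewers' include/exclude decisions:
#                  reviewer B: include   exclude
# reviewer A: include     a=20             b=5
#             exclude     c=5              d=70
a, b, c, d = 20, 5, 5, 70
n = a + b + c + d

# Observed agreement: both include or both exclude
p_observed = (a + d) / n
# Agreement expected by chance, from each reviewer's marginal rates
p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
# Kappa: observed agreement corrected for chance agreement
kappa = (p_observed - p_chance) / (1 - p_chance)

print(round(kappa, 2))   # ~0.73: substantial agreement beyond chance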

20 Summary: What are the Results? Were the results similar from study to study? What are the overall results of the review? How precise were the results?

21 Were the results similar from study to study? How similar are the point estimates (best estimate of effect)? Do the CIs overlap? Is there an attempt to explain heterogeneity? –Variable patients, interventions, controls, outcomes, and methods?
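Study-to-study similarity is usually quantified with Cochran's Q and the I² statistic rather than by eyeballing CI overlap. A minimal sketch with hypothetical, deliberately spread-out effect estimates (not data from any cited review):

```python
# Hypothetical per-study effect estimates and standard errors
# (illustrative only; chosen to show visible heterogeneity).
effects = [0.10, 0.80, 0.45, 0.20, 0.95]
ses     = [0.10, 0.15, 0.25, 0.12, 0.40]

# Inverse-variance (fixed-effect) weights and pooled point estimate
w = [1 / s ** 2 for s in ses]
pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# Cochran's Q: weighted squared deviations of each study from the pooled effect
Q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
df = len(effects) - 1

# I-squared: percent of variability beyond what chance (df) would explain
I2 = max(0.0, (Q - df) / Q) * 100

print(round(I2, 1))   # ~78.7% for this toy data: substantial heterogeneity
```

A common rule of thumb reads I² around 25% as low, 50% as moderate, and 75% as high heterogeneity; values like this one argue for the sensitivity and subgroup analyses the slides mention.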

22 What are the overall results of the review? Effect size? Threat = "vote counting"; fix with weighting: –Were studies of different population sizes weighted differently in producing a summary effect size? –Were studies of different quality weighted differently in producing a summary effect size?

23 How precise were the results? Confidence Intervals on Average effect –Range of average effect sizes within which it is likely that the true effect lies (95% of the time) –Precision Drops with Variable point estimates Wide CI around point estimates Small number of studies or subjects per study
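The weighting and precision points above come together in the standard inverse-variance pooled estimate and its 95% CI. A minimal sketch with hypothetical study effects and standard errors (illustrative numbers, not from any cited trial):

```python
import math

# Hypothetical per-study effects (e.g. log risk ratios) and standard errors.
effects = [0.50, 0.42, 0.65, 0.30, 0.80]
ses     = [0.10, 0.15, 0.25, 0.12, 0.40]

# Inverse-variance weights: precise (usually larger) studies count more,
# which is what protects the summary from naive "vote counting".
w = [1 / s ** 2 for s in ses]
pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# The pooled SE shrinks as studies accumulate, so the 95% CI narrows.
se_pooled = math.sqrt(1 / sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print(round(pooled, 3), round(lo, 3), round(hi, 3))
```

Note how the wide, imprecise fifth study (SE 0.40) gets a weight of 6.25 versus 100 for the tightest one, so it barely moves the pooled estimate.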

24 E.g.: CI & Results (BMJ 2003; 326: 621)

25 Applicability: How can I apply the results? How can I interpret the results for my setting? Were all clinically important outcomes considered? Are the benefits worth the costs and potential risks?

26 How can I interpret the results for my setting? Does the interpretation provide a clear summary? Is the conclusion clearly justified by the data? –The authors should make sure the conclusions state the basis of the judgment, put the results in context, and identify areas for new research. –Concern = subgroup analyses

27 Were all clinically important outcomes considered? Threats: –Adverse effects tend to be ignored –Multiple outcomes tend to be ignored E.g. effect of HRT on heart disease, cancer, affect, etc

28 Are the benefits worth the costs and potential risks? Threats: –Lack of systematic methods for judging values

29 Summary A good systematic review is the best place to start when seeking evidence about the effects of health care. The User's Guide boils down to: –Did they find all important studies? –Did the synthesis weight for quality? –Is heterogeneity explained?

30 End

