Presentation transcript:

The New York Academy of Medicine Teaching Evidence Assimilation for Collaborative Healthcare, New York, August 8, 2012. Yngve Falck-Ytter, MD, AGAF, for the GRADE team. Associate Professor, Case Western Reserve University, Case & VA Medical Center; Chief of Gastroenterology, VA Medical Center, Cleveland

It’s evident – or is it?

Question to the audience. Decisions in your medical practice are based on:
A. Training, experience and knowledge of respected colleagues
B. Patient preferences
C. Convincing evidence (non-experimental) from case reports, case series, disease mechanism
D. RCTs, systematic reviews of RCTs and meta-analyses
E. All of the above

Evidence-based clinical decisions [figure, after Haynes et al.]: research evidence, patient values and preferences, and clinical circumstances, integrated through clinical expertise.

Are guidelines evidence-based? • 1,275 recommendations evaluated from the NGC • Recommendations not reliably identifiable in 32% • Not executable as written (a common problem: statement of fact only) • Variability in reporting recommendation strength: absent 53%, inaccurate 7% • Why is it so hard? Hussain T, Michel G, Shiffman R. Int J Med Inform 2009

Before GRADE: level of evidence I (SR, RCTs), II (cohort studies), III (case-control studies), IV (case series), V (expert opinion); grades of recommendation A, B, C, D.

Before GRADE: level of evidence Ia (meta-analysis), Ib (RCTs), II (cohort studies), III (case-control studies), IV (case series), V (expert opinion); grades of recommendation A, B, C, D.

Is there any guidance here? P: In patients with acute hepatitis C … I: Should anti-viral treatment be used … C: Compared to no treatment … O: To achieve viral clearance?
Organization: evidence rating; recommendation
AASLD (2009): B; Class I
VA (2006): II-1; -/-
SIGN (2006): 1+; A
AGA (2006): -/-; "Most authorities…"
UK (2008): IIb; B (firm evidence)

Question to the audience. By now…
A. …you are thoroughly confused
B. …you start treatment because treatment is recommended
C. …you don't start treatment because guidelines don't recommend it
D. …you look at the evidence yourself because past experience tells you that guidelines don't help

Until just recently… (grading schemes of four societies)
AASLD: A = multiple RCTs or meta-analysis; B = single randomized trial, or non-randomized studies; C = only consensus opinion of experts, case studies, or standard of care.
AGA: Good = consistent, well-designed, well-conducted studies […]; Fair = limited by the number, quality or consistency of individual studies […]; Poor = …important flaws, gaps in chain of evidence…
ACG: 1 = multiple published, well-controlled (?) randomized trials or a well-designed systemic (?) meta-analysis; 2 = one quality-published (?) RCT, or published well-designed cohort/case-control studies; 3 = consensus of authoritative (?) expert opinions based on clinical evidence or from well-designed, but uncontrolled or non-randomized clinical trials.
ASGE: A = RCTs; B = RCT with important limitations; C = observational studies; D = expert opinion.

Institute of Medicine • March 2011 report: "Clinical Practice Guidelines We Can Trust" • Establishing transparency • Management of conflict of interest • Guideline development group composition • Evidence based on systematic reviews • Method for rating strength of recommendations • Articulation of recommendations • External review • Updating

Grading of Recommendations Assessment, Development and Evaluation (GRADE)

60+ Organizations

Where GRADE fits in: (1) prioritize problems and establish the panel; (2) find, appraise or prepare a systematic review (searches, selection of studies, data collection and analysis); (3) (re-)assess the relative importance of outcomes; (4) prepare an evidence profile: quality of evidence for each outcome and summary of findings; (5) guidelines: assess the overall quality of evidence, decide the direction and strength of the recommendation, and draft the guideline; (6) consult with stakeholders and/or external peer reviewers; (7) disseminate the guideline; (8) implement the guideline and evaluate. GRADE applies to the assessment and recommendation steps in the middle of this pathway.

GRADE is outcome-centric [figure: an old-style system assigns one quality level (I–IV) and one grade (B) to the question as a whole; GRADE rates quality separately for Outcome #1, Outcome #2 and Outcome #3].

Importance of outcomes. Question (PICO): Should health care workers receive booster vaccination vs. not? Intermediate outcomes: positive hepatitis B core antibody; amnestic response to re-challenge; loss of protective surface antibody. Final health outcomes: mortality; liver cancer; liver cirrhosis; chronic hepatitis B infection; acute symptomatic infection.

GRADE expands the determinants of quality of evidence: methodological limitations (risk of bias: allocation concealment, failure of blinding, losses to follow-up, incomplete reporting), inconsistency of results, indirectness of evidence, imprecision of results, and publication bias.

GRADE: Quality of evidence. For guidelines: the extent to which our confidence in an estimate of the treatment effect is adequate to support a particular recommendation. Although quality of evidence is a continuum, we suggest using 4 categories: • High • Moderate • Low • Very low

Determinants of quality • RCTs start high • Observational studies start low

Quality of evidence: beyond risk of bias. Definition: the extent to which our confidence in an estimate of the treatment effect is adequate to support a particular recommendation. Determinants: methodological limitations (risk of bias: allocation concealment, blinding, intention-to-treat, follow-up, stopped early); inconsistency of results; indirectness of evidence (sources of indirectness: indirect comparisons, patients, interventions, comparators, outcomes); imprecision of results; publication bias.

All phase II and III licensing trials for antidepressant drugs between 1987 and 2004 (74 trials; 23 were not published).

Quality assessment criteria. Study design: RCTs (start high); observational studies (start low). Quality of evidence: high, moderate, low, very low. Lower if…: study limitations (design and execution), inconsistency, indirectness, imprecision, publication bias. Higher if…: what can raise the quality of evidence?

BMJ 2003;327:1459–61


Question to the audience. You review all colonoscopies performed for average-risk screening in your health system and document the percentage of patients who developed a perforation after the procedure (evidence of free air on imaging). No comparison group without colonoscopy is available. Rate the quality of evidence for the outcome perforation:
A. High
B. Moderate
C. Low
D. Very low

Question to the audience. A systematic review of observational studies showed a relationship between front sleeping position (versus back position) and sudden infant death syndrome (SIDS): OR 2.93 (1.15, 7.47). Rate the quality of evidence for the outcome SIDS:
A. High
B. Moderate
C. Low
D. Very low

Quality assessment criteria. Study design: RCTs (start high); observational studies (start low). Quality of evidence: high, moderate, low, very low. Lower if…: study limitations (design and execution), inconsistency, indirectness, imprecision, publication bias. Higher if…: large effect (e.g., RR 0.5); very large effect (e.g., RR 0.2); evidence of a dose-response gradient; all plausible confounding would reduce a demonstrated effect, or would suggest a spurious effect when results show no effect.
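The up-and-down moves summarized on this slide can be written out as simple bookkeeping. The sketch below is illustrative only, assuming one level of movement per serious concern or upgrading factor; it is not an official GRADE tool, and GRADE itself relies on judgment rather than scoring (the function and variable names are invented for this example).

```python
# Illustrative only: bookkeeping for GRADE-style quality ratings of a single
# outcome. Real GRADE ratings are judgments, not mechanical scores.

LEVELS = ["Very low", "Low", "Moderate", "High"]

def rate_outcome(study_design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Return a quality-of-evidence category for one outcome.

    study_design: "RCT" (starts High) or "observational" (starts Low).
    downgrades:   levels subtracted for study limitations, inconsistency,
                  indirectness, imprecision, or publication bias.
    upgrades:     levels added for a (very) large effect, a dose-response
                  gradient, or plausible confounding that strengthens the result.
    """
    start = 3 if study_design == "RCT" else 1      # index into LEVELS
    final = min(max(start - downgrades + upgrades, 0), 3)
    return LEVELS[final]

# RCTs with one serious concern (e.g., imprecision) -> Moderate
print(rate_outcome("RCT", downgrades=1))
# Observational studies with a very large effect (e.g., RR ~0.2) -> High
print(rate_outcome("observational", upgrades=2))
```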

Conceptualizing quality.
High: We are very confident that the true effect lies close to that of the estimate of the effect.
Moderate: We are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different.
Low: Our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect.
Very low: We have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of the effect.

GRADE Evidence Profile [table columns: design; limitations; inconsistency; indirectness; imprecision; publication bias; relative and absolute risk; importance; overall quality].
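As a rough illustration of what one row of such a profile carries, here is a minimal sketch in Python. The field names and example values are invented for illustration and do not reproduce the GRADEpro schema.

```python
# Illustrative sketch of one evidence-profile row; field names and example
# values are invented, not the GRADEpro schema.
from dataclasses import dataclass

@dataclass
class EvidenceProfileRow:
    outcome: str
    design: str              # e.g. "randomized trials", "observational studies"
    limitations: str         # risk-of-bias judgment
    inconsistency: str
    indirectness: str
    imprecision: str
    publication_bias: str
    relative_effect: str     # relative effect with confidence interval
    absolute_effect: str     # e.g. events per 1000 patients
    importance: str          # "critical", "important", "less important"
    overall_quality: str     # "High", "Moderate", "Low", "Very low"

row = EvidenceProfileRow(
    outcome="viral clearance (hypothetical)", design="randomized trials",
    limitations="no serious limitations", inconsistency="no serious inconsistency",
    indirectness="no serious indirectness", imprecision="serious imprecision",
    publication_bias="undetected", relative_effect="(example value)",
    absolute_effect="(example value)", importance="critical",
    overall_quality="Moderate",
)
print(row.outcome, row.overall_quality)
```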

Clinical question (PICO): select outcomes and rate their importance (critical, important, less important). Rate the quality of evidence for each outcome across studies, grading down or up: very low, low, moderate, high. Determine the overall quality of evidence. Formulate recommendations: for or against (direction), strong or weak (strength), by considering • quality of evidence • balance of benefits and harms • values and preferences; revise if necessary by considering • resource use (cost).

From evidence to recommendations [figure: in the old system, an RCT led to a high-level recommendation and an observational study to a lower-level recommendation; in GRADE, recommendations rest on the quality of evidence, the balance between benefits, harms and burdens, and patients' values and preferences].

Strength of recommendation. Although the strength of recommendation is a continuum, we suggest using two categories: "strong" and "weak". "The strength of a recommendation reflects the extent to which we can, across the range of patients for whom the recommendations are intended, be confident that desirable effects of a management strategy outweigh undesirable effects."

4 determinants of the strength of recommendation (factors that can weaken the strength of a recommendation, with explanations):
• Lower quality evidence: the higher the quality of evidence, the more likely is a strong recommendation.
• Uncertainty about the balance of benefits versus harms and burdens: the larger the difference between the desirable and undesirable consequences, the more likely a strong recommendation is warranted; the smaller the net benefit and the lower the certainty for that benefit, the more likely a weak recommendation is warranted.
• Uncertainty or differences in patients' values: the greater the variability or uncertainty in values and preferences, the more likely a weak recommendation is warranted.
• Uncertainty about whether the net benefits are worth the costs: the higher the costs of an intervention (that is, the more resources consumed), the less likely a strong recommendation is warranted.

Developing recommendations

Implications of a strong recommendation • Population: Most people in this situation would want the recommended course of action and only a small proportion would not • Health care workers: Most people should receive the recommended course of action • Policy makers: The recommendation can be adapted as a policy in most situations

Implications of a conditional recommendation • Population: The majority of people in this situation would want the recommended course of action, but many would not • Health care workers: Be prepared to help people make a decision that is consistent with their own values (decision aids and shared decision making) • Policy makers: There is a need for substantial debate and involvement of stakeholders

Summary: from systematic review to guideline development. Systematic review: formulate the question (PICO) and select outcomes; rate the importance of each outcome (critical, important, less important); create an evidence profile with GRADEpro, giving a summary of findings and an estimate of effect for each outcome; rate the quality of evidence for each outcome across studies (RCTs start high, observational data start low; grade down for 1. risk of bias, 2. inconsistency, 3. indirectness, 4. imprecision, 5. publication bias; grade up for 1. large effect, 2. dose response, 3. confounders); rate the overall quality of evidence across outcomes (very low, low, moderate, high) based on the lowest quality of the critical outcomes. Guideline development: formulate recommendations, for or against (direction) and strong or weak (strength), by considering • quality of evidence • balance of benefits and harms • values and preferences; revise if necessary by considering • resource use (cost). Wording: "We recommend using…", "We suggest using…", "We recommend against using…", "We suggest against using…"
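Two of the mechanical pieces of this summary, taking the lowest quality among the critical outcomes and turning direction plus strength into the standard wording, can be sketched in a few lines. This is an illustrative sketch assuming the rule stated on the slide; the function names are invented.

```python
# Illustrative sketch: overall quality = lowest quality among critical outcomes;
# recommendation wording follows direction (for/against) and strength (strong/weak).

ORDER = {"Very low": 0, "Low": 1, "Moderate": 2, "High": 3}

def overall_quality(outcomes):
    """outcomes: list of (importance, quality) pairs, e.g. ("critical", "Moderate")."""
    critical = [quality for importance, quality in outcomes if importance == "critical"]
    return min(critical, key=ORDER.get) if critical else None

def recommendation_wording(direction: str, strength: str) -> str:
    verb = "recommend" if strength == "strong" else "suggest"
    target = "using" if direction == "for" else "against using"
    return f"We {verb} {target}…"

print(overall_quality([("critical", "High"), ("critical", "Moderate"),
                       ("important", "Low")]))      # Moderate
print(recommendation_wording("for", "weak"))        # We suggest using…
```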

GRADE’s limitations • Evidence rating for alternative management strategies, not risk or prognosis per se • Does not eliminate disagreements in interpreting the evidence – judgments on thresholds continue to be necessary • Requires some training in methodology to be applied optimally

What GRADE isn’t • Not another “risk of bias” tool • Not a quantitative system (no scoring required) • Does not eliminate conflicts of interest, but can help minimize their impact • Not “expensive”: it builds on well-established principles of EBM, some degree of training is needed for any system, and it proportionally adds a minimal amount of extra time to a systematic review

Evidence review stage [decision flowchart]: What format of evidence do you use? Mainly systematic reviews (SRs) vs. mainly single-study data. With resources: do the SR in-house or out-source it (update an existing SR or run ad hoc reviews) and utilize the full GRADE framework (± evidence profiles) [$$$]. Without resources: search for an SR; if one is ready to use, apply the full framework; if not, use GRADE without evidence profiles [$].

Conclusion: Using an internationally accepted and standardized rating system for evidence and recommendations (such as GRADE) adds value: 1. Criteria for evidence assessment across a range of questions, settings and outcomes. 2. Sensible, transparent and systematic. 3. A balance between simplicity and methodological rigor.
