The New York Academy of Medicine Teaching Evidence Assimilation for Collaborative Healthcare, New York, August 10, 2011. Yngve Falck-Ytter, MD, AGAF, for the GRADE team.


The New York Academy of Medicine
Teaching Evidence Assimilation for Collaborative Healthcare
New York, August 10, 2011
Yngve Falck-Ytter, MD, AGAF, for the GRADE team
Associate Professor, Case Western Reserve University, Case & VA Medical Center
Chief of Gastroenterology, VA Medical Center, Cleveland

It’s evident – or is it?

Question to the audience
Decisions in your medical practice are based on:
A. Training, experience, and knowledge of respected colleagues
B. Patient preferences
C. Convincing evidence (non-experimental) from case reports, case series, disease mechanism
D. RCTs, systematic reviews of RCTs, and meta-analyses
E. All of the above

Evidence-based clinical decisions (Haynes et al.): expertise integrates research evidence, patient values and preferences, and clinical circumstances.

Is presenting evidence enough?
End users of systematic reviews (e.g., health care providers) underutilize SRs because:
 SRs tend to be very long
 SRs are perceived as complex
 Difficulty understanding the effect size
 Difficulty understanding imprecision
 Difficulty assessing the confidence in the estimate of effect
 Difficulty translating relative effects into the absolute effects to be expected for their patients
Pagliaro et al. 2009
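That last point, translating a relative effect into an absolute effect, is simple arithmetic once a baseline risk is assumed; GRADE summary-of-findings tables present exactly this. A minimal sketch (the function name and the illustrative numbers are assumptions, not from the talk):

```python
def absolute_effect(baseline_per_1000, relative_risk):
    """Expected events per 1000 with the intervention, and the
    absolute difference, given an assumed baseline (control) risk."""
    with_intervention = baseline_per_1000 * relative_risk
    difference = with_intervention - baseline_per_1000  # negative = fewer events
    return round(with_intervention), round(difference)

# Illustration: a relative risk of 0.75 means little by itself --
# at a baseline of 200 per 1000 it is 50 fewer events per 1000,
# at a baseline of 20 per 1000 it is only 5 fewer.
print(absolute_effect(200, 0.75))  # (150, -50)
print(absolute_effect(20, 0.75))   # (15, -5)
```

The same relative effect thus translates into very different absolute benefits for high- and low-risk patients, which is why the tables report both.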

How do you do it right?

Reasons for grading evidence
 People draw conclusions about the quality of evidence and the strength of recommendations
 Systematic and explicit approaches can help to:
  protect against errors and resolve disagreements
  communicate information and fulfill needs
  be transparent about the process
  change practitioner behavior
 However, there is wide variation in approaches
GRADE Working Group, BMJ

Before GRADE
Level of evidence (source of evidence):
I: SRs, RCTs
II: Cohort studies
III: Case-control studies
IV: Case series
V: Expert opinion
Grades of recommendation: A, B, C, D

Before GRADE
Level of evidence (source of evidence):
Ia: Meta-analysis
Ib: RCTs
II: Cohort studies
III: Case-control studies
IV: Case series
V: Expert opinion
Grades of recommendation: A, B, C, D

Is there any guidance here?
P: In patients with acute hepatitis C …
I: Should antiviral treatment be used …
C: Compared to no treatment …
O: To achieve viral clearance?
Organization (evidence level / recommendation):
AASLD (2009): B / Class I
VA (2006): II-1 / -/-
SIGN (2006): 1+ / A
AGA (2006): -/- / "Most authorities…"
UK (2008): IIb / B (firm evidence)

Question to the audience
By now…
A. …you are thoroughly confused
B. …you start treatment because treatment is recommended
C. …you don't start treatment because guidelines don't recommend it
D. …you look at the evidence yourself because past experience tells you that guidelines don't help

Until just recently…
AASLD: A = multiple RCTs or meta-analysis; B = single randomized trial or non-randomized studies; C = only consensus opinion of experts, case studies, or standard-of-care
AGA: Good = consistent, well-designed, well-conducted studies […]; Fair = limited by the number, quality, or consistency of individual studies […]; Poor = … important flaws, gaps in chain of evidence…
ACG: 1 = multiple published, well-controlled (?) randomized trials or a well designed systemic (?) meta-analysis; 2 = one quality-published (?) RCT, or published well-designed cohort/case-control studies; 3 = consensus of authoritative (?) expert opinions based on clinical evidence or from well-designed but uncontrolled or non-randomized clinical trials
ASGE: A = RCTs; B = RCTs with important limitations; C = observational studies; D = expert opinion

Institute of Medicine
March 2011 report: "Clinical Practice Guidelines We Can Trust"
 Establishing transparency
 Management of conflict of interest
 Guideline development group composition
 Evidence based on systematic reviews
 Method for rating strength of recommendations
 Articulation of recommendations
 External review
 Updating

GRADE: Grading of Recommendations Assessment, Development and Evaluation

50+ Organizations

Where GRADE fits in
1. Prioritize problems, establish the panel
2. Find/appraise or prepare a systematic review: searches, selection of studies, data collection and analysis
3. (Re-)assess the relative importance of outcomes
4. Prepare an evidence profile: quality of evidence for each outcome and summary of findings
5. Guidelines: assess overall quality of evidence; decide direction and strength of the recommendation
6. Draft the guideline
7. Consult with stakeholders and/or external peer reviewers
8. Disseminate the guideline
9. Implement the guideline and evaluate

GRADE is outcome-centric
Figure: older systems attach one quality level (I, II, III…) to the body of evidence as a whole; GRADE rates quality separately for each outcome (outcome #1, #2, #3).

Importance of outcomes
Question (PICO): Should health care workers receive a booster vaccination vs. not?
Intermediate outcomes: positive hepatitis B core antibody; amnestic response to re-challenge; loss of protective surface antibody
Final health outcomes: mortality; liver cancer; liver cirrhosis; chronic hepatitis B infection; acute symptomatic infection

GRADE expands the determinants of quality of evidence
 Methodological limitations (risk of bias): allocation concealment, failure of blinding, losses to follow-up, incomplete reporting
 Inconsistency of results
 Indirectness of evidence
 Imprecision of results
 Publication bias

GRADE: Quality of evidence
For guidelines: the extent to which our confidence in an estimate of the treatment effect is adequate to support a particular recommendation.
Although quality of evidence is a continuum, we suggest using 4 categories:
 High
 Moderate
 Low
 Very low

Determinants of quality
 RCTs start high
 Observational studies start low

What is the study design?

Assessment of detailed design and execution (risk of bias / methodological limitations)
For RCTs:
 Lack of allocation concealment
 No true intention-to-treat principle
 Inadequate blinding
 Loss to follow-up
 Early stopping for benefit

Cochrane risk of bias tool
1. Random sequence generation
2. Allocation concealment
3. Blinding of participants and personnel
4. Blinding of outcome assessment
5. Incomplete outcome data
6. Selective reporting
7. Other bias
Judgment: low risk of bias, high risk of bias, unclear

Cochrane risk of bias graph (figure)

Inconsistency of results
 Look for an explanation for inconsistency: patients, intervention, comparator, outcome, methods
 Judgment:
  variation in size of effect
  overlap in confidence intervals
  statistical significance of heterogeneity (I²)
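The I² statistic quantifies that last judgment: the share of between-study variability attributable to heterogeneity rather than chance. A minimal sketch of the standard Higgins and Thompson computation, assuming inverse-variance weights (function names and numbers are illustrative):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviations of the study effects
    from the inverse-variance pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

def i_squared(q, n_studies):
    """I^2 = 100% * (Q - df) / Q, floored at 0, with df = n_studies - 1."""
    df = n_studies - 1
    if q <= df or q == 0:
        return 0.0
    return 100.0 * (q - df) / q

# Two studies with very different effects -> substantial heterogeneity
q = cochran_q([0.0, 1.0], [0.1, 0.1])
print(i_squared(q, 2))  # 80.0
```

As the slide stresses, the number alone is not the judgment: one still looks for an explanation (patients, intervention, comparator, outcome, methods) before downgrading.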

Q: Is there heterogeneity here?
Example: neurological or vascular complications or death within 30 days of endovascular treatment (stent, balloon angioplasty) vs. surgical carotid endarterectomy (CEA)

Indirectness of evidence
 Indirect comparisons: we are interested in a head-to-head comparison of drug A versus drug B (e.g., telaprevir versus boceprevir in hepatitis C treatment)
 Differences in:
  patients (early cirrhosis vs. end-stage cirrhosis)
  interventions (CRC screening: flexible sigmoidoscopy vs. colonoscopy)
  comparator (e.g., differences in dose)
  outcomes (non-steroidal safety: ulcer on endoscopy vs. symptomatic ulcer complications)

Imprecision of results
Small sample size leads to:
 a small number of events
 wide confidence intervals
 uncertainty about the magnitude of effect
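The link between sample size and confidence-interval width is easy to make concrete. A sketch using the standard log risk-ratio approximation; the event counts are purely illustrative assumptions:

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio with an approximate 95% CI, computed from the
    standard error of the log risk ratio."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Same 25% relative risk reduction; ten times the sample size:
small = risk_ratio_ci(15, 100, 20, 100)      # CI crosses 1.0 -> imprecise
large = risk_ratio_ci(150, 1000, 200, 1000)  # CI excludes 1.0
```

With 100 patients per arm the interval runs from roughly 0.41 to 1.38 and cannot exclude no effect; with 1000 per arm it narrows to roughly 0.62 to 0.91, even though the point estimate is the same.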

Imprecision example: any stroke (or death) within 30 days of endovascular treatment (stent, balloon angioplasty) vs. surgical carotid endarterectomy (CEA)

Publication bias
 Reporting of studies: suspect publication bias, particularly when the evidence consists of a number of small studies

All phase II and III licensing trials for antidepressant drugs between 1987 and 2004 (74 trials; 23 were not published)

Quality assessment criteria
Study design: RCTs start high; observational studies start low
Lower if…
 Study limitations (design and execution)
 Inconsistency
 Indirectness
 Imprecision
 Publication bias
Higher if… (what can raise the quality of evidence?)
Quality of evidence: High, Moderate, Low, Very low

BMJ 2003;327:1459–61


Question to the audience
You review all colonoscopies for average-risk screening in your health system and document the percentage of patients who developed a perforation after the procedure (evidence of free air on imaging). No comparison group without colonoscopy is available. Rate the quality of evidence for the outcome perforation:
A. High
B. Moderate
C. Low
D. Very low

Quality assessment criteria
Study design: RCTs start high; observational studies start low
Lower if…
 Study limitations (design and execution)
 Inconsistency
 Indirectness
 Imprecision
 Publication bias
Higher if…
 Large effect (e.g., RR 0.5)
 Very large effect (e.g., RR 0.2)
 Evidence of a dose-response gradient
 All plausible confounding would reduce a demonstrated effect, or would suggest a spurious effect when results show no effect
Quality of evidence: High, Moderate, Low, Very low
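GRADE is a judgment process, not a scoring system (a later slide makes this explicit), but the start-high/start-low bookkeeping in the table above can be sketched as a toy tally. Everything here, names included, is a hypothetical illustration, not GRADE tooling:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def rate_quality(randomized, serious_concerns=0, upgrade_factors=0):
    """Start high for RCTs, low for observational studies; move down one
    level per serious concern (risk of bias, inconsistency, indirectness,
    imprecision, publication bias) and up per upgrading factor
    (large effect, dose-response, confounding pattern), clamped."""
    start = 3 if randomized else 1  # index into LEVELS
    idx = max(0, min(len(LEVELS) - 1, start - serious_concerns + upgrade_factors))
    return LEVELS[idx]

print(rate_quality(True))                       # "high"
print(rate_quality(True, serious_concerns=2))   # "low"
print(rate_quality(False, upgrade_factors=1))   # "moderate"
```

The clamping mirrors the four-category scale: evidence cannot be rated below "very low" or above "high" no matter how many factors apply.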

Conceptualizing quality
High: we are very confident that the true effect lies close to that of the estimate of the effect.
Moderate: we are moderately confident in the estimate of effect: the true effect is likely to be close to the estimate of effect, but there is a possibility that it is substantially different.
Low: our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect.
Very low: we have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect.

GRADE evidence profile (table): columns for design, limitations, inconsistency, indirectness, imprecision, publication bias, relative and absolute risk, overall quality, and importance

From clinical question to recommendation
1. Formulate the clinical question (PICO); select outcomes and rate their importance (critical, important, not important)
2. Rate the quality of evidence for each outcome across studies, grading down or up as needed
3. Determine the overall quality of evidence (very low, low, moderate, high)
4. Formulate recommendations: for or against (direction), strong or weak (strength), by considering quality of evidence, balance of benefits/harms, and values and preferences; revise if necessary by considering resource use (cost)

Avoid action bias
Goalkeepers jump 94% of the time, yet if keepers remain stationary the chance of stopping the shot increases from 13% to 33%.

From evidence to recommendations
Old system: RCT → high-level recommendation; observational study → lower-level recommendation
GRADE: quality of evidence, the balance between benefits, harms, and burdens, and patients' values and preferences all feed into the recommendation

Strength of recommendation
Although the strength of recommendation is a continuum, we suggest using two categories: "strong" and "weak."
"The strength of a recommendation reflects the extent to which we can, across the range of patients for whom the recommendations are intended, be confident that desirable effects of a management strategy outweigh undesirable effects."

Four determinants of the strength of recommendation
Factors that can weaken the strength of a recommendation:
 Lower quality evidence: the higher the quality of evidence, the more likely a strong recommendation is warranted.
 Uncertainty about the balance of benefits versus harms and burdens: the larger the difference between the desirable and undesirable consequences, the more likely a strong recommendation is warranted; the smaller the net benefit and the lower the certainty for that benefit, the more likely a weak recommendation is warranted.
 Uncertainty or differences in patients' values: the greater the variability or uncertainty in values and preferences, the more likely a weak recommendation is warranted.
 Uncertainty about whether the net benefits are worth the costs: the higher the costs of an intervention (the more resources consumed), the less likely a strong recommendation is warranted.

Developing recommendations

Implications of a strong recommendation
 Population: most people in this situation would want the recommended course of action and only a small proportion would not
 Health care workers: most people should receive the recommended course of action
 Policy makers: the recommendation can be adopted as policy in most situations

Implications of a conditional recommendation
 Population: the majority of people in this situation would want the recommended course of action, but many would not
 Health care workers: be prepared to help people make a decision consistent with their own values (decision aids and shared decision making)
 Policy makers: there is a need for substantial debate and involvement of stakeholders

From systematic review to guideline development: summary
1. Formulate the question (PICO); select outcomes and rate their importance (critical, important, less important)
2. Across studies, create an evidence profile with GRADEpro: summary of findings and estimate of effect for each outcome
3. Rate the quality of evidence for each outcome: RCTs start high, observational data start low; grade down for (1) risk of bias, (2) inconsistency, (3) indirectness, (4) imprecision, (5) publication bias; grade up for (1) large effect, (2) dose response, (3) confounders
4. Rate the overall quality of evidence across outcomes (very low, low, moderate, high), based on the lowest quality among the critical outcomes
5. Formulate recommendations: for or against (direction), strong or weak (strength), by considering quality of evidence, balance of benefits/harms, and values and preferences; revise if necessary by considering resource use (cost)
6. Wording: "We recommend using…", "We suggest using…", "We recommend against using…", "We suggest against using…"

GRADE’s limitations  Evidence rating for alternative management strategies, not risk or prognosis per se.  Does not eliminate disagreements in interpreting the evidence – judgments on thresholds continue to be necessary  Requires some training in methodology to be applied optimally

What GRADE isn’t  Not another “risk of bias” tool  Not a quantitative system (no scoring required)  Not eliminate COI, but able to minimize  Not “expensive”  Builds on well established principles of EBM  Some degree of training is needed for any system  Proportionally adds minimal amount of extra time to a systematic review

Conclusion
GRADE is gaining acceptance as an international standard because it adds value:
1. GRADE has criteria for evidence assessment across a range of questions and outcomes
2. It is sensible, transparent, and systematic
3. It balances simplicity and methodological rigor
