Grading evidence and recommendations: The GRADE approach


1 Grading evidence and recommendations: The GRADE approach
Holger Schünemann, MD, PhD, for the GRADE Working Group

2 "Professional good intentions and plausible theories are insufficient for selecting policies and practices for protecting, promoting and restoring health." (Iain Chalmers)

3 How can we judge the extent of our confidence that adherence to a recommendation will do more good than harm?

4 Grades of Recommendation Assessment, Development and Evaluation

5 What do you know about GRADE?
- Have prepared a guideline
- Read the BMJ paper
- Have prepared a systematic review and a summary of findings table
- Have attended a GRADE meeting, workshop or talk

6 About GRADE
- Began as an informal working group in 2000
- Researchers and guideline developers with an interest in methodology
- Aim: to develop a common system for grading the quality of evidence and the strength of recommendations that is sensible, and to explore the range of interventions and contexts for which it might be useful*
- 13 meetings (~10–35 attendees)
- Evaluation of existing systems and reliability*
- Workshops at Cochrane Colloquia, WHO and GIN since 2000
*GRADE Working Group. CMAJ 2003, BMJ 2004, BMC 2004, BMC 2005

7 GRADE Working Group
- David Atkins, chief medical officer (a)
- Dana Best, assistant professor (b)
- Peter A Briss, chief (c)
- Martin Eccles, professor (d)
- Yngve Falck-Ytter, associate director (e)
- Signe Flottorp, researcher (f)
- Gordon H Guyatt, professor (g)
- Robin T Harbour, quality and information director (h)
- Margaret C Haugh, methodologist (i)
- David Henry, professor (j)
- Suzanne Hill, senior lecturer (j)
- Roman Jaeschke, clinical professor (k)
- Gillian Leng, guidelines programme director (l)
- Alessandro Liberati, professor (m)
- Nicola Magrini, director (n)
- James Mason, professor (d)
- Philippa Middleton, honorary research fellow (o)
- Jacek Mrukowicz, executive director (p)
- Dianne O'Connell, senior epidemiologist (q)
- Andrew D Oxman, director (f)
- Bob Phillips, associate fellow (r)
- Holger J Schünemann, associate professor (g, s)
- Tessa Tan-Torres Edejer, medical officer/scientist (t)
- Helena Varonen, associate editor (u)
- Gunn E Vist, researcher (f)
- John W Williams Jr, associate professor (v)
- Stephanie Zaza, project director (w)
Affiliations:
a) Agency for Healthcare Research and Quality, USA
b) Children's National Medical Center, USA
c) Centers for Disease Control and Prevention, USA
d) University of Newcastle upon Tyne, UK
e) German Cochrane Centre, Germany
f) Norwegian Centre for Health Services, Norway
g) McMaster University, Canada
h) Scottish Intercollegiate Guidelines Network, UK
i) Fédération Nationale des Centres de Lutte Contre le Cancer, France
j) University of Newcastle, Australia
k) McMaster University, Canada
l) National Institute for Clinical Excellence, UK
m) Università di Modena e Reggio Emilia, Italy
n) Centro per la Valutazione della Efficacia della Assistenza Sanitaria, Italy
o) Australasian Cochrane Centre, Australia
p) Polish Institute for Evidence Based Medicine, Poland
q) The Cancer Council, Australia
r) Centre for Evidence-based Medicine, UK
s) National Cancer Institute, Italy
t) World Health Organisation, Switzerland
u) Finnish Medical Society Duodecim, Finland
v) Duke University Medical Center, USA
w) Centers for Disease Control and Prevention, USA

8 Why guidelines? Users are looking for different things:
- just tell me what to do (recommendation)
- what to do, and on strong or weak grounds (recommendation and grade)
- recommendation, grade, evidence summary, values
- systematic review, value statement
- evidence from individual studies

9 Grading systems: current profusion. Can there be consensus?
Trade-off of benefits and risks:
- do it (or don't do it)
- probably do it (or probably don't do it)
Quality of underlying evidence:
- high quality (well done RCT)
- intermediate (quasi-RCT)
- low (well done observational)
- very low (anything else)

10 Moving down
- poor RCT design or implementation
- inconsistency
- indirectness (want A vs B, but have A vs C and B vs C; patients, interventions, outcomes)
- reporting bias

11 Moving up
- magnitude of effect
- dose-response
- biases favor control

12 When to make a recommendation?
- Never: patient values differ; just lay out the benefits and risks
- Only when the evidence is strong enough: when it is very weak, it is too uncertain
- Clinicians need guidance; intense study demands a decision

13 Why bother about grading?
People draw conclusions about:
- the quality of evidence
- the strength of recommendations
Systematic and explicit approaches can help:
- protect against errors
- resolve disagreements
- facilitate critical appraisal
- communicate information
However, there is wide variation in currently used approaches.

14 Who is confused?
Organization | Evidence | Recommendation
USPSTF | II-2 | B
ACCP | C+ | 1
GCPS | Strong | Strongly recommended

15 Still not confused?
Recommendation for the use of oral anticoagulation in patients with atrial fibrillation and rheumatic mitral valve disease:
Organization | Evidence | Recommendation
AHA | B | Class I
ACCP | C+ | 1
SIGN | IV | C

16 Guidelines development process

17 Example: ACCP
- First ACCP guidelines in 1986 (J. Hirsh, J. Dalen)
- Initially aimed at consensus
- Methodologists involved since the beginning
- Now formally convening every 2 to 3 years
- > copies in 2001
- Seventh conference held in 2003: 87 panel members, 22 chapters across subspecialties
- 565 recommendations, 230 new
- Evidence-based recommendations

18 What makes guidelines evidence based (in 2005)?
- Evidence – recommendation: transparent link
- Explicit inclusion criteria
- Comprehensive search
- Standardized consideration of study quality
- Conduct/use meta-analysis
- Grade recommendations
- Acknowledge values and preferences underlying recommendations
(This will highlight what I will focus on.)
Schünemann et al. Chest 2004

19 Schünemann et al. Chest 2004

20 Schünemann HJ et al. Chest 2004
Obviously this looks confusing without the media animation, so you will have to look at the slide presentation.

21 Transparent link between evidence and recommendations & Explicit inclusion criteria
Albers et al. Chest 2004

22

23 Quality of evidence
The extent to which one can be confident that an estimate of effect or association is correct. It depends on the:
- study design (e.g. RCT, cohort study)
- study quality/limitations (protection against bias, e.g. concealment of allocation, blinding, follow-up)
- consistency of results
- directness of the evidence, including the:
  - populations (those of interest versus similar; for example, older, sicker or with more co-morbidity)
  - interventions (those of interest versus similar; for example, drugs within the same class)
  - outcomes (important versus surrogate outcomes)
  - comparison (A vs C versus A vs B & C vs B)

24 Quality of evidence
The quality of the evidence (i.e. our confidence) may also be REDUCED when there is:
- sparse or imprecise data
- reporting bias
The quality of the evidence (i.e. our confidence) may be INCREASED when there is:
- a strong association
- a dose-response relationship
- all plausible confounders would have reduced the observed effect
- all plausible biases would have increased the observed lack of effect
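The deck treats these moves as structured judgements rather than arithmetic, but a small illustration can make the logic concrete. The sketch below is not part of the original slides and is not GRADE software: the starting levels (RCT evidence starts high, observational evidence starts low), the one-level move per factor, and the factor names are all simplifying assumptions.

```python
# Illustrative sketch only: a simplified, hypothetical encoding of the GRADE-style
# up/down moves described above. Real GRADE grading is a judgement, not a formula.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(study_design, downgrades=(), upgrades=()):
    """Return a quality level for a body of evidence.

    study_design: "rct" or "observational"
    downgrades:   e.g. "study limitations", "inconsistency", "indirectness",
                  "sparse or imprecise data", "reporting bias"
    upgrades:     e.g. "strong association", "dose-response",
                  "plausible confounders would reduce the effect"
    """
    level = 3 if study_design == "rct" else 1   # assumed starting points: high vs low
    level -= len(downgrades)                    # each concern moves the grade down one level
    level += len(upgrades)                      # each strength moves it up one level
    return LEVELS[max(0, min(level, 3))]        # clamp to the very low .. high range

# Example: an RCT with inconsistency and imprecision ends up "low";
# an observational study with a dose-response gradient ends up "moderate".
print(grade_quality("rct", downgrades=["inconsistency", "sparse or imprecise data"]))
print(grade_quality("observational", upgrades=["dose-response"]))
```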

25 Quality assessment criteria

26 Categories of quality
- High: further research is very unlikely to change our confidence in the estimate of effect.
- Moderate: further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
- Low: further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
- Very low: any estimate of effect is very uncertain.

27 Judgements about the overall quality of evidence
Most systems are not explicit. Options include:
- strongest outcome
- primary outcome
- benefits weighted
- separate grades for benefits and harms (no overall grade)
- weakest outcome
In GRADE, the overall quality is based on the lowest quality of all the critical outcomes, a judgement that goes beyond the scope of a systematic review; see the sketch below.
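To complement the options above, here is a minimal sketch, again a hypothetical illustration rather than anything from the slides, of the "lowest of all the critical outcomes" rule: outcomes not judged critical are ignored, and the weakest critical outcome sets the overall grade. The choice of which outcomes count as critical in the example is assumed.

```python
# Illustrative sketch of the "lowest of the critical outcomes" rule.
# Assumes each outcome has already been graded on the four GRADE levels.

ORDER = {"very low": 0, "low": 1, "moderate": 2, "high": 3}

def overall_quality(outcome_grades, critical_outcomes):
    """Overall quality = the lowest grade among the outcomes judged critical."""
    critical = [outcome_grades[outcome] for outcome in critical_outcomes]
    return min(critical, key=ORDER.get)

grades = {"CHD": "high", "hip fracture": "moderate", "gall bladder disease": "low"}
print(overall_quality(grades, critical_outcomes=["CHD", "hip fracture"]))
# -> "moderate": the outcome not judged critical does not pull the overall grade down
```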

28 Strength of recommendation
The extent to which one can be confident that adherence to a recommendation will do more good than harm. It depends on the:
- trade-offs (the relative value attached to the expected benefits, harms and costs)
- quality of the evidence
- translation of the evidence into practice in a specific setting
- uncertainty about baseline risk

29 Judgements about the balance between benefits and harms
- Before considering cost and making a recommendation
- For a specified setting, taking into account issues of translation into practice

30 Clarity of the trade-offs between the benefits and the harms
- the estimated size of the effect for each main outcome
- the precision of these estimates
- the relative value attached to the expected benefits and harms
- important factors that could be expected to modify the size of the expected effects in specific settings (e.g. proximity to a hospital)

31 Judgements about recommendations
- Do it
- Probably do it
- No recommendation
- Probably don't do it
- Don't do it
This could include considerations of costs, i.e. "Is the net gain (benefits - harms) worth the costs?"

32 Will GRADE lead to change?
Should healthy asymptomatic postmenopausal women have been given oestrogen + progestin for prevention in 1992?
Quality of evidence across studies for:
- CHD
- hip fracture
- colorectal cancer
- breast cancer
- stroke
- thrombosis
- gall bladder disease
Quality of evidence across critical outcomes
Balance between benefits and harms
Recommendations

33 Evidence profile: quality assessment
Oestrogen + progestin for prevention in 1992 (before WHI and HERS)
Oestrogen + progestin versus usual care

34 Oestrogen + progestin for prevention after WHI and HERS

35 Further developments
- Diagnostic tests
- Costs
- (Equity)
- Empirical evaluations

36 GRADE Profiler

37 GRADE profiler (GRADEpro)

38

39

40

41

42

43

44 Empirical evaluations
- Critical appraisal of other systems
- Pilot test + sensibility
- "Case law" + practical experience
- Guidance for judgements
- Single studies
- Sparse data or imprecise data
- Agreement
- Validity?
- Comparisons with other systems
- Alternative presentations

45 Comparison of GRADE and other systems
- Explicit definitions
- Explicit, sequential judgements
- Components of quality
- Overall quality
- Relative importance of outcomes
- Balance between health benefits and harms
- Balance between incremental health benefits and costs
- Consideration of equity
- Evidence profiles
- International collaboration
- Software
- Consistent judgements?
- Communication?

46 Who is interested in GRADE?
- American Endocrine Society
- American College of Chest Physicians (ACCP)
- Italian National Cancer Institute
- Clinical Evidence
- Norwegian Centre for Health Services
- UpToDate
- Close relationship with the Cochrane Collaboration
- American Society of Clinical Oncology (ASCO)
- American Thoracic Society (ATS)

47 Questions?

48 Taking account of costs
- Include important (disaggregated) costs in evidence summaries and balance sheets when relevant
- May be useful to aggregate and value (in monetary terms)
- Always include disaggregated resource utilisation
- Note when important information is missing
- Published cost-effectiveness analyses are rarely helpful
- Assess the quality of the evidence for important costs (consumption of resources) as for other effects (were quantities measured reliably?)
- If costs are critical to a decision, low quality evidence can lower the overall quality of evidence
- Costs are negotiable (the value of resources)
- There are many possible criteria for making a recommendation

49 Should activated protein C be given to patients with severe sepsis?
An example with costs

50 GRADE evidence profile: Activated Protein C for sepsis
Name: Jaeschke and Schünemann
Date: September 2004
Question: Should APC be used for severe sepsis?
Setting: ICU in Paris
Baseline risk: severe sepsis or septic shock > 24 h
References:
- Effectiveness: Bernard. Efficacy and safety of recombinant human activated protein C for severe sepsis. NEJM 2001;344:699. Manns. An economic evaluation of activated protein C treatment for severe sepsis. NEJM 2002;347:993.
- Cost-effectiveness: Manns. An economic evaluation of activated protein C treatment for severe sepsis. NEJM 2002;347:993.

51 Possible criteria for making a recommendation
- Treatment effect
- Adverse effects
- Cost
- Cost-effectiveness
- Equity
- Seriousness of the problem
- Administrative restrictions

52 Quality assessment

53 Summary of findings

