Evidence-Based Practice: David Pfleger, NHS Grampian, Non-medical Prescribing Conference 2011


Definition: “Evidence-based medicine is the process of systematically reviewing, appraising and using clinical research findings to aid the delivery of optimum clinical care to patients.” Rosenberg and Donald, 1995

Evidence-Based Practice
Evidence-based practice is the integration of best research evidence with clinical expertise and patient values.
Best evidence: new evidence invalidates previously accepted treatments, tests and practices and replaces them with better ones.
Clinical expertise: the clinical experience and skills needed to identify the patient's individual situation.
Patient values: the patient's preferences, concerns and expectations.

How – according to Sackett
Step 1 – converting the need for information (about prevention, diagnosis, prognosis, therapy, causation etc.) into an answerable question.
Step 2 – tracking down the best evidence with which to answer that question.
Step 3 – critically appraising the retrieved evidence for validity (closeness to the truth), impact (magnitude of the effect) and applicability (usefulness to the clinical situation).
Step 4 – integrating the appraised evidence with clinical expertise and the patient's (individual or population) values and circumstances.
Step 5 – evaluating our effectiveness in completing steps 1–4, reflecting on what is needed and formulating a plan for improvement.

What is the Evidence?
I: Strong evidence from at least one systematic review of multiple, well-designed, randomised controlled trials.
II: Strong evidence from at least one properly designed randomised controlled trial of appropriate size.
III: Evidence from well-designed trials without randomisation: single-group pre- or post-treatment, cohort, time series or matched case-control studies.
IV: Evidence from well-designed, non-experimental studies from more than one centre or research group.
V: Opinions of respected authorities, based on clinical evidence, descriptive studies or reports of expert committees.

Why Are Systematic Reviews Needed?
- Around 20,000 journals and 2.5 million biomedical articles are published each year.
- Impossible to keep up with in detail, or even to interpret at times.
- Results of trials may be unclear, confusing or downright contradictory.
- Taken together, a clearer picture can emerge.

How Are Systematic Reviews Conducted? - I
- Define the question: intervention, patient group, settings.
- Search the literature: seeking an unbiased assessment, including non-English sources, and trying to avoid publication bias.
- Assess the studies: eligibility, study quality, independent assessors.

How Are Systematic Reviews Conducted? - II
- Combine the results: sometimes qualitatively, but usually numerically, using meta-analysis.
- Put the findings in context: quality of the studies, differences between them, impact of bias and chance.
- Can be very substantial undertakings - they cost money.

Assessing the Studies: Study Design
- Descriptive studies: correlation studies, case reports, case series.
- Analytical studies:
  - Observational: case-control, cohort.
  - Intervention / experimental: RCT, meta-analysis.

Case-Control Studies (schematic): starting from the population, cases (people with the event / outcome) and controls (people without the event / outcome) are identified, and their past exposure status (exposed / not exposed) is then ascertained. The direction of enquiry runs backwards in time, from outcome to exposure.

Case-Control Studies
Positive: suitable for rare outcomes; small sample sizes; good follow-up; can look at multiple risk factors / exposures; relatively fast, easy and cheap.
Negative: selection of controls must be representative of exposure in the general population (selection bias); retrospective data on exposure (recall bias).

Cohort Studies (schematic): starting from a population of people without the outcome, exposed and unexposed groups are identified and followed forward in time to see who develops the event / outcome and who does not. The direction of enquiry runs forwards, from exposure to outcome.

Cohort Studies - Strengths / Weaknesses
Positive: subjects can be matched; the time sequence is clear; validity of the data; multiple outcomes can be measured.
Negative: large cohorts needed for rare events; long follow-up (cost? loss to follow-up?); confounders; no randomisation; no blinding.

Randomised Controlled Trials - The Gold Standard
A population is selected and randomised into two groups. One group receives the intervention whilst the other does not (or receives a placebo). The outcomes for each group are then compared.
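As an illustration of the random allocation step, here is a minimal sketch of simple 1:1 randomisation. It is not from the slides; the participant IDs and the fixed seed are purely illustrative.

```python
import random

def randomise(participants, seed=42):
    """Simple 1:1 randomisation: shuffle the list, then split it into two equal arms."""
    rng = random.Random(seed)        # fixed seed only so the example is reproducible
    shuffled = participants[:]       # copy so the original list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (intervention arm, control arm)

# Hypothetical example: 200 anonymised participant IDs
intervention, control = randomise([f"P{i:03d}" for i in range(200)])
print(len(intervention), len(control))        # 100 100
```

Because allocation is left to chance, known and unknown confounders tend to be spread evenly across the two arms, which is the key advantage listed on the next slide.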

RCTs - Strengths / Weaknesses
Positive: confounders are distributed randomly; blinding is more likely; randomisation makes statistical analysis simpler.
Negative: expensive (time and money); volunteer bias; ethical issues; loss to follow-up; artificial conditions, so the advantage over pre-study practice may be questionable.

PICO-T – Helping Formulate the Question
P – Population / Patients
I – Intervention
C – Comparison
O – Outcome
T – Time

The RAMMbo acronym for assessing study bias: Recruitment, Allocation, Maintenance, Measurement of outcomes (blind or objective). (Shown alongside the PECOT framework diagram.)

Interpreting Risk
200 subjects aged 59 years or older, with previous heart disease and type 2 diabetes, randomised to two groups:
- 100 receive the experimental treatment
- 100 receive the control treatment
Follow-up is a mean of 5 years.
The endpoint is a composite of all CHD deaths and non-fatal MIs.

Results
                                            Treatment group    Control group
Number of subjects                          100                100
Number of CHD deaths or non-fatal MIs       20                 30
Probability of CHD death or non-fatal MI    0.2 (20%)          0.3 (30%)

Risks to be Considered
Relative risk: RR = 0.2 / 0.3 = 0.67 - with treatment, patients were 0.67 times as likely to die (or suffer a non-fatal MI) as the control group, i.e. the risk is reduced.
Attributable risk (absolute risk reduction): ARR = 0.3 - 0.2 = 0.1 - the actual risk of dying from CHD-related causes or suffering a non-fatal MI has dropped from 30% without treatment to 20% with it, i.e. the absolute risk has fallen by 10 percentage points due to the treatment.
Number needed to treat: NNT = 1 / 0.1 = 10 - we would need to treat ten people for one of them to avoid dying from CHD-related causes or suffering a non-fatal MI.
(Rates from the table above: treatment group 0.2 (20%), control group 0.3 (30%).)
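A minimal sketch of these calculations in Python; the function and variable names are illustrative, not from the slides.

```python
def risk_measures(risk_treatment, risk_control):
    """Return relative risk, absolute risk reduction and number needed to treat."""
    rr = risk_treatment / risk_control     # relative risk
    arr = risk_control - risk_treatment    # absolute (attributable) risk reduction
    nnt = 1 / arr                          # number needed to treat
    return rr, arr, nnt

# Trial above: 20/100 events with treatment vs 30/100 with control
rr, arr, nnt = risk_measures(20 / 100, 30 / 100)
print(f"RR = {rr:.2f}, ARR = {arr:.2f}, NNT = {nnt:.0f}")   # RR = 0.67, ARR = 0.10, NNT = 10
```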

What if the Baseline Risk Is Lower?
With 2 events per 100 in the treatment group and 3 per 100 in the control group (rates 0.02 (2%) vs 0.03 (3%)):
RR = 0.02 / 0.03 = 0.67 (unchanged)
ARR = 0.03 - 0.02 = 0.01
NNT = 1 / 0.01 = 100
The relative risk reduction looks the same, but 100 people must now be treated for one of them to benefit.
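Reusing the illustrative risk_measures helper sketched above (it must already be defined in the same session), the lower-baseline scenario works out as follows.

```python
rr, arr, nnt = risk_measures(2 / 100, 3 / 100)
print(f"RR = {rr:.2f}, ARR = {arr:.2f}, NNT = {nnt:.0f}")   # RR = 0.67, ARR = 0.01, NNT = 100
```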

Estimation
- It is rare that we observe the entire population; normally we use a representative sample.
- Ideally the mean, risk or whatever we are estimating would be the same in the sample as in the entire population.
- Samples are not ‘perfect’, so in practice this is not the case: estimates vary from sample to sample.
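A minimal simulation of this sampling variability (illustrative only; the assumed population risk of 30% mirrors the control group above):

```python
import random

rng = random.Random(1)
population_risk = 0.30   # assumed 'true' event risk in the whole population

# Draw five samples of 100 people and estimate the risk from each
for i in range(5):
    sample = [rng.random() < population_risk for _ in range(100)]
    print(f"sample {i + 1}: estimated risk = {sum(sample) / 100:.2f}")
# The estimates scatter around 0.30 rather than equalling it exactly.
```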

Confidence Intervals
- A range within which the real population mean or risk is likely to fall.
- We usually use the 95% confidence interval.
- A 95% CI says that we are 95% confident that the true mean falls within those limits.
- It is important to see how spread out those limits are and what clinical significance those limits have.
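As a sketch, here is a 95% confidence interval for the treatment group's 20% event rate (n = 100), using the normal approximation for a proportion; this particular calculation is an assumption, not something shown in the slides.

```python
from statistics import NormalDist

events, n = 20, 100
p = events / n                                  # observed proportion (0.20)
se = (p * (1 - p) / n) ** 0.5                   # standard error of the proportion
z = NormalDist().inv_cdf(0.975)                 # ~1.96 for a 95% interval
lower, upper = p - z * se, p + z * se
print(f"95% CI: {lower:.2f} to {upper:.2f}")    # roughly 0.12 to 0.28
```

Note how wide the interval is with only 100 subjects per arm: the clinical significance of values at either limit may differ considerably.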

Statistical Inference: Hypothesis Testing
- Asks how likely it is that observed differences are the result of sampling error rather than real differences between the groups.
- The null hypothesis states that any differences between the two groups are due to chance, i.e. there is no real difference between them.
- Normally one group is the study group and one is the control.
- We are looking for a real difference between the groups, i.e. rejection of the null hypothesis.

P Values
- A p value is a probability.
- As p approaches 1, it becomes more likely that something is going to happen; as p approaches 0, it becomes less likely.
- We normally use the p value to measure how likely it is that a result is due to chance.
- A p value approaching 0 tells us that the result is unlikely to have occurred by chance.

If we reject the null hypothesis, saying that the result is statistically significant at the 5% level (i.e. p < 0.05), then we are saying that the rejection of the null hypothesis carries a 5% risk of being incorrect.
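A sketch of such a test for the trial above (20/100 vs 30/100 events), using a two-proportion z-test; the choice of this specific test is an assumption, not something named in the slides.

```python
from statistics import NormalDist

events_t, n_t = 20, 100     # treatment group
events_c, n_c = 30, 100     # control group

p_t, p_c = events_t / n_t, events_c / n_c
p_pool = (events_t + events_c) / (n_t + n_c)                 # pooled proportion under the null
se = (p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c)) ** 0.5    # standard error under the null
z = (p_t - p_c) / se
p_value = 2 * NormalDist().cdf(-abs(z))                      # two-sided p value
print(f"z = {z:.2f}, p = {p_value:.3f}")                     # p is about 0.10, so not significant at the 5% level
```

Despite the apparently large difference in risk, the sample is small enough that the null hypothesis would not be rejected at the 5% level here.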

Before You Run Off to Do a Systematic Review - Sackett Says:
- For situations encountered every day, it is key that the practitioner is up to date at all times, so time spent on steps 1 (question forming), 2 (searching) and 3 (appraising) will be well spent.
- For situations encountered less frequently, it may be appropriate to spend the time looking for already critically appraised evidence in the area of interest, so time goes on steps 1 and 2 rather than 3; the emphasis is on how that evidence was derived and how it applies to practice.
- For situations outwith normal practice, recommendations / guidance tend to be applied ‘blindly’. Sackett describes this as the replicating mode and emphasises the problems with this style of practice when the evidence base upon which the guidance was generated is unknown.

Coping with the Overload: Three Possible Things You Might Try
A. Read an evidence-based abstraction journal (and cancel other journals).
B. Keep a logbook of your own clinical questions.
C. Run a case-discussion journal club within your practice / with peers.

Take Home
- EBP needs the basic skills and the use of those skills.
- It needs to be part of your everyday practice.
- It needs to be used in the context of your individual patient.
- It is one tool to use, rather than an end in itself.