Critical Appraisal Dr Samantha Rutherford


Critical Appraisal Dr Samantha Rutherford, Research and Development Manager

Evidence-based dentistry Evidence-based dentistry requires the integration of systematic assessments of clinically relevant scientific evidence, relating to the patient's oral and medical condition and history, with the dentist's clinical expertise and the patient's treatment needs and preferences.

Evidence-based practice Evidence-based practice sits at the intersection of best evidence, clinical experience and patient values.

How should research evidence contribute to dental care? Primary research is critically appraised and combined in systematic reviews; findings are then disseminated (through education, professional organisations and policy makers) and implemented (by informed patients and by clinicians using the evidence) so that effective care is identified and delivered. This journey from research to practice can take around 15 years.

Evidence in dentistry Evidence is produced continually: more than 900 dental journals are in print, and every month sees publication of around 2000 dental or oral health articles, 150 RCTs and 3 new practice guidelines. There is a lack of evidence from primary care, and a change of practice is required for patients to benefit.

There is some help...

Knowledge into action cycle The cycle runs from knowledge creation (research and publication), through knowledge transfer and implementation, to evaluation of changed behaviours, and back to identifying new knowledge needs. Adapted from Graham et al (2006). Lost in Knowledge Translation: Time for a Map? Journal of Continuing Education in the Health Professions.

Critical Appraisal How can you tell whether a piece of research has been done properly and that the information it reports is reliable and trustworthy? Critical appraisal is the process of carefully assessing and interpreting evidence through the systematic consideration of its validity, relevance and results i.e. what are the study’s strengths and weaknesses and can it be used to inform your practice?

Research articles
Abstract: summary of the paper (abstracts can contain omissions or inaccuracies)
Introduction: review of the literature. Is there a gap in current knowledge? What is the aim of the study?
Methods: is the paper worth reading?
Results: what does the paper tell you?
Discussion: what are the implications for your practice? What else is of interest?

Issues to consider Is the study design appropriate? How well was the study conducted? What are the results? Are the results relevant to your clinical practice?

Study design Study design can be observational or experimental. Observational: case reports, case series, cross-sectional surveys, case-control studies, cohort studies. Experimental: intervention studies, clinical trials.

Evidence hierarchy (strongest at the top): Cochrane reviews; systematic reviews; randomised controlled trials; cohort studies; case-control studies; case series and case reports; editorials and expert opinion.

Forest plots In a typical forest plot, the results of component studies are shown as blobs or squares centred on the point estimate of the result of each study. A horizontal line runs through the square to show its confidence interval, usually but not always a 95% confidence interval. The overall estimate from the meta-analysis and its confidence interval are put at the bottom, represented as a diamond: the centre of the diamond represents the pooled point estimate, and its horizontal tips represent the confidence interval. Significance is achieved at the set level if the diamond is clear of the line of no effect (for an odds ratio, the vertical line at 1).
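The pooling step behind the diamond can be sketched numerically. Below is a minimal fixed-effect (inverse-variance) meta-analysis; the study estimates and standard errors are made-up illustrations, not data from the presentation:

```python
import math

# Hypothetical study results: (log odds ratio, standard error).
# Illustrative numbers only.
studies = [
    (math.log(0.80), 0.20),
    (math.log(0.65), 0.25),
    (math.log(0.90), 0.15),
]

# Fixed-effect inverse-variance pooling: each study is weighted by 1/SE^2.
weights = [1 / se**2 for _, se in studies]
pooled_log_or = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval on the log scale, then back-transform to odds ratios.
low = math.exp(pooled_log_or - 1.96 * pooled_se)
high = math.exp(pooled_log_or + 1.96 * pooled_se)
pooled_or = math.exp(pooled_log_or)

# Significant at the 5% level only if the CI is clear of the line of
# no effect (an odds ratio of 1).
significant = not (low <= 1 <= high)
```

Here the pooled interval straddles 1, so the diamond would touch the line of no effect and the pooled result is not significant at the 5% level.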

Why do an RCT? Observational studies (cohorts) have consistently shown that people who eat more fruit and vegetables, which are rich in beta-carotene, have lower rates of cardiovascular disease and cancer. However, experimental studies (trials) investigating the association between beta-carotene intake and cardiovascular mortality tell a different story…

Cochrane Collaboration Founded in 1993 International, non-profit making, independent Vision: Help clinicians, researchers, purchasers, and patients worldwide make decisions about healthcare based on up-to-date, reliable and accurate information

However... A good observational study beats a poorly conducted RCT. The most appropriate design varies with the question: interventions - RCT; prognosis/harm - cohort; rare disease - case-control.

What study design? To evaluate how effective paracetamol is at relieving pain? RCT. To evaluate a new filling material? RCT or cohort. To examine prognosis for individuals with pocketing >6 mm? Cohort. To examine if smoking increases the risk of oral cancer? Case-control or cohort. An RCT may not always be the most appropriate design.

Study Quality How well was the study conducted? Has bias been minimised? Sources of bias: Selection - comparability of groups Performance - care provided to each group Measurement - outcome assessment Attrition - dropouts/withdrawals

Selection bias Were the participants truly representative of the population? What were the inclusion/exclusion criteria? How were the participants recruited? Were there systematic differences between comparison groups?

Selection bias How can it be ensured that each individual entered into the trial has the same chance of receiving each of the possible interventions? Randomisation ensures that the known and unknown confounding factors are equal in both groups Good: computer generated off-site allocation. Poor: coin-flip
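The "computer generated" allocation praised above can be sketched as permuted-block randomisation, which keeps group sizes balanced as recruitment proceeds. This is a minimal illustration (the block size, seed and group labels are assumptions); in a real trial the sequence would also be generated and concealed off-site:

```python
import random

def block_randomise(n_participants, block_size=4, seed=2024):
    """Generate an allocation sequence using permuted blocks.

    Within each block, half the slots are 'A' (intervention) and half
    'B' (control), shuffled independently, so group sizes never drift
    far apart during recruitment.
    """
    assert block_size % 2 == 0
    rng = random.Random(seed)  # fixed seed so the sequence is reproducible
    sequence = []
    while len(sequence) < n_participants:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

allocation = block_randomise(20)
```

A coin-flip, by contrast, gives no balance guarantee and is easy for recruiters to subvert, which is why it appears under "Poor" on the slide.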

How well was the study conducted? Were there systematic differences in care provided to participants in a study (performance bias)? Are all important outcomes considered? Are there systematic differences in how outcomes are measured for the groups being compared (measurement bias)? Are there systematic differences between treatment groups with regard to withdrawals or exclusions (attrition bias)?

Minimising bias Blinding can reduce performance bias/ measurement bias Study leads can’t influence who gets the intervention Patients don’t know what treatment they have been given (BEWARE placebo effect) Outcome assessors don’t know who received the intervention so outcomes cannot be influenced Not always possible (e.g. surgery vs. drug) Intention to treat analysis can reduce attrition bias

Intention to treat Assessment is based on the group participants were initially allocated to. Often used to assess clinical effectiveness rather than efficacy as it mirrors actual practice e.g. in real life not everyone adheres to treatments if there are side effects Other statistical methods can be used to take missing data into account
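The difference between intention-to-treat and a per-protocol (as-treated) analysis can be shown with a toy dataset; the participants below are invented purely for illustration:

```python
# Each participant records the group they were allocated to, the
# treatment they actually received, and an outcome (1 = success).
participants = [
    {"allocated": "drug", "received": "drug", "outcome": 1},
    {"allocated": "drug", "received": "placebo", "outcome": 0},  # non-adherent
    {"allocated": "drug", "received": "drug", "outcome": 1},
    {"allocated": "placebo", "received": "placebo", "outcome": 0},
    {"allocated": "placebo", "received": "drug", "outcome": 1},  # crossover
    {"allocated": "placebo", "received": "placebo", "outcome": 0},
]

def success_rate(group, key):
    # Intention to treat analyses by key="allocated" (as randomised);
    # a per-protocol analysis uses key="received" (as treated).
    members = [p for p in participants if p[key] == group]
    return sum(p["outcome"] for p in members) / len(members)

itt_drug = success_rate("drug", "allocated")          # analysed as randomised
per_protocol_drug = success_rate("drug", "received")  # analysed as treated
```

Because the non-adherent and crossover participants move between groups, the per-protocol estimate looks better than the intention-to-treat one; ITT preserves the protection of randomisation and mirrors real practice.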

Other issues Sample size – was the study big enough? Duration of follow-up Were methods of assessment reasonable? Are results presented for all outcomes? Do the numbers add up? Are the results statistically significant (P<0.05, 95% confidence interval)? If so, are they clinically important?

Which would you choose?
Treatment reduces the risk of having decay by about 62%.
Treatment reduces the odds of having decay by about 83%.
Treatment produces an absolute reduction in risk of decay of 42%.
(All three statements describe the same trial result.)

2x2 tables Hypothetical trial of fissure sealant on permanent molars:

            Decay   No decay   Total
Sealant         6         18      24
Control        16          8      24

Relative risk reduction (62%): Risk(sealant) = 6/24 = 0.25; Risk(control) = 16/24 = 0.667; 0.667 - 0.25 = 0.417; 0.417/0.667 = 0.625.
Relative odds reduction (83%): Odds(sealant) = 6/18 = 0.333; Odds(control) = 16/8 = 2; 2 - 0.333 = 1.667; 1.667/2 = 0.834.
Absolute risk reduction (42%): 0.667 - 0.25 = 0.417.
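These calculations can be checked directly; a short sketch using the 2x2 counts implied by the slide (6 of 24 sealed molars decayed, versus 16 of 24 controls):

```python
# 2x2 counts from the hypothetical fissure-sealant trial:
#                 decay   no decay   total
# sealant group      6         18       24
# control group     16          8       24
decay_sealant, n_sealant = 6, 24
decay_control, n_control = 16, 24

risk_sealant = decay_sealant / n_sealant   # 0.25
risk_control = decay_control / n_control   # 0.667

# Absolute risk reduction: difference in risks (~42 percentage points).
arr = risk_control - risk_sealant

# Relative risk reduction: ARR as a fraction of the control risk (~62%).
rrr = arr / risk_control

# Odds in each group, and the relative odds reduction (~83%).
odds_sealant = decay_sealant / (n_sealant - decay_sealant)   # 6/18
odds_control = decay_control / (n_control - decay_control)   # 16/8
odds_reduction = (odds_control - odds_sealant) / odds_control
```

The same trial yields 62%, 83% and 42% depending on how the effect is framed, which is why the previous slide's "which would you choose?" is a trick question.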

Are the results relevant to your clinical practice? Not all valid research is relevant. How generalisable are the findings? Is it feasible to implement the findings? Use PICO (Patient, Intervention, Comparison, Outcome) to determine if the study is relevant to answering your question.

All studies need careful appraisal In general... Experimental studies are less susceptible to bias than observational studies Prospective studies are less susceptible to bias than retrospective studies Controlled studies are less susceptible to bias than uncontrolled studies All studies need careful appraisal Be cynical!

Tools Checklists www.sign.ac.uk/methodology/checklists available for all study types includes helpful notes explaining factors to consider whilst appraising the study GRADE (www.gradeworkinggroup.org) ‘a common, sensible and transparent approach to grading quality of evidence and strength of recommendations’

RCT checklist Does the study address an appropriate and clearly focused question? Is the assignment of subjects to treatment groups randomised? Is an adequate concealment method used? Are subjects and investigators kept ‘blind’ to treatment allocation? Were the treatment and control groups similar at the start of the trial?

RCT checklist Is the only difference between the groups the treatment under investigation? Are all relevant outcomes measured in a standard, valid and reliable way? Are drop-outs reported and analysed appropriately? Are all the subjects analysed in the groups to which they were randomly allocated (intention to treat analysis)?

RCT checklist The more ‘Yes’ answers, the more likely that the study is good quality and can be trusted BUT not all questions are of equal ‘weight’

Things to consider How well was the study done to minimise the risk of bias or confounding? Taking into account clinical considerations, your evaluation of the methodology used, and the statistical power of the study, are you certain that the overall effect is due to the study intervention? Are the results of this study directly applicable to your patients?

Thank you Questions?