Critical appraisal of research Sarah Lawson


Critical appraisal of research Sarah Lawson sarah.lawson@kcl.ac.uk

Learning objectives
- Understand the principles of critical appraisal and its role in evidence-based practice
- Understand the different levels of quantitative evidence
- Be able to appraise research and judge its validity
- Be able to assess the relevance of research to your own work
- Know about resources available to help you critically appraise research
- Be able to critically appraise quantitative research as a group

What is evidence-based practice? Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, together with patients' values and expectations.

The evidence-based practice (EBP) process:
1. A decision or question arises from a patient's care.
2. Formulate a focused question.
3. Search for the best evidence.
4. Appraise the evidence.
5. Apply the evidence.

EBP in practice: depending upon speciality, between 50 and 80 per cent of all 'medical activity' is evidence based. www.shef.ac.uk/scharr/ir.percent.html

Why does evidence from research fail to get into practice?
- 75% cannot understand the statistics
- 70% cannot critically appraise a research paper
Using research for practice: a UK experience of the BARRIERS scale. Dunn, V. et al.

What is critical appraisal?
- Weighing up evidence to see how useful it is in decision making
- A balanced assessment of the benefits and strengths of research against its flaws and weaknesses
- Assessing both the research process and the results
- A skill that needs to be practised by all health professionals as part of their work

Why do we need to critically appraise? “It usually comes as a surprise to students to learn that some (the purists would say 99% of) published articles belong in the bin and should not be used to inform practice” Greenhalgh 2001

Why do we need to critically appraise?
Studies which don't report their methods fully overstate the benefits of treatments by around 25%. Khan et al., Arch Intern Med, 1996; Maher et al., Lancet, 1998.
Studies funded by a pharmaceutical company were found to be four times as likely as independent studies to give results favourable to the company. Lexchin et al., BMJ, 2003.

Sources of bias
- poor control group / control dosage
- surrogate outcomes
- ignoring drop-outs
- modifying trial length
- misusing baseline statistics
- statistics overload

How do I appraise? Mostly common sense. You don’t have to be a statistical expert! Checklists help you focus on the most important aspects of the article. Different checklists for different types of research. Will help you decide if research is valid and relevant.

Research methods
Quantitative: uses numbers to describe and analyse; useful for finding precise answers to defined questions.
Qualitative: uses words to describe and analyse; useful for finding detailed information about people's perceptions and attitudes.

Levels of quantitative evidence (highest to lowest):
1. Systematic reviews
2. Randomised controlled trials
3. Prospective studies (cohort studies)
4. Retrospective studies (case-control)
5. Case series and reports
NB quality assessment!

Systematic reviews Thorough search of literature carried out. All RCTs (or other studies) on a similar subject synthesised and summarised. Meta-analysis to combine statistical findings of similar studies.

Randomised controlled trials (RCTs) New treatment versus normal treatment or placebo. Participants are randomised. If possible, the trial should be blinded. Intention-to-treat analysis.

Cohort studies Prospective: groups (cohorts) with differing exposure to a risk factor are followed over a period of time to compare rates of development of an outcome of interest. Watch for confounding factors and bias.

Case-control studies Retrospective Subjects confirmed with a disease (cases) are compared with non-diseased subjects (controls) in relation to possible past exposure to a risk factor. Confounding factors and bias

Qualitative research
- Complements quantitative research
- Natural settings
- No gold standard; grounded theory is common
- Inductive and iterative
- Purposive, not statistical, sampling
- Triangulation: varied research methods

Appraising primary research
Are the results valid? Is the research question focused (and original)? Was the method appropriate? How was it conducted?
What are the results? How were the data collected and analysed? Are they significant?
Will the results help my work with patients?

Appraising RCTs
- Recruitment and sample size
- Randomisation method and controls
- Confounding factors
- Blinding
- Follow-up (flow diagram)
- Intention-to-treat analysis
- Adjusting for multiple analyses

“As a non-statistician I tend only to look for 3 numbers in the methods:
- size of sample
- duration of follow-up
- completeness of follow-up”
Greenhalgh 2010

Sample size calculation
Based on the primary outcome measure. Depends upon various factors:
- significance level (usually 5%)
- power (minimum 80%)
- variability of the measure (e.g. standard deviation)
- level of clinically significant difference
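The way these factors interact can be sketched with the standard normal-approximation formula for comparing two means; the function name and example values below are illustrative, not taken from the lecture.

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided comparison of two means.

    delta: smallest clinically significant difference
    sd:    expected standard deviation of the outcome measure
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for a 5% significance level
    z_power = z(power)           # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_power) * sd / delta) ** 2
    return math.ceil(n)

# Detecting a 5-point difference with SD = 10 needs 63 per group.
print(sample_size_per_group(delta=5, sd=10))  # → 63
```

Note how the sample size grows as the clinically significant difference shrinks or the significance level is tightened, which is exactly why these factors must be fixed before the trial starts.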

Appraising cohort / case-control studies
- Recruitment: selection bias
- Exposure: measurement, recall or classification bias
- Confounding factors and adjustment
- Time-frames
- Plausibility

Appraising qualitative research
- Sampling method and theoretical saturation
- Triangulation of methods
- Independent analysis
- Reflexivity
- Respondent validation
- Plausible interpretation
Analysis should be done using explicit, systematic, justified and reproducible methods.

Appraising systematic reviews Was a thorough literature search carried out? How was the quality of the included studies assessed? If results were combined, was this appropriate?

Publication bias
Papers with more ‘interesting’ results are more likely to be:
- submitted for publication
- accepted for publication
- published in a major journal
- published in the English language

Publication bias Among all SSRI trials registered with the FDA, 37 studies were assessed by the FDA as positive; 36 of these were published. Of the studies with negative or inconclusive results, 22 were not published and 11 were written up as positive. Turner et al., NEJM, 2008.

Results How was data collected? Which statistical analyses were used? How precise are the results? How are the results presented?

Statistical tests
The type of test used depends upon:
- type of data (categorical, continuous etc.)
- one- or two-tailed (or -sided) significance
- independence and number of samples
- number of observations and variables
- distribution of data, e.g. normal

Estimates of effect The observed relationship between an intervention and an outcome. Express results in terms of likely harms or benefits, using relative and absolute measures, e.g. odds ratios, absolute and relative risks/benefits, mean difference, number needed to treat (NNT).
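As a rough sketch of how these measures relate (the function and variable names are mine, not from the slides), all of them can be computed from the event counts in the two groups:

```python
def effect_measures(events_treat, n_treat, events_ctrl, n_ctrl):
    """Common effect estimates from event counts in treatment and control groups."""
    risk_t = events_treat / n_treat
    risk_c = events_ctrl / n_ctrl
    arr = risk_c - risk_t                            # absolute risk reduction
    rr = risk_t / risk_c                             # relative risk
    odds_t = events_treat / (n_treat - events_treat)
    odds_c = events_ctrl / (n_ctrl - events_ctrl)
    odds_ratio = odds_t / odds_c
    nnt = 1 / arr                                    # number needed to treat
    return {"ARR": arr, "RR": rr, "OR": odds_ratio, "NNT": nnt}

# 10/100 events on treatment vs 20/100 on control:
m = effect_measures(10, 100, 20, 100)
# ARR = 0.1, RR = 0.5, OR ≈ 0.44, NNT = 10
```

The same data give a dramatic-sounding relative measure (risk halved) and a more sobering absolute one (ten patients treated per event prevented), which is why appraisers should look for both.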

Confidence intervals (CI) The range of values within which the “true” value in the population is likely to lie. 95% CI: we are 95% confident that the population value lies within those limits. A wide CI means a less precise estimate of effect.

CIs and statistical significance
- Quoted alongside an absolute difference, a CI that includes zero is statistically non-significant.
- Quoted alongside a ratio or relative difference, a CI that includes one is statistically non-significant.
If statistically significant: less than zero (or one) = less of the outcome in the treatment group; more than zero (or one) = more of the outcome. BUT: is the outcome good or bad?

Cardiac deaths - less = good

Smoking cessation – more = good

P values P stands for probability: how likely is it that a result at least this extreme would have occurred by chance alone? A P value of less than 0.05 means that likelihood is less than 1 in 20, conventionally called “statistically significant”. P values and confidence intervals should be consistent, but confidence intervals are more informative because they provide a range of plausible values.
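The consistency between P values and confidence intervals can be seen by running a standard two-proportion z-test on the same kind of data used above; this sketch (names and example numbers mine) is a textbook test, not something specific to the lecture.

```python
import math
from statistics import NormalDist

def two_proportion_p_value(events_t, n_t, events_c, n_c):
    """Two-sided P value for the difference between two proportions (z-test)."""
    p_t, p_c = events_t / n_t, events_c / n_c
    pooled = (events_t + events_c) / (n_t + n_c)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 10/100 vs 20/100 gives P just under 0.05, agreeing with a 95% CI
# for the risk difference that only just excludes zero.
print(round(two_proportion_p_value(10, 100, 20, 100), 3))
```

A P value just under 0.05 and a CI just excluding zero are two views of the same borderline result; the CI additionally shows how wide the range of plausible effects is.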

Statistical significance Remember:
- statistical significance does not necessarily equal clinical significance
- statistical non-significance may be due to small sample size; check the CI upper and lower limits: might a bigger sample give a statistically significant difference?

Are results relevant?
- Can I apply these results to my own practice?
- Is my local setting significantly different?
- Are these findings applicable to my patients?
- Are the findings specific/detailed enough to be applied?
- Were all outcomes considered?

The good news! Some resources have already been critically appraised for you. An increasing number of guidelines and summaries of appraised evidence are available on the internet.

Summary Search first for resources that have already been appraised, e.g. guidelines, Cochrane systematic reviews. Then search down through the levels of evidence, e.g. systematic reviews, RCTs. Use checklists to appraise research. Ask how the results can be put into practice.