Introduction to Critical Appraisal: Quantitative Research


Introduction to Critical Appraisal: Quantitative Research. South East London Outreach Librarians, January 2008

Learning objectives
- Understand the principles of critical appraisal and its role in evidence-based practice
- Be able to appraise quantitative research and judge its validity
- Be able to assess the relevance of published research to your own work

Daily Mail exercise. Would you treat a patient based on this article? Why? Consider its validity, its reliability, and whether it is transferable to practice.

What is evidence-based practice? Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, together with patients' values and expectations.

The evidence-based practice process:
1. Decision or question arising from a patient's care.
2. Formulate a focused question.
3. Search for the best evidence.
4. Appraise the evidence.
5. Apply the evidence.
This is the step-by-step process. We will be concentrating on step 4 today. Critical appraisal is very much part of patient care.

Why does evidence from research fail to get into practice?
- 75% cannot understand the statistics
- 70% cannot critically appraise a research paper
(Dunn, V. et al. Using research for practice: a UK experience of the BARRIERS scale.)

What is critical appraisal?
- Weighing up evidence to see how useful it is in decision making
- A balanced assessment of the benefits and strengths of research against its flaws and weaknesses
- Assessing both the research process and the results
- A skill that needs to be practised by all health professionals as part of their work

What critical appraisal is NOT
- A negative dismissal of any piece of research
- An assessment of results alone
- Based entirely on statistical analysis
- Something only to be undertaken by researchers or statisticians

Why do we need to critically appraise? "It usually comes as a surprise to students to learn that some (the purists would say 99% of) published articles belong in the bin and should not be used to inform practice" (Greenhalgh 2001). Finding that 1% saves time and avoids information overload.

How do I appraise? It is mostly common sense; you don't have to be a statistical expert! Checklists help you focus on the most important aspects of the article and will help you decide whether the research is valid and relevant. There are different checklists for different types of research, available from the CASP website. Today we are looking at a systematic review, so I will focus on the points to consider for this type of research. Focus on the Methods and the Results sections of a paper. The abstract can be useful, but be aware that it may not give a full picture.

Research methods
- Quantitative: uses numbers to describe and analyse; useful for finding precise answers to defined questions.
- Qualitative: uses words to describe and analyse; useful for finding detailed information about people's perceptions and attitudes.

Levels of quantitative evidence (in order of decreasing scientific validity):
1. Systematic reviews.
2. Randomised controlled trials.
3. Prospective studies (cohort studies).
4. Retrospective studies (case control).
5. Case series and reports.
6. Opinions of respected authorities.
Before critically appraising an individual paper you need to select the best type of research to read. If you are looking for evidence to support your practice, you would first try to find a systematic review, followed by randomised controlled trials. If no systematic review or RCT has been carried out, follow the trail to the next best form of external evidence and work from there.

Systematic reviews
- A thorough search of the literature is carried out.
- All RCTs (or other studies) on a similar subject are synthesised and summarised.
- A meta-analysis combines the statistical findings of similar studies.
Not necessarily the same as a review article or a literature review: it has to be systematic.
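The pooling step of a meta-analysis can be sketched with a minimal fixed-effect (inverse-variance) calculation. This is an illustrative sketch, not the method of any particular review; the three studies' log odds ratios and standard errors are invented:

```python
import math

def pool_fixed_effect(log_ors, std_errs):
    """Fixed-effect (inverse-variance) pooling of study log odds ratios.

    Each study is weighted by 1/SE^2, so larger, more precise studies
    contribute more to the pooled estimate.
    """
    weights = [1 / se ** 2 for se in std_errs]
    pooled = sum(w * lor for w, lor in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    # 95% confidence interval on the log-odds scale
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Three hypothetical studies: log odds ratios and their standard errors
pooled, (lo, hi) = pool_fixed_effect([-0.5, -0.3, -0.7], [0.2, 0.25, 0.4])
print(math.exp(pooled))  # pooled odds ratio, back on the odds-ratio scale
```

Because the pooled interval sits left of the line of no effect here, the sketch would plot as a diamond "favouring treatment" on a forest plot.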

Randomised Controlled Trials (RCTs)
- Normal treatment or placebo versus new treatment.
- Participants are randomised.
- If possible, should be double-blinded.
- Intention-to-treat analysis.
What are the main characteristics of an RCT?

Cohort studies
- Prospective: groups (cohorts) with and without exposure to a risk factor are followed over a period of time.
- The rates of development of an outcome of interest are compared between groups.
- Watch for confounding factors and bias.

Case control studies
- Retrospective: subjects confirmed as having a disease (cases) are compared with non-diseased subjects (controls) in relation to possible past exposure to a risk factor.
- Watch for confounding factors and bias.

Appraising original research
- Are the results valid? Is the research question focused? Was the method appropriate? How was the study conducted, e.g. randomisation, blinding, recruitment and follow-up?
- What are the results? How were the data collected and analysed? Are they significant?
- Will the results help my work with patients?

Appraising systematic reviews. In addition to the above: was a thorough literature search carried out? Beware of publication bias: papers with more 'interesting' results are more likely to be:
- Submitted for publication
- Accepted for publication
- Published in a major journal
- Published in the English language
These are the main points to consider. Bear in mind that you are not trying to pull the paper to pieces; you are just asking "is the article good enough?" It can't be perfect.

Reviews in general medical journals: a study of 50 reviews in 4 major journals, 1985-6, found no statement of methods and inappropriate summaries. "Current systematic reviews do not routinely use scientific methods to identify, assess and synthesise information" (Mulrow, 1987).

Is the research question focused? The main components of a focused question:
- Patient (e.g. child)
- Intervention (e.g. MMR vaccine)
- Comparison (e.g. single vaccines)
- Outcome (e.g. autism)
If the research question is not focused, the article will not be able to answer your clinical question, so there is no point in reading it. A comparison is not always required.

Are results significant? How were the data collected? Which statistical analyses were used? How precise are the results? How are they presented? These are some of the aspects that the checklist will help you to look at. You don't have to be a statistical expert! Be wary of complicated statistical analyses ("data-torturing"): repeated calculations could eventually prove that black is white, and could be used to get the results the authors wanted to find. The basic analyses should always have been carried out and presented, e.g.:
- Confidence intervals and/or p values
- Meta-analyses (possibly)
- Odds ratios

Intention to treat analyses Analysing people, at the end of the trial, in the groups to which they were randomised, even if they did not receive the intended intervention

Statistical analyses
- Odds: the likelihood of something happening vs the likelihood of it not happening. Odds ratios compare the odds in one group with the odds in another, alongside absolute and relative risks/benefits.
- Numbers needed to treat (NNT): the number of people you would need to treat to see one additional occurrence of a specific beneficial outcome.
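A minimal sketch of how odds, an odds ratio and a relative risk are computed from a 2x2 table; the counts below are invented for illustration:

```python
# Hypothetical 2x2 table: events / non-events in each group
treat_events, treat_no_events = 10, 90
ctrl_events, ctrl_no_events = 20, 80

# Odds: likelihood of the event happening vs not happening
odds_treat = treat_events / treat_no_events   # 10/90
odds_ctrl = ctrl_events / ctrl_no_events      # 20/80

# Odds ratio: odds in the treatment group relative to the control group
odds_ratio = odds_treat / odds_ctrl

# Relative risk compares event *rates* rather than odds
risk_treat = treat_events / (treat_events + treat_no_events)
risk_ctrl = ctrl_events / (ctrl_events + ctrl_no_events)
relative_risk = risk_treat / risk_ctrl

print(round(odds_ratio, 3), round(relative_risk, 3))
```

Note that the odds ratio (about 0.44 here) and the relative risk (0.5) are close but not identical; they diverge more as the event becomes common.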

Odds ratio diagrams (blobbograms or forest plots)

Odds ratio diagrams
- Line of no effect: no difference between treatment and control group.
- Result (blob) to the left of the line of no effect = less of the outcome in the treatment group.
- Result to the right of the line = more of the outcome.
- BUT: is the outcome good or bad?
Results of systematic reviews are often presented as odds ratio diagrams showing pooled statistical results (meta-analyses), so it is useful to understand how to interpret them.

Cardiac deaths: less = good. What does this result tell us? The outcome is unfavourable (death). Left = less of the outcome with treatment, so good! The diamond at the bottom shows the overall (pooled) result.

Smoking cessation: more = good. So what does this result tell us? The outcome is favourable (smoking cessation). To the right = more of the outcome with treatment, so good! You don't always get the 'Favours treatment' label at the bottom.

Confidence intervals
- Longer confidence interval = wider range = less confidence in the result.
- Shorter confidence interval = narrower range = more confidence.
- An interval that crosses the line of no effect = inconclusive result.
- Studies are weighted according to their size.

Confidence intervals. A big blob and a short line mean a big study, so we can be quite confident of the results. A small blob and a long line mean a small study and less confidence in the results. If the confidence interval crosses the line of no effect, we cannot be sure whether the results favour control or treatment. Sometimes individual studies cross the line but the overall result is conclusive; here the results are inconclusive, so we don't know.
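The point that bigger studies give narrower intervals can be illustrated with a simple 95% confidence interval for an event rate (the Wald approximation); the study sizes below are invented:

```python
import math

def wald_ci(events, n, z=1.96):
    """Approximate 95% confidence interval for an event rate (Wald method)."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

small = wald_ci(6, 20)      # small study, 30% event rate
large = wald_ci(300, 1000)  # large study, same 30% event rate

# The larger study gives a narrower (more confident) interval
width = lambda ci: ci[1] - ci[0]
print(width(small), width(large))
```

Both intervals are centred on the same rate, but the small study's interval is roughly seven times wider: the long line and small blob of the forest plot.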

P values. P stands for probability: how likely is the result to have occurred by chance? A p value of less than 0.05 means the likelihood of the results being due to chance is less than 1 in 20, i.e. "statistically significant". P values and confidence intervals should be consistent. Another thing to look for is the p value: look for a value of less than 0.05, which means it is not likely that the results came about by chance.
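As an illustration, a two-sided p value for the difference between two event rates can be computed with a two-proportion z test using only the standard library; the counts are invented:

```python
import math

def two_proportion_p_value(e1, n1, e2, n2):
    """Two-sided p value for the difference between two proportions (z test)."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # For a standard normal, P(|Z| > z) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# 30/100 events with treatment vs 50/100 with control
p = two_proportion_p_value(30, 100, 50, 100)
print(p < 0.05)  # a "statistically significant" difference at the 5% level
```

Here p is well under 0.05, consistent with a confidence interval for the difference that does not cross zero.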

Number needed to treat. The number of people you would need to treat to see one additional occurrence of a specific beneficial outcome; equivalently, the number of patients that need to be treated to prevent one bad outcome. The NNT can be calculated from the Absolute Risk Reduction (ARR).

Events or outcomes are used for reporting results. The event rate is the proportion of patients in a group in whom the event is observed.

                       Outcome event
                       Yes      No       Total
Experimental group     a        b        a + b
Control group          c        d        c + d
Total                  a + c    b + d    a + b + c + d

CER and EER. The Control Event Rate (CER) is the proportion of patients in the control group in whom an event is observed: CER = c/(c+d). The Experimental Event Rate (EER) is the proportion of patients in the experimental group in whom an event is observed: EER = a/(a+b).

ARR & NNT. The Absolute Risk Reduction (ARR) is the difference between the Control Event Rate and the Experimental Event Rate: ARR = CER - EER. The Number Needed to Treat is its reciprocal: NNT = 1/ARR.

Worked example:

                       Outcome event
                       Yes    No    Total
Experimental group     3      7     10
Control group          5      5     10
Total                  8      12    20

Answers
- What is the event? Lack of concentration and sleeping.
- What is the control event rate (CER)? 5/10 = 0.50
- What is the experimental event rate (EER)? 3/10 = 0.30
- Calculate the absolute risk reduction (ARR): 0.50 - 0.30 = 0.20
- What is the number needed to treat (NNT)? 1.00/0.20 = 5
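These answers can be reproduced directly from the formulas above (the `round` call just avoids floating-point noise in the subtraction):

```python
# 2x2 table from the worked example: event = lack of concentration and sleeping
exp_events, exp_total = 3, 10    # experimental group
ctrl_events, ctrl_total = 5, 10  # control group

cer = ctrl_events / ctrl_total        # Control Event Rate: 5/10
eer = exp_events / exp_total          # Experimental Event Rate: 3/10
arr = round(cer - eer, 2)             # Absolute Risk Reduction: CER - EER
nnt = 1 / arr                         # Number Needed to Treat: 1/ARR

print(cer, eer, arr, nnt)  # 0.5 0.3 0.2 5.0
```

So five patients would need to be treated to see one additional good outcome.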

Are results relevant? Can I apply these results to my own practice? Is my local setting significantly different? Are the findings applicable to my patients, and are they specific and detailed enough to be applied? Were all outcomes considered? This is very important: it is no good spending ages on a paper if you can't apply it to your own patients. Systematic reviews are good in that respect, as they usually include a wide range of patient groups from different RCTs. Ask whether any differences between the study setting and your own setting matter.

The good news! Some resources have already been critically appraised for you, and an increasing number of guidelines and summaries of appraised evidence are available on the internet. The good thing about Cochrane is that if you find one of their systematic reviews, you don't need to worry so much about the reliability or quality of the research: they evaluate it for you, and you can be more confident in trusting the opinions of these experts. There are also other evidence-based research summaries available; see the handout for further information.

Summary
- Search first for resources that have already been appraised, e.g. guidelines and Cochrane systematic reviews; this saves time and effort.
- Search down through the levels of evidence, e.g. systematic reviews, then RCTs.
- Use checklists to appraise research.
- Always be thinking about how good-quality evidence can be put into practice: how should policy or practice change?