CRITICALLY APPRAISING EVIDENCE Lisa Broughton, PhD, RN, CCRN

IMPORTANCE

- Critical appraisal of the literature is essential for distinguishing reliable evidence and deciding whether new evidence should be incorporated into practice.
- To make the best decisions about practice, you must know how to determine whether a study provides valid, reliable evidence.
- Moving research into practice requires rigorous approaches to evaluating the literature.
- How certain are you that the clinical action will produce the intended outcome?
- The goal of critically appraising the literature is to separate the good parts of a study from the bad.
- Since there are no perfect studies, do the good and useful elements outweigh the study's pitfalls?

EVIDENCE HIERARCHY

- The higher a study sits on the evidence pyramid, the more likely it is that the intervention produced the outcome.

STRENGTH OF THE EVIDENCE

Rating the strength of the evidence on a particular topic requires evaluation of:
- Quality: How rigorous are the study designs?
- Quantity: How many studies have evaluated the topic, and what are their sample sizes?
- Consistency: What are the results of the studies? Are findings consistent or inconsistent with similar studies?

CRITICALLY EVALUATING QUANTITATIVE STUDIES

- Purpose and background
  - Does the author clearly identify the purpose of the study? Does the literature review identify a gap that this study can fill?
- Sampling
  - Is the sample size appropriate?
  - Is it large enough to minimize the role of chance and give reasonable confidence that the intervention is what caused the outcomes?
  - Is the sample so large that even a trivially small change produces statistically significant results?
  - Type I and Type II errors
  - Has a power analysis been performed?
- Congruency between the research questions being asked and the methodology chosen
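The power-analysis question above can be made concrete. A minimal sketch in Python, using the standard normal approximation for a two-sided comparison of two group means; the effect size, alpha, and power values are illustrative defaults, not figures from any particular study:

```python
import math
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # guards against Type I error
    z_beta = norm.ppf(power)           # guards against Type II error
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# A medium effect (d = 0.5) at alpha = 0.05 and 80% power needs ~63 per group
print(sample_size_per_group(0.5))
```

Note how a smaller anticipated effect drives the required sample up sharply: halving the effect size roughly quadruples the sample needed per group.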

CRITICALLY EVALUATING QUANTITATIVE STUDIES

- Reliability and validity of measurement tools
  - Research reports should discuss the reliability and validity of the measurement tools used in the study.
  - Does the instrument measure the concept it is intended to measure (validity)? Face, content, criterion-related, and construct validity.
  - Does the instrument give the same results consistently over time (reliability)? Test-retest reliability, Cronbach's alpha, and inter-rater reliability (reported on a 0-1.0 scale).
- Is the statistical test chosen for analysis appropriate to the research design?
- Missing data
- Selective reporting
- Outliers
- How congruent are the results with previous, similar studies?
- Did the researchers address the study's weaknesses when reporting findings?
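Cronbach's alpha, one of the reliability statistics named above, can be computed directly from item-level scores. A minimal sketch in Python with NumPy; the score matrix below is invented purely for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 3 items; stronger inter-item consistency
# pushes alpha toward 1.0
scores = [[4, 4, 5],
          [2, 3, 2],
          [5, 4, 5],
          [3, 3, 3],
          [4, 5, 4]]
print(round(cronbach_alpha(scores), 2))
```

Values of roughly 0.7 or higher are conventionally read as acceptable internal consistency, which is why reports quote alpha on the 0-1.0 scale mentioned above.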

CRITICALLY APPRAISING QUANTITATIVE STUDIES

- Are the study results valid?
  - Were the results obtained by sound research? Was the study conducted properly (internal validity)?
- Bias: anything that distorts findings
  - Selection bias and randomization
  - Knowledge of the intervention and blinding
  - Error in data collection
- Confounding variables
  - An apparent relationship between two variables is actually due to a third, known or unknown, variable.
  - Did the researchers address the possibility of confounding variables? Did they attempt to control for them in the study?

CRITICALLY APPRAISING QUANTITATIVE STUDIES

- Are the study results reliable?
- What is the size of the intervention's effect (effect size), and how was it estimated?
- Are the results statistically and clinically significant?
  - The p value indicates the probability of obtaining results at least as extreme as those observed if the null hypothesis were true; it is highly dependent on sample size.
  - Are the results of actual clinical significance?
  - Confidence intervals (CIs) give the range within which the true effect is expected to lie at a stated level of confidence (usually 95%): readers can be 95% confident that the true value falls within that range. Generally, the narrower the CI, the better, because the margin of error is smaller.
- Will the results assist in caring for patients? Are the findings generalizable (external validity)?
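The relationship between p values and confidence intervals described above can be illustrated with a two-group comparison. A minimal sketch in Python with SciPy; the outcome scores are invented for illustration, and the CI uses a rough normal (z = 1.96) approximation rather than the exact t-based interval:

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for a control and an intervention group
control = np.array([72, 75, 70, 68, 74, 73, 71, 69])
treatment = np.array([78, 80, 76, 74, 79, 77, 75, 81])

# p value: probability of a difference at least this extreme
# if the null hypothesis (no true difference) were correct
t_stat, p_value = stats.ttest_ind(treatment, control)

# Approximate 95% CI for the difference in means
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p_value:.4f}, difference = {diff:.1f}, "
      f"95% CI ({ci_low:.1f}, {ci_high:.1f})")
```

A 95% CI that excludes 0 tells the same story as p < 0.05, but it also conveys the plausible size of the effect, which is what judging clinical significance turns on.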

CRITICALLY EVALUATING QUALITATIVE STUDIES

Evaluating trustworthiness:
- Credibility
  - Accuracy and validity of the data are confirmed through:
    - Documentation of actions, opinions, and biases (reflexivity)
    - Purposive sampling
    - Data saturation
    - Triangulation
    - Validation of data through member checks and peer debriefing
- Transferability
  - Demonstrates that the research findings are meaningful to other people in similar situations. Can the implications be applied to a larger population?
  - Established by how well the researcher explains how informants feel about and make sense of their experiences, and how effectively the researcher communicates the lessons that can be learned from the data.

CRITICALLY EVALUATING QUALITATIVE STUDIES

Evaluating trustworthiness:
- Dependability
  - The researcher carefully documents how conclusions were reached (an audit trail).
  - Could another researcher in a similar situation expect to obtain similar findings?
- Confirmability
  - Substantiation is provided that findings and interpretations are grounded in the data.
  - Links between the researcher's assertions and the data are clear.
  - The researcher provides ample evidence of informants' statements, and the decision trail is clear.

Evaluating authenticity:
- Does the researcher give the reader a deeper understanding of the studied phenomenon?