CRITICALLY APPRAISING EVIDENCE Lisa Broughton, PhD, RN, CCRN.


1 CRITICALLY APPRAISING EVIDENCE Lisa Broughton, PhD, RN, CCRN

2 IMPORTANCE
- Critical appraisal of the literature is essential for distinguishing reliable evidence and deciding whether new evidence should be incorporated into practice.
- To make the best decisions about practice, you must know how to determine whether a study provides valid, reliable evidence.
- Moving research into practice requires rigorous approaches for evaluating the literature.
- How certain are you that the clinical action will produce the intended outcome?
- The goal of critically appraising the literature is to separate the good parts of a study from the bad.
- Since there are no perfect studies, ask: do the good and useful elements outweigh the study's pitfalls?

3 EVIDENCE HIERARCHY
- The higher a study sits on the evidence pyramid, the more likely it is that the intervention produced the outcome.

4 STRENGTH OF THE EVIDENCE
Rating the strength of the evidence on a particular topic requires evaluation of:
- Quality: How rigorous are the study designs?
- Quantity: How many studies have evaluated the topic? What are the sample sizes of those studies?
- Consistency: What are the results of the studies? Are the findings consistent or inconsistent with similar studies?

5 CRITICALLY EVALUATING QUANTITATIVE STUDIES
- Purpose and background
  - Does the author clearly identify the purpose of the study? Does the literature review clearly identify a gap in the literature that this study can fill?
- Sampling
  - Is the sample size appropriate? Is it large enough to minimize the role of chance, so that the intervention is plausibly what caused the outcomes? Is the sample so large that even a small change demonstrates significant results?
  - Type I and Type II errors
  - Has a power analysis been performed?
- Congruency between the research questions being asked and the methodology chosen
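The power-analysis question above can be made concrete with a quick calculation. This is a minimal sketch, not part of the presentation, using the standard normal-approximation formula for comparing two group means; the effect size, alpha, and power values are illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-group comparison of means,
    using the normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()                      # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)    # two-sided critical value (1.96 for alpha=.05)
    z_beta = z.inv_cdf(power)             # value corresponding to the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A medium effect (Cohen's d = 0.5) at alpha = .05 and 80% power
# requires roughly 63 participants per group by this approximation.
print(n_per_group(0.5))
```

An underpowered study risks a Type II error (missing a real effect), while an overly large sample can make a clinically trivial difference statistically significant — the two failure modes the slide asks appraisers to watch for.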

6 CRITICALLY EVALUATING QUANTITATIVE STUDIES
- Reliability and validity of measurement tools
  - Research reports should discuss the reliability and validity of the measurement tools used in the study.
  - Does the instrument measure the concept it is intended to measure (validity)? Face, content, criterion-related, and construct validity.
  - Does the instrument give the same results consistently over time (reliability)? Test-retest reliability, Cronbach's alpha, inter-rater reliability (reported on a 0-1.0 scale).
- Is the statistical test chosen for analysis appropriate to the research design?
- Missing data, selective reporting, outliers
- How congruent are the results with previous, similar studies?
- Did the researchers address the study's weaknesses when reporting findings?
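Cronbach's alpha, mentioned above, can be computed directly from its definition. A minimal sketch with made-up Likert-type item scores (the data are illustrative, not drawn from any study):

```python
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is a list of items, each a list of scores across the same respondents."""
    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Three scale items answered by four respondents (hypothetical data).
items = [
    [3, 4, 3, 5],
    [4, 5, 4, 5],
    [3, 5, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # internal consistency on the 0-1.0 scale
```

Values of roughly 0.70 and above are commonly taken as acceptable internal consistency, though the appropriate threshold depends on the instrument's purpose.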

7 CRITICALLY APPRAISING QUANTITATIVE STUDIES
- Are the study results valid?
  - Were the results obtained by sound research? Was the study conducted properly (internal validity)?
- Bias: anything that distorts findings
  - Selection bias and randomization
  - Knowledge of the intervention and blinding
  - Error in data collection
- Confounding variables
  - The relationship between two variables is actually due to a third, known or unknown, variable.
  - Did the researchers address the possibility of confounding variables? Did they attempt to control for these variables in the study?

8 CRITICALLY APPRAISING QUANTITATIVE STUDIES
- Are the study results reliable?
- What is the size of the intervention's effect (effect size), and how was it estimated?
- Are the results statistically and clinically significant?
  - A p value is calculated that indicates the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. It is highly dependent on sample size.
  - Are the results of actual clinical significance?
  - Confidence intervals (CI) indicate the range within which the true effect is expected to lie, with a stated level of confidence (usually 95%). Generally, the narrower the CI, the better: the margin of error is smaller.
- Will the results assist in caring for patients? Are the findings generalizable (external validity)?
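The confidence-interval idea above can be illustrated with a short calculation. This is a minimal sketch using the normal approximation (mean ± z × standard error); the pain scores are invented for illustration, and it also shows why a larger sample yields a narrower interval.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def ci_for_mean(data: list[float], level: float = 0.95) -> tuple[float, float]:
    """Normal-approximation confidence interval for a sample mean."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for a 95% CI
    se = stdev(data) / sqrt(len(data))         # standard error of the mean
    m = mean(data)
    return (m - z * se, m + z * se)

# Hypothetical pain scores from ten patients.
scores = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
low, high = ci_for_mean(scores)
print(f"n = 10: ({low:.2f}, {high:.2f})")

# The same spread of scores with four times the sample size
# produces a narrower interval: the margin of error shrinks.
low4, high4 = ci_for_mean(scores * 4)
print(f"n = 40: ({low4:.2f}, {high4:.2f})")
```

In practice, small samples call for a t-based interval rather than the z approximation used here, which widens the interval slightly; a well-reported study states which was used.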

9 CRITICALLY EVALUATING QUALITATIVE STUDIES
Evaluating trustworthiness:
- Credibility
  - Accuracy and validity of the data are confirmed through: documentation of actions, opinions, and biases (reflexivity); purposive sampling; data saturation; triangulation; and validation of data through member checks and peer debriefing.
- Transferability
  - Demonstrates that the research findings are meaningful to other people in similar situations. Can the implications be applied to a larger population?
  - Shown by how well the researcher explains how informants feel about and make sense of their experiences, and how effectively the researcher communicates what lessons can be learned from the data.

10 CRITICALLY EVALUATING QUALITATIVE STUDIES
Evaluating trustworthiness (continued):
- Dependability
  - The researcher carefully documents how conclusions were reached (an audit trail).
  - Could another researcher in a similar situation expect to obtain similar findings?
- Confirmability
  - Providing substantiation that findings and interpretations are grounded in the data.
  - Links between the researcher's assertions and the data are clear.
  - The researcher provides ample evidence of informants' statements, and the decision trail is clear.
Evaluating authenticity:
- Does the researcher give the reader a deeper understanding of the studied phenomenon?

