Diagnostic Tests
Patrick S. Romano, MD, MPH
Professor of Medicine and Pediatrics


The Two-by-two Table

           Disease +   Disease –
Test +     TP          FP           TP + FP
Test –     FN          TN           FN + TN
           TP + FN     FP + TN      Total

The Two-by-two Table (cont)
True positives: patients with disease who test positive
False negatives: patients with disease who test negative
True negatives: patients without disease who test negative
False positives: patients without disease who test positive

Test Characteristics
Sensitivity: TP/(TP + FN)
 Test accuracy (probability of correct classification) among patients with disease
Specificity: TN/(TN + FP)
 Test accuracy (probability of correct classification) among patients without disease

Test Characteristics (cont)
Positive predictive value: TP/(TP + FP)
 Predictive value of a positive (abnormal) result, or post-test probability of disease given a positive test
Negative predictive value: TN/(TN + FN)
 Predictive value of a negative (normal) result, or post-test probability of non-disease given a negative test
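In code, all four characteristics fall directly out of the two-by-two table. A minimal sketch in Python (the cell counts are illustrative, not from the lecture):

```python
# Two-by-two table cell counts (illustrative values only)
TP, FP = 90, 50    # test positive: with disease / without disease
FN, TN = 10, 850   # test negative: with disease / without disease

sensitivity = TP / (TP + FN)  # accuracy among patients with disease
specificity = TN / (TN + FP)  # accuracy among patients without disease
ppv = TP / (TP + FP)          # post-test probability of disease given a positive test
npv = TN / (TN + FN)          # post-test probability of non-disease given a negative test

print(f"Sensitivity={sensitivity:.3f}, Specificity={specificity:.3f}, "
      f"PPV={ppv:.3f}, NPV={npv:.3f}")
```

Note that sensitivity and specificity are computed down the columns of the table (conditioning on disease status), while the predictive values are computed across the rows (conditioning on the test result).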

CAGE Questionnaire
Have you ever felt you should CUT down on your drinking?
Have people ANNOYED you by criticizing your drinking?
Have you ever felt bad or GUILTY about your drinking?
Have you ever had a drink first thing in the morning to steady your nerves or to get rid of a hangover (EYE opener)?

Prevalence of Alcoholism by CAGE Score
[Table: prevalence of alcoholism (n, %) by number of "yes" responses; cell values not recoverable from the transcript]

Performance Characteristics of CAGE: 3-4 “yes” Responses

Performance Characteristics of CAGE: 2-4 “yes” Responses

What Affects Sensitivity?
 Choice of cutoff value
 Quality of administration of test
 – Equipment, technique, reagents, questionnaire
 Quality of interpretation of test
 Spectrum of disease (severity distribution)
 – A truncated sample may result from using a test measure to select recipients of the "gold standard" measure
 NOT prevalence

What Affects Specificity?
 Choice of cutoff value
 – Sensitivity-specificity tradeoff
 Quality of administration of test
 Quality of interpretation of test
 Spectrum of non-disease
 – Other prevalent diseases may cause false positive values
 NOT prevalence

 Sensitivity  Specificity  Prevalence  Sensitivity  Specificity  Prevalence PV+ = (Sensitivity)(Prevalence) (Sens)(Prev) + (1-Spec)(1-Prev) PV- = (Specificity)(1-Prevalence) (Spec)(1-Prev) + (1-Sens)(Prev) What Affects Predictive Values?

Performance Characteristics of CAGE: High Prevalence of Alcoholism

Performance Characteristics of CAGE: Low Prevalence of Alcoholism

Test Characteristics  Likelihood ratio (positive): = Sensitivity / (1-Specificity) = (TP/Disease +) / (FP/Disease –)  Likelihood of a (true) positive test among patients with disease, relative to the likelihood of a (false) positive test among those without disease  How much more likely are you to find a positive test result in a person with disease than in a person without disease?  Likelihood ratio (positive): = Sensitivity / (1-Specificity) = (TP/Disease +) / (FP/Disease –)  Likelihood of a (true) positive test among patients with disease, relative to the likelihood of a (false) positive test among those without disease  How much more likely are you to find a positive test result in a person with disease than in a person without disease?

Test Characteristics (cont)
 Likelihood ratio (positive):
  = Sensitivity/(1-Specificity)
  = (TP/Disease +)/(FP/Disease –)
If ODDS = p(event)/[1-p(event)], then:
 Pre-test odds x Likelihood ratio = Post-test odds
 Prior odds x Likelihood ratio = Posterior odds
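The odds form of Bayes' theorem above can be checked numerically. A sketch with illustrative numbers (sensitivity 0.90, specificity 0.90, pre-test probability 0.10; none of these values are prescribed by the lecture):

```python
def prob_to_odds(p: float) -> float:
    return p / (1 - p)

def odds_to_prob(o: float) -> float:
    return o / (1 + o)

sens, spec, pretest_prob = 0.90, 0.90, 0.10

lr_pos = sens / (1 - spec)                           # LR+ = 0.90 / 0.10 = 9
posttest_odds = prob_to_odds(pretest_prob) * lr_pos  # (1/9) x 9 = 1 (even odds)
posttest_prob = odds_to_prob(posttest_odds)          # back to a probability: 0.50

print(f"LR+={lr_pos:.1f}, post-test probability={posttest_prob:.2f}")
```

The same post-test probability falls out of applying Bayes' theorem directly (it is the PV+), but the odds form makes sequential testing easy: each independent test result multiplies the running odds by its own likelihood ratio.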

PSA Performance (ROC) Curve
[Figure: ROC curve for PSA; y-axis Sensitivity (TP/[TP+FN]), x-axis 1-Specificity (FP/[TN+FP]); separate operating points marked for urologic practice and community screening]

[Figure: sensitivity and 1-specificity at PSA cutoffs of 2.5, 5.0, and 10.0 ng/ml, shown by tumor stage A through D; values not recoverable from the transcript]

Using Bayes' Theorem
Problem: In your study, you are using a diagnostic test of unknown accuracy. A better "gold standard" test is available, but it is too expensive or too complicated for you to adopt. How accurate is your classification of patients based on the cheaper test?

Using Bayes' Theorem (cont)
Solution, Step 1: Review the literature (or check with your instrument supplier or manufacturer) to ascertain the sensitivity and specificity of the measure in previous studies.

Using Bayes' Theorem (cont)
Solution, Step 2: If possible, do your own "validation." This usually involves applying the gold standard to a subset of your sample and comparing the results with those of the cheaper test. A 5–10% subsample may suffice (depending on sample size).

Using Bayes' Theorem (cont)
Solution, Step 3: Apply Bayes' theorem to calculate the predictive values of positive and negative tests, based on sensitivity, specificity, and prevalence.
Sensitivity = P(Test + | disease)
Specificity = P(Test – | no disease)
Prevalence = prior probability of disease in your sample

Using Bayes' Theorem (cont)
Solution, Step 3 (cont):
PV+ = (Sensitivity)(Prevalence) / [(Sens)(Prev) + (1-Spec)(1-Prev)]
PV– = (Specificity)(1-Prevalence) / [(Spec)(1-Prev) + (1-Sens)(Prev)]

Using Bayes' Theorem: Example
You are using daily urinary ratios of pregnanediol-3-glucuronide to creatinine, indexed against each patient's baseline value, to identify anovulatory menstrual cycles. The "gold standard" involves serum progesterone determinations, but cannot be applied to a large community-based sample.

Using Bayes' Theorem: Example (cont)
Cycles with a low ratio are labeled anovulatory. The test has a sensitivity of 90% and a specificity of 90%. In the real world, where only 5–10% of cycles are anovulatory, how often will you misclassify cycles?

Using Bayes' Theorem: Example (cont)
Assuming 10% prevalence: PV+ = (0.9)(0.1)/[(0.9)(0.1) + (0.1)(0.9)] = 0.50
Assuming 5% prevalence: PV+ = (0.9)(0.05)/[(0.9)(0.05) + (0.1)(0.95)] = 0.32
In other words, 50–68% of all cycles labeled as anovulatory will actually be false positives (i.e., ovulatory).
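The two results above can be reproduced with the PV+ formula from the preceding slides. A quick check in Python (the helper function name is illustrative):

```python
def pv_pos(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# Example from the lecture: sensitivity 90%, specificity 90%
for prev in (0.10, 0.05):
    pv = pv_pos(sens=0.90, spec=0.90, prev=prev)
    print(f"prevalence={prev:.0%}: PV+={pv:.2f}, "
          f"false-positive fraction among positives={1 - pv:.2f}")
```

Running this confirms the slide: PV+ is 0.50 at 10% prevalence and 0.32 at 5% prevalence, so 50% and 68% of positive labels, respectively, are false positives.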

Thank you!