
1 Diagnostic Tests
Patrick S. Romano, MD, MPH, Professor of Medicine and Pediatrics

2 The Two-by-two Table

            Disease +    Disease -    Total
Test +      TP           FP           TP + FP
Test -      FN           TN           FN + TN
Total       TP + FN      FP + TN      Total

3 The Two-by-two Table (cont)
- True positives (TP): patients with disease who test positive
- False negatives (FN): patients with disease who test negative
- True negatives (TN): patients without disease who test negative
- False positives (FP): patients without disease who test positive

4 Test Characteristics
- Sensitivity = TP/(TP + FN): test accuracy (or probability of correct classification) among patients with disease
- Specificity = TN/(TN + FP): test accuracy (or probability of correct classification) among patients without disease
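
As an illustration (not part of the original slides), here is a minimal Python sketch of these two formulas; the cell counts tp, fp, fn, tn are hypothetical values from a two-by-two table.

def sensitivity(tp: int, fn: int) -> float:
    """Probability of a positive test among patients who truly have the disease."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Probability of a negative test among patients who truly do not have the disease."""
    return tn / (tn + fp)

# Hypothetical two-by-two table counts (illustration only)
tp, fp, fn, tn = 90, 50, 10, 850
print(sensitivity(tp, fn))  # 0.9
print(specificity(tn, fp))  # ~0.944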

5 Test Characteristics (cont)
- Positive predictive value (PV+) = TP/(TP + FP): predictive value of a positive (abnormal) result, or the post-test probability of disease given a positive test
- Negative predictive value (PV-) = TN/(TN + FN): predictive value of a negative (normal) result, or the post-test probability of non-disease given a negative test
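
A matching sketch for the predictive values (again with hypothetical counts, not from the slides); note that these read across the rows of the two-by-two table, whereas sensitivity and specificity read down the columns.

def positive_predictive_value(tp: int, fp: int) -> float:
    """P(disease | positive test)."""
    return tp / (tp + fp)

def negative_predictive_value(tn: int, fn: int) -> float:
    """P(no disease | negative test)."""
    return tn / (tn + fn)

# Same hypothetical counts as above
print(positive_predictive_value(90, 50))   # ~0.64
print(negative_predictive_value(850, 10))  # ~0.99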

6 CAGE Questionnaire
- Have you ever felt you should CUT down on your drinking?
- Have people ANNOYED you by criticizing your drinking?
- Have you ever felt bad or GUILTY about your drinking?
- Have you ever had a drink first thing in the morning to steady your nerves or to get rid of a hangover (EYE opener)?

7 Prevalence of Alcoholism by CAGE Score
[Table: number of patients (n) and number (%) with alcoholism, by number of 'Yes' responses from 0 to 4]

8 Performance Characteristics of CAGE: 3-4 “yes” Responses

9 Performance Characteristics of CAGE: 2-4 “yes” Responses

10 What Affects Sensitivity?
- Choice of cutoff value
- Quality of administration of the test (equipment, technique, reagents, questionnaire)
- Quality of interpretation of the test
- Spectrum of disease (severity distribution): a truncated sample may result from using a test measure to select recipients of the "gold standard" measure
- NOT prevalence

11 What Affects Specificity?
- Choice of cutoff value (sensitivity-specificity tradeoff)
- Quality of administration of the test
- Quality of interpretation of the test
- Spectrum of non-disease: other prevalent diseases may cause false positive values
- NOT prevalence

12 What Affects Predictive Values?
- Sensitivity
- Specificity
- Prevalence

PV+ = (Sensitivity)(Prevalence) / [(Sens)(Prev) + (1 - Spec)(1 - Prev)]
PV- = (Specificity)(1 - Prevalence) / [(Spec)(1 - Prev) + (1 - Sens)(Prev)]
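
A short Python sketch of these two formulas (illustrative only; the sensitivity, specificity, and prevalence values below are assumed), showing how the same test yields very different predictive values at high versus low prevalence, the point illustrated by the CAGE examples on slides 13 and 14.

def ppv(sens: float, spec: float, prev: float) -> float:
    # P(disease | positive test), by Bayes theorem
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens: float, spec: float, prev: float) -> float:
    # P(no disease | negative test), by Bayes theorem
    return (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

# Same hypothetical test (90% sensitive, 90% specific) at two prevalences
print(ppv(0.90, 0.90, 0.30))  # ~0.79 when prevalence is high
print(ppv(0.90, 0.90, 0.05))  # ~0.32 when prevalence is low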

13 Performance Characteristics of CAGE: High Prevalence of Alcoholism

14 Performance Characteristics of CAGE: Low Prevalence of Alcoholism

15 Test Characteristics
- Likelihood ratio (positive) = Sensitivity / (1 - Specificity) = (TP/Disease +) / (FP/Disease -)
- The likelihood of a (true) positive test among patients with disease, relative to the likelihood of a (false) positive test among those without disease
- How much more likely are you to find a positive test result in a person with disease than in a person without disease?

16 Test Characteristics (cont)
- Likelihood ratio (positive) = Sensitivity / (1 - Specificity) = (TP/Disease +) / (FP/Disease -)
- If ODDS = p(event) / [1 - p(event)], then:
- Pre-test odds x Likelihood ratio = Post-test odds
- Prior odds x Likelihood ratio = Posterior odds
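
A brief sketch of the odds form of Bayes theorem in Python (illustrative; the 90%/90% test and the 10% pre-test probability are borrowed from the pregnanediol example on the later slides, not from this one).

def prob_to_odds(p: float) -> float:
    return p / (1 - p)

def odds_to_prob(o: float) -> float:
    return o / (1 + o)

sens, spec, pretest_prob = 0.90, 0.90, 0.10
lr_positive = sens / (1 - spec)                    # likelihood ratio (positive) ~ 9
posttest_odds = prob_to_odds(pretest_prob) * lr_positive
print(odds_to_prob(posttest_odds))                 # 0.5, matching the PV+ computed on slide 26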

17 PSA Performance (ROC) Curve
[Figure: ROC curves plotting Sensitivity (TP/[TP+FN]) against 1-Specificity (FP/[TN+FP]) for PSA, comparing a urologic practice population with community screening]

18 [Figure: ROC-style plot of Sensitivity versus 1-Specificity for PSA cutoffs of 2.5, 5.0, and 10.0 ng/ml, shown by prostate cancer stage (Stage A through Stage D)]
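
A sketch of how a curve like this is generated (the marker values below are entirely hypothetical): sweeping a cutoff over a continuous test result produces one (1 - specificity, sensitivity) point per cutoff.

def roc_points(values_diseased, values_healthy, cutoffs):
    """For each cutoff, call values >= cutoff 'positive' and return (1 - specificity, sensitivity)."""
    points = []
    for c in cutoffs:
        sens = sum(v >= c for v in values_diseased) / len(values_diseased)
        spec = sum(v < c for v in values_healthy) / len(values_healthy)
        points.append((1 - spec, sens))
    return points

diseased = [3.1, 4.8, 6.2, 9.5, 12.0, 15.4]   # hypothetical marker values, patients with disease
healthy = [0.8, 1.2, 2.0, 2.6, 3.3, 4.1]      # hypothetical marker values, patients without disease
for x, y in roc_points(diseased, healthy, [2.5, 5.0, 10.0]):
    print(f"1 - specificity = {x:.2f}, sensitivity = {y:.2f}")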

19 Using Bayes Theorem
Problem: In your study, you are using a diagnostic test of unknown accuracy. A better "gold standard" test is available, but is too expensive or too complicated for you to adopt. How accurate is your classification of patients based on the cheaper test?

20 Using Bayes Theorem (cont)
Solution, Step 1: Review the literature (or check with your instrument supplier or manufacturer) to ascertain the sensitivity and specificity of the measure in previous studies.

21 Using Bayes Theorem (cont)
Solution, Step 2: If possible, do your own "validation." This usually involves applying the gold standard to a subset of your sample and comparing the results with those of the cheaper test. A 5-10% subsample may suffice (depending on sample size).

22 Using Bayes Theorem (cont)
Solution, Step 3: Apply Bayes theorem to calculate the predictive values of positive and negative tests, based on sensitivity, specificity, and prevalence.
Sensitivity = P(Test + | disease)
Specificity = P(Test - | no disease)
Prevalence = prior probability of disease in your sample

23 Using Bayes Theorem (cont)
Solution, Step 3 (cont):
PV+ = (Sensitivity)(Prevalence) / [(Sens)(Prev) + (1 - Spec)(1 - Prev)]
PV- = (Specificity)(1 - Prevalence) / [(Spec)(1 - Prev) + (1 - Sens)(Prev)]

24 Using Bayes Theorem: Example
You are using daily urinary ratios of pregnanediol-3-glucuronide to creatinine, indexed against each patient's baseline value, to identify anovulatory menstrual cycles. The "gold standard" involves serum progesterone determinations, but cannot be applied to a large community-based sample.

25 Using Bayes Theorem: Example (cont)
Cycles with a low ratio are labeled anovulatory. The test has a sensitivity of 90% and a specificity of 90%. In the real world, where only 5-10% of cycles are anovulatory, how often will you misclassify cycles?

26 Using Bayes Theorem: Example (cont)
Assuming 10% prevalence: PV+ = (0.9)(0.1) / [(0.9)(0.1) + (0.1)(0.9)] = 0.50
Assuming 5% prevalence: PV+ = (0.9)(0.05) / [(0.9)(0.05) + (0.1)(0.95)] = 0.32
In other words, 50-68% of all cycles labeled as anovulatory will actually be false positives (i.e., ovulatory).
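
The arithmetic can be checked in a couple of lines of Python (the ppv helper below is an assumed convenience function, not part of the slides):

def ppv(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

print(round(ppv(0.90, 0.90, 0.10), 2))  # 0.5  -> half of the flagged cycles are false positives
print(round(ppv(0.90, 0.90, 0.05), 2))  # 0.32 -> roughly 68% of the flagged cycles are false positives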

27 Thank you!

