Verification of Performance Specifications

1 Verification of Performance Specifications
An Advanced View of Method Validation Version 5.0, August 2012 Additional materials supplied for Advanced Assay Validation modules

2 Objectives Identify test classifications
Define what each validation experiment evaluates for testing methods Discuss what is recommended to perform each of the validation experiments for testing methods Recognize how to evaluate data obtained from each of the validation experiments

3 Pre-Assessment Question #1
A rapid Human Immunodeficiency Virus (HIV) test would likely be classified as a: High complexity, modified assay Moderate complexity, unmodified assay Food and Drug Administration (FDA)-approved, modified assay Waived, FDA-approved, unmodified assay Remember to pick the best answer. The correct answer is: D. “Waived, FDA-approved, unmodified assay”

4 Pre-Assessment Question #2
The precision of a test method gives information related to the method’s: Systematic error Comparison of results to a reference method Reproducibility Likelihood of being affected by hemolysis, lipemia and icterus Both A and B Remember to pick the best answer. The correct answer is: C. “Reproducibility”

5 Pre-Assessment Question #3
When transferring reference intervals using 20 specimens, what is the minimum number that must fall within the manufacturer’s reference intervals? 20 18 16 15 Remember to pick the best answer. The correct answer is: B. “18”

6 Pre-Assessment Question #4
Which linear regression equation component gives information regarding constant bias? y x m (slope) b (intercept) Remember to pick the best answer. The correct answer is: D. “b (intercept)”

7 Selecting a Method Evaluate diagnostic tests
Characteristics of testing methods References: Technical literature and manufacturer’s information Select method of analysis Validate method performance Implement method Perform tests with appropriate Quality Control (QC) and External Quality Assurance (EQA) Characteristics of testing methods Applications: Types of specimens, sample volume required, turnaround time, workload, space required, etc. Methodology: Sensitivity and specificity, reportable range, etc. A number of factors are involved in the selection of methods and equipment required for testing (cost, space, access to local equipment support, etc.). Important to review analyzers offered by different companies – do they meet the needs of your lab? Need to validate the assay before you begin to use it to report patient test results. Need to verify the manufacturer’s claims and validate the method’s performance.

8 Method Validation What is method validation? Why must we validate?
When should we validate? What is Method Validation? Method validation and/or verification is the process by which a method is determined to be fit for purpose and intended use. Although method validation/verification are often used interchangeably, validation is usually performed on in-house and/or modified methods, while verification is taking a marketed/unmodified assay and verifying its performance. Why is it important? Different testing environment, need to demonstrate that the test method performs in your lab environment as the manufacturer states it should Need to prove to yourself that the results reported by the manufacturer are reliable. When? Initially, before releasing patient results, and after any manufacturer changes/modifications and/or movement of the equipment. What should we validate?

9 Method Validation (cont’d)
Why is validation important? Division of Acquired Immunodeficiency Syndrome (DAIDS) requirement How important is it that the results produced by the testing method are reliable? Shouldn’t the laboratory know the level of performance of an adopted test method? Validation is an FDA requirement under Investigational New Drug (IND).

10 Tests to Validate Waived Non-waived Unmodified FDA-approved
Waived tests are approved by the FDA for home use and by definition are simple to perform (e.g., pregnancy test) – do not require validation. Non-waived tests include moderately and highly complex tests (CLIA categorizes these by the degree of operator skill, specimen preparation, and interpretation required) – require validation. Degree of validation/verification depends on status of the test method (FDA-approved/Non-FDA-approved/Modified/Unmodified) Unmodified FDA-approved: using as intended by the manufacturer and licensed for use by the FDA Modified or Non-FDA approved: using test kit for indications other than as intended by the manufacturer; not licensed for use by the FDA. Unmodified FDA-approved Modified and/or Non-FDA-approved

11 FDA Approval Resources
Sources for information regarding FDA approval status: vendor publications; FDA website (…Procedures/InVitroDiagnostics/LabTest/ucm htm).

12 Skill Check What would you consider to be the complexity, per Clinical Laboratory Improvement Amendments (CLIA), of the glucose assay in the workbook? Waived Moderate High Remember to pick the best answer. The correct answer is: B. “Moderate” Moderate: No special pre-treatment steps recommended, performed on analyzer, very little or no interpretation required.

13 Skill Check What would you consider to be the complexity of a rapid urine pregnancy assay? Waived Moderate High Remember to pick the best answer. The correct answer is: A. “Waived” Waived: Simple to perform, very little chance of error, can be performed outside of the lab by non-clinical personnel.

14 Skill Check What would you consider to be the complexity of performing a manual white cell differential using a stained whole blood smear? Waived Moderate High Remember to pick the best answer. The correct answer is: C. “High” High Complexity: Some degree of specimen preparation/pretreatment, interpretation/identification required.

15 Method Validation Before you begin:
Be sure you are familiar with the test method before starting Know what to expect from the method (package insert, discussions with technical assistance, and field service representatives) Do not include results outside of stated reportable ranges Predict your findings; establish limits/evaluation criteria

16 Terms for Discussion: Central Tendency and Dispersion
Central Tendency – describes the way in which quantitative data tend to cluster around some value; If you run specimens again and again, results have a tendency to go to an average (mean) Dispersion – spread of results

17 Terms for Discussion (cont’d)
(Chart: results plotted by run number – Values on the Y-axis, Run on the X-axis. All results are plotted, even if they don’t look good.)

18 Error in Test Methods
Some error is expected; error must be managed by understanding it, defining specifications of allowable error, and measuring it. All methods have some level of systematic and random error, and that error must be managed in order to report accurate results. The purpose of method validation is error assessment (i.e., random, systematic, and total analytical errors). During method validation a series of experiments are performed to estimate certain types of analytical errors: Linearity experiments determine reportable range. Replication experiments estimate precision or random error. Comparison of methods experiments estimate accuracy or systematic error. Interference experiments estimate constant and proportional systematic errors (or analytical specificity). Detection limit experiments characterize analytical sensitivity. We will review all of these concepts shortly.

19 Total Error of Testing System
Total Allowable Error: CLIA guidelines (per analyte) and other guidelines. Systematic Error + Random Error = Total Error. Total Allowable Error: For a given test or assay, what we should expect to see when you combine systematic and random error. CLIA guidelines have several indicators that can be used to help you determine your Total Allowable Error for a given test/assay.

20 Error Assessment Random Error (RE) Systematic Error (SE) Total Error
In one direction, cause results to be high or low In either direction, unpredictable Combined effect Here we discuss the idea that Systematic + Random error = Total Error, which has the potential to push the result we get that distance from the actual true result. We can draw a picture to represent this. Systematic Error: Tends towards one direction (either +/- from true value); size is stable and consistent; appears every time you perform the test. Random Error: By nature is random; tends towards either direction (can add to or deduct from the true value); size is unpredictable. An average amount of random error occurs sporadically; each specimen going through the test will be affected to varying degrees. CLIA guidelines available for estimation of Total Error; may not always be available for your particular assay. David Rhoads – the most we should ever allow for Total Error is 30%.
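A minimal sketch of how the two error components are commonly combined into a single total-error estimate. The formula and the z multiplier (often 1.65–3, per lab policy) are a widely used convention, not something specified on this slide; the numbers are illustrative only.

```python
# Illustrative only: one common convention is total error = |bias| + z * SD,
# where bias is the systematic error estimate and SD the random error estimate.
def total_error(bias: float, sd: float, z: float = 2.0) -> float:
    """Combine systematic (bias) and random (SD) error into a total error estimate."""
    return abs(bias) + z * sd

# Example: 2 mg/dL bias and 1.5 mg/dL SD -> 5.0 mg/dL estimated total error,
# which would then be compared against the total allowable error for the analyte.
print(total_error(bias=2.0, sd=1.5))  # 5.0
```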

21 Total Error Considerations
Low End Performance Standards Recommendations derived from upper portion of reportable range are more difficult to achieve at lower concentrations Maximum Total Error Allowed Considered to be 30% by David Rhoads, except for amplification methods

22 Systematic and Random Errors
Systematic Error Slope/Proportional error Intercept/Constant error Bias Random Error Mean Standard deviation (SD) Coefficient of variation (CV) Systematic error is represented in terms of… Random error is represented in terms of…

23 Tools for Use Data-Crunching Tools
Statistical calculators, graph paper Spreadsheets with calculations Here is where we describe and/or demonstrate tools that we have. We can use the spreadsheets from the standards at this point…or just freewheel it. Need some data-crunching tools at your disposal (e.g., Excel) Validation software can be purchased (e.g., EP Evaluator, Analyze-It, etc.) Westgard has free tools available for use online. Validation Software (Westgard, Analyze-It, EP Evaluator)

24 How We Will Work Through This Module
One quantitative test taken through the validation process One qualitative method taken through the validation process

25 Elements of Validation
The 6 Elements of Method Validation: Reportable Range, Precision, Accuracy, Reference Intervals, Sensitivity, and Specificity. If FDA-approved/Unmodified – only Reportable Range, Precision, Accuracy and Reference Intervals need to be verified. If Non-FDA approved/Modified – all 6 elements must be performed, including sensitivity and specificity. Correction Factors: Correction factors, if used, must be incorporated into the relevant test procedure and reflected in the appropriate Standard Operating Procedure (SOP). Correction factors represent adjustments made to compensate for constant and proportional error (or bias).

26 Precision
Definition: Reproducibility; gives information related to random error. What is needed: 20 samples of the same material (typically two levels; e.g., glucose at 50 and 300 mg/dL) – standard solutions, control materials, or pools (short term only). Precision is a measure of the reproducibility of the assay and provides information related to random error. How do you verify precision? How we perform the testing: Repeat testing on 20 samples over one day (short term) and over a period of 20 days (long term). Within 20 days we hope to see a good amount of variation in terms of how the test performs and exposure to a variety of environmental conditions.

27 Precision: How We Evaluate the Data
Calculate the following: Mean Standard deviation (SD) Coefficient of Variation (CV) What amount of random error is allowable, based on CLIA criteria? Short term: 0.25 of allowable total error Long term: 0.33 of allowable total error First step, calculate mean, SD (standard deviation), and CV. CV = SD/Mean x 100% (“the great leveler”). Compare information to the manufacturer’s package insert, OR compare to CLIA recommendations for allowable random error: no more than 25% of allowable total error for short term, or 33% for long term
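A minimal sketch of the precision calculations described above, using Python’s standard statistics module. The replicate values and the 9 mg/dL allowable-error figure are illustrative, not from a real precision study.

```python
# Hypothetical short-term precision run for one control level (mg/dL).
import statistics

replicates = [88, 91, 90, 89, 92, 90, 91, 89, 90, 90]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)          # sample standard deviation
cv = sd / mean * 100                       # coefficient of variation, %

total_allowable_error = 9.0                # e.g., 10% of a 90 mg/dL mean
allowable_re_short = 0.25 * total_allowable_error   # short-term random error budget
allowable_re_long = 0.33 * total_allowable_error    # long-term random error budget

print(f"mean={mean:.1f}  SD={sd:.2f}  CV={cv:.1f}%")
print("short-term precision acceptable:", sd <= allowable_re_short)
print("long-term precision acceptable:", sd <= allowable_re_long)
```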

28 Allowable Total Error Database
Link for: Clinical Laboratory Improvement Amendments (CLIA) College of American Pathologists (CAP) Royal College of Pathologists of Australasia (RCPA) Others

29 Precision: Levey-Jennings (LJ) Charts
(Chart: a Levey-Jennings plot of results by run number – Values on the Y-axis, Run on the X-axis. All results are plotted, even if they don’t look good.)

30 Precision: How We Evaluate the Data
How do we compare to manufacturer’s data? Mean SD CV: More commonly used, allows for easier comparison Compare results to the manufacturer’s data first; if comparable, you can indicate that the method is acceptable from a precision standpoint and move forward. If not, then compare to CLIA.

31 Precision Example Mean of Level 1 Glucose 90 mg/dL
CLIA Total Allowable Error: 6 mg/dL or ± 10% Total Allowable Error, Level 1 Glucose: 0.1 x 90 = 9 mg/dL Random error allowed: 0.25 x total allowable 0.33 x total allowable As an example…. Calculated mean is 90 mg/dL Per CLIA – 25% of the total allowable error is allotted to random error for Short Term (2.25 mg/dL); 33% for Long Term (2.97 mg/dL) Compare CLIA calculations for short and long term? Do results make sense? Yes, would expect more random error over the long term. Short-term precision: 0.25 x 9 mg/dL = 2.25 mg/dL Long-term precision: 0.33 x 9 mg/dL = 2.97 mg/dL
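A short sketch reproducing the slide’s arithmetic: for glucose, CLIA allows 6 mg/dL or 10% (whichever is greater), so a 90 mg/dL mean gives a 9 mg/dL total allowable error, from which the short- and long-term random-error budgets follow.

```python
# Reproduces the worked example above.
mean_level_1 = 90.0                          # mg/dL
tea = max(6.0, 0.10 * mean_level_1)          # total allowable error = 9 mg/dL
print(0.25 * tea)                            # short-term random error budget: 2.25 mg/dL
print(0.33 * tea)                            # long-term random error budget: 2.97 mg/dL
```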

32 Activity Work with Levey-Jennings graph and data
Work with mean and standard deviation to calculate a coefficient of variation, as well as a mean and a coefficient of variation to calculate a standard deviation Determine if precision data is acceptable <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Precision Data Sets Worksheet, Graph paper

33 Accuracy
Definition: How close to the true value; assessed by comparison of methods; gives information related to systematic error. Potential conflicts on interpretation of results (reference values). What is needed: 40 different specimens covering the reportable range of the method – quality versus quantity. How does the new method compare to the reference method or standard? Precision gives you information related to random error; accuracy gives you information related to systematic error. How we perform the testing: Duplicate measurements of each specimen on each method, over a minimum of five days (preferably over 20 days, since the replication experiment covers the same period), in order to account for analyzer variations over the specified time period.

34 Accuracy: How We Evaluate the Data
Graph the Data: Difference plot Real time Comparison plot Calculate y = mx + b Test method on Y-axis b represents constant error Reference (comparative) method on X-axis m represents proportional error Create graph electronically or by hand on an x/y axis. Plot data, draw best fit line, calculate linear regression equation (y= mx +b). b= y intercept, m= slope, x=comparative/reference method. Best case scenario: Slope close to 1; Intercept close to 0 – would indicate that a result of 50 on current method is also 50 on comparative method. Difference plot, if one-to-one relationship expected. Shows analytical range of data, linearity of response over range and relationship between methods
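A minimal sketch of the comparison-of-methods regression described above. The paired results are hypothetical; the point is simply that the fitted slope (m) flags proportional error and the intercept (b) flags constant error.

```python
# Hypothetical paired results: comparative/reference method (x) vs. new test method (y).
import numpy as np

reference = np.array([50, 90, 150, 220, 300, 400], dtype=float)
test      = np.array([52, 93, 155, 228, 310, 415], dtype=float)

slope, intercept = np.polyfit(reference, test, 1)   # least-squares fit of y = m*x + b
mean_bias = np.mean(test - reference)                # average difference, for a difference plot

print(f"slope (proportional error indicator) = {slope:.3f}")
print(f"intercept (constant error indicator) = {intercept:.2f}")
print(f"mean bias = {mean_bias:.2f}")
```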

35 Visual Inspection for Accuracy
(Figure: comparison plot with the Reference Method on the X-axis and the Test Method on the Y-axis; two points (x1, y1) and (x2, y2) on the best-fit line define Slope = (y2 - y1) / (x2 - x1), and the intercept is where the line crosses the Y-axis.) To evaluate accuracy, first run all of your samples on both the old instrument and the new instrument. Next, plot the results you obtain. In this example the old instrument is along the X axis and the new instrument is along the Y axis. Plot all of the results from your run. Draw the “best fit” line.

36 Accuracy: How We Evaluate the Data
Slope: Usually not significantly different from 1 Intercept: Not significantly different from 0 Significant difference with Medical Decision Points

37 Calculate Appropriate Statistics
Slope Measure of proportional bias m = (y1-y2)/(x1-x2) or “rise/run” Slope greater than 1 means the Y (Test) values are generally higher than the X (Comparative) values Slope of 1.11 means the Y (Test) values are on average 11% higher than the X (Comparative) values Linear regression programs or a calculator will perform this calculation for you. Slope itself is a measure of proportional bias. If m>1, the higher the value of x, the larger the positive bias in y. If m>1, can assume y>x, i.e., test values for the new method are generally higher than the comparative method. If m=1.11, test values are on average 11% greater than the comparative method. If m=0.95, test values are lower on average by 5%.

38 Calculate Appropriate Statistics (cont'd)
Intercept of the Line Measure of constant bias between two methods Y (Test) value at the point where the line crosses the Y axis If the Y intercept is 12, then the Y (Test) values run about 12 units higher than the X (Comparative) values Assuming slope = 1, if b = 12, the test values are on average 12 units higher. Assuming slope = 1, if b = -10, the test values are on average 10 units lower than the comparative method.

39 What type of bias do you see?
Accuracy What type of bias do you see? Answer on the next slide.

40 Accuracy (cont’d) Constant Bias Proportional Bias
Is this proportional or constant bias? Constant Bias: y-intercept has changed. Proportional Bias: Slope has changed.

41 Skill Check Can a linear regression formula offer predictive value in relation to method comparisons? Yes No Remember to pick the best answer. The correct answer is: A. “Yes”

42 Activity Create graph based on sample set
Determine slope from best-fit line Determine Y-intercept from best-fit line Explain the relationship between comparative and test results <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Comparison Data Set, Graph Paper

43 Reportable Range / Linearity
Definition: Lowest and highest test results that are reliable; especially important with two-point calibrations. Analytical Measurement Range (AMR) and derived Clinical Reportable Range (CRR). What is needed: Series of samples of known concentrations (e.g., standard solutions, EQA linearity sets), or a series of known dilutions of a highly elevated specimen or spiked specimens; EQA specimens. At least four levels (five preferred). Linearity may be a misnomer; hence the terminology of Analytical Measurement Range and derived Clinical Reportable Range. How we perform the testing: CLSI recommends four measurements of each specimen; three are sufficient.

44 Reportable Range: How We Evaluate the Data
Plot mean values of: Measured values on Y-axis versus Known or assigned values on X-axis Visually inspect, draw best-fit line, estimate reportable range Compare with expected values (typically provided by manufacturer)

45 Reportable Range Activity
Assigned Value | Average | Experimental Results (Rep #1 – Rep #4)
10.0 | ____ | 11.0
100.0 | ____ | 99.0, 103.0, 101.0
300.0 | ____ | 303.0, 305.0, 304.0, 306.0
500.0 | ____ | 505.0, 506.0
800.0 | ____ | 740.0, 741.0, 744.0, 742.0

46 Reportable Range Activity (cont'd)
Assigned Value | Average | Experimental Results (Rep #1 – Rep #4)
10.0 | 10.5 | 11.0
100.0 | 101.5 | 99.0, 103.0, 101.0
300.0 | 304.5 | 303.0, 305.0, 304.0, 306.0
500.0 | 505.5 | 505.0, 506.0
800.0 | 741.8 | 740.0, 741.0, 744.0, 742.0
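A minimal sketch of the evaluation described on slide 44, applied to the averages in the table above: regress measured means against assigned values and check recovery at each level (the regression tool here is NumPy, an assumption, not something the slides specify).

```python
# Assigned values and lab averages from the reportable range activity above.
import numpy as np

assigned = np.array([10.0, 100.0, 300.0, 500.0, 800.0])
measured_means = np.array([10.5, 101.5, 304.5, 505.5, 741.8])

slope, intercept = np.polyfit(assigned, measured_means, 1)
recovery = measured_means / assigned * 100          # % recovery at each level

print(f"slope={slope:.3f}  intercept={intercept:.1f}")
print("recovery (%):", np.round(recovery, 1))        # the 800 level under-recovers (~93%)
```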

47 Reportable Range Activity (cont'd)
Can everyone see the slight drop in linearity between 700 and 1,000?

48 AMR vs. CRR Analytical Measurement Range (AMR) Linearity
Clinically Reportable Range (CRR) Discussion of definitions: AMR: Straight linearity, minimum three (3) levels of known concentrations. CRR: Dilutions; allows for dilution or other preparatory steps beyond routine. NOTE: Validation materials must span the range of the Analytical Measurement Range (AMR); the matrix of the materials should not interfere with the method or bias the results.

49 Skill Check If you do not have enough specimen to perform a dilution, upon which reportable range component must you rely? AMR CRR Neither A nor B Both A and B Remember to pick the best answer. The correct answer is: A. “AMR”

50 Linearity Materials Utilizing the marketing materials from the two chemistry linearity kits in your handouts: Determine which kit would be more appropriate for use with the chemistry assay you chose earlier Explain your reasoning <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): NOVA-ONE Chemistry Reference Kits, Fisher Healthcare (Microgenics CASCO DOCUMENT CALVER) The first one is better…has an upper concentration of 750, more closely approximating the manufacturer’s linearity.

51 Graph Activity Given your choice of linearity kits, you perform your AMR experiments by performing four replicates of each level of known concentration solution. The data you obtain is displayed on the next slide. Review data; record any initial observations Graph data on supplied graph paper Determine your assay’s AMR <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Graph Paper

52 Linearity Experiment Results
Level | Rep 1 | Rep 2 | Rep 3 | Rep 4
1 | 24 | 23 | 25
2 | 196 | 197 | 171 | 194
3 | 359 | 360 | 358 | 361
4 | 530 | 532 | 529 | 535
5 | 700 | 695 | 702 | 709
5 levels of known-concentration linearity materials, each run in four replicates. What is the next step? Evaluate the data – are there any outliers? Yes (171). Statistically speaking you can only exclude one outlier. Exclude 171 and calculate the average.

53 Activity Using an Excel spreadsheet, create a graph and calculate linear regression statistics from the data provided To what concentration have we proven linearity? (705 = AMR)

54 Linearity Results: Lab's Average and Known Concentration
Experimental Results (Rep 1 – Rep 4) | Lab's Average | Known Conc
24, 23, 25 | |
196, 197, 171, 194 | 195.7 | 200
359, 360, 358, 361 | | 375
530, 532, 529, 535 | | 550
700, 695, 702, 709 | | 725
Exclude the outlier (171) and calculate the average.
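A minimal sketch of the averaging step just described: drop the single flagged outlier (171) and average the remaining replicates at that level. The values come from the table above.

```python
# Level 2 replicates from the linearity experiment; 171 was flagged as the outlier.
level_2_reps = [196, 197, 171, 194]
outlier = 171

kept = [r for r in level_2_reps if r != outlier]
average = sum(kept) / len(kept)
print(round(average, 1))   # 195.7, matching the lab's average in the table
```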

55

56 Dilution Protocols Your medical director, in consultation with clinicians, determines that for proper study participant care the Clinically Reportable Range (CRR) for glucose is 15 – 1400 mg/dL Given your linearity experiment results and the package insert, devise a dilution protocol to be contained within our Glucose SOP <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Glucose Package Insert If analyzer creates results greater than 1400, report as >1400 If less than 15, report as <15 Dilution Protocol: What will you use as a diluent? Consult the manufacturers package insert for guidance (i.e., saline) Based on results, what is the maximum dilution we will ever perform? We have proven AMR up to 705, and the highest reportable value is 1400, so at most dilute to 2X [2 x 700 = 1,400]

57 Reportable Results Given your AMR, CRR, and dilution protocol, how would you handle the following analyzer results? 12 mg/dL 800 mg/dL 1600 mg/dL Using the previously calculated values for AMR and CRR, and the dilution protocol, discuss how you would handle the following analyzer results

58 Reference Intervals
Definition: Normal range in a healthy population; used for diagnosis/clinical interpretation of results. Pre-defined “normal” criteria for screening purposes. What is needed – Transferring: 20 “normal” individuals’ specimens; Establishing: 120 “normal” individuals’ specimens. For each reagent or kit, need to establish reference intervals to assist clinicians in interpretation of results, based on a normal “healthy” population. Test and compare to the manufacturer: If all 20 specimens fall within the specified range, the reference ranges have been verified and you can adopt the manufacturer’s suggestions and incorporate them into your SOP. If not, need to establish reference ranges for your population (240 specimens: 120 each of male and female). Another lab may have established reference ranges that can be used for verification purposes; this must be documented. How we perform the testing: Perform testing on all samples and document results.

59 Reference Intervals: How We Evaluate the Data
Transferring: 18 of 20 must fall within the manufacturer’s ranges. Establishing: Calculate the mean and SD of the data for each group; Reference Intervals = mean ± 2 SD (if Gaussian distribution only; otherwise, additional calculations recommended). If 3 of 20 fail…must look at another group of 20 for a total of 40; now 36 must fall within the reference range.
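A minimal sketch of the transference check described above (count how many of 20 "normal" results fall inside the manufacturer's stated interval), followed by the mean ± 2 SD calculation used when establishing an interval. The results and interval limits are hypothetical.

```python
# Hypothetical glucose results from 20 "normal" individuals (mg/dL).
import statistics

def transference_ok(results, low, high, allowed_outside=2):
    """Transference passes when no more than 2 of 20 fall outside the stated interval."""
    outside = [r for r in results if not (low <= r <= high)]
    return len(outside) <= allowed_outside, outside

normals = [82, 88, 91, 79, 85, 95, 99, 74, 83, 90,
           87, 92, 78, 96, 84, 81, 89, 93, 86, 80]
ok, outliers = transference_ok(normals, low=74, high=100)
print(ok, outliers)   # True, [] -> manufacturer's interval may be adopted

# If establishing instead (120 specimens per group) and the data are Gaussian:
mean, sd = statistics.mean(normals), statistics.stdev(normals)
print(round(mean - 2 * sd, 1), round(mean + 2 * sd, 1))   # interval = mean +/- 2 SD
```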

60 Activity Determine if assay is eligible for transference of reference intervals Review a sample set of data to determine if transference may be performed; if not, determine next step(s) <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Normal Range Data, Glucose Package Insert Adult Glucose Result: Glucose manufacturer’s reference range (refer to package insert): mg/dL Is the assay eligible for transference of reference intervals? NO (3 fall outside the manufacturer’s reference range: 107, 106 and 110) *Need to repeat with another 20, of which 36 of the 40 must fall within the manufacturer’s expected range

61 Sensitivity
Definition: Lowest reliable value; lower limit of detection, especially of interest in drug testing and tumor markers. Different terminologies are used by different manufacturers. What is needed: Blank solutions and spiked samples. Not required for an FDA-approved/unmodified assay; if FDA-approved/unmodified, the manufacturer’s information can be used directly and incorporated into your SOP. Need to adopt values in your SOP as evidence for sponsor/audit. How we perform the testing: 20 replicate measurements over the short or long term, depending on focus.

62 Sensitivity: How We Evaluate the Data
Three methods used: Lower Limit of Detection (LLD): Mean of the blank sample, plus two or three SD of blank sample Biological Limit of Detection: LLD plus two or three times SD of spiked sample with concentration of detection limit Functional Sensitivity: Mean concentration for spiked sample whose CV = 20%; lowest limit where quantitative data is reliable
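A minimal sketch of the three sensitivity estimates listed above, computed from hypothetical blank and low-spike replicate data; whether 2 SD or 3 SD is used is a lab policy choice.

```python
# Hypothetical replicate measurements of a blank and a low-level spiked sample.
import statistics

blank_reps = [0.1, 0.3, 0.2, 0.0, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2]
spike_reps = [2.4, 2.7, 2.5, 2.3, 2.6, 2.4, 2.8, 2.5, 2.6, 2.4]   # spiked near the detection limit

blank_mean, blank_sd = statistics.mean(blank_reps), statistics.stdev(blank_reps)
spike_mean, spike_sd = statistics.mean(spike_reps), statistics.stdev(spike_reps)

lld = blank_mean + 2 * blank_sd          # Lower Limit of Detection (2 or 3 SD per policy)
bld = lld + 2 * spike_sd                 # Biological Limit of Detection
spike_cv = spike_sd / spike_mean * 100   # functional sensitivity: lowest level where CV <= 20%

print(round(lld, 2), round(bld, 2), round(spike_cv, 1))
```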

63 Activity Using the manufacturer’s package inserts, find the related information for sensitivity. How was it calculated? <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Glucose Package Insert Referring to the package insert, determine what method was used to calculate sensitivity: Limit of Detection (LOD) = 2.5 mg/dL Limit of Quantitation (LOQ) = 5.0 mg/dL FDA-approved/unmodified assay – can copy information directly into the SOP and adopt values directly; do not need to validate

64 Specificity
Definition: Determination of how well a method measures the analyte of interest when accompanied by potential interfering materials. What is needed: Standard solutions, participant specimens or pools; interferer solutions (standard solutions, if possible; otherwise, pools or specimens) added at high concentrations. Specificity is the ability of your method to accurately measure an analyte in the presence of potential interfering substances. Not required for an FDA-approved/unmodified assay; if FDA-approved/unmodified, adopt the manufacturer’s values in your SOP as evidence for sponsor/audit. How we perform the testing: Duplicate measurements.

65 Specificity: How We Evaluate the Data
Tabulate results for pairs of samples (dilution and interferent) Calculate means for each (dilution and interferent) Calculate the differences Calculate the average interference of all specimens tested at a given concentration of interference What else can cause interferences? Drugs/medication/hemolysis/lipemia. Run assay with specimen diluted with blank (saline) and compare to same specimen diluted with potential interfering substance.
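A minimal sketch of the paired-difference evaluation described above: each specimen is split and diluted either with saline (baseline) or with the interferent, and the average difference estimates the interference. All values are hypothetical.

```python
# Hypothetical paired results for four specimens (mg/dL).
import statistics

baseline    = [98, 152, 201, 255]   # specimen + saline
with_interf = [104, 160, 206, 263]  # specimen + interferent (e.g., hemolysate)

differences = [w - b for b, w in zip(baseline, with_interf)]
avg_interference = statistics.mean(differences)
pct_interference = [d / b * 100 for d, b in zip(differences, baseline)]

print(differences)                        # [6, 8, 5, 8]
print(round(avg_interference, 1))         # 6.8 mg/dL average interference
print([round(p, 1) for p in pct_interference])
```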

66 Qualitative Assays Compare diagnosis
Assume the comparative (reference) method is accurate Determine the following: True Positives, True Negatives, False Positives, False Negatives Calculate sensitivity and specificity and compare to the manufacturer Up until now, this module has spoken by and large to validation steps in relation to quantitative assays, or assays that generate a numeric result. What about qualitative assays, the tests that generate positive or negative, reactive or non-reactive type results? The steps recommended to validate these assays differ in that laboratories compare the “diagnoses” produced by the new testing method against those of a current or reference method. The rule of thumb here is to assume the reference method (current method) is accurate, and if there is a discrepancy in the results, the new method’s result is the one in question. Given this convention, the laboratory can tally discrepant results into classifications of false positives or false negatives, as compared with the reference method. The results that agree across methods are tallied as true positives or true negatives. Given these data, the laboratory can calculate a comparative sensitivity and specificity, which allows evaluation of the new method against published information from the new method’s manufacturer.

67 Qualitative Assays: Control of Validation
Negative and Positive Quality Controls Use QC materials recommended by manufacturer for verification purposes Determine validity of other results, e.g., method comparisons Evaluate failed runs if they occur during verification process

68 Qualitative Methods: Precision
How is it performed? Runs of specimens with analyte concentrations near the cutoff point Three specimens, one at cutoff, one just below cutoff, and one just above cutoff (± 20% recommended) Replicate measurements of each of three specimens (20 each, minimum) How is it evaluated? Determine percentage of positives and negatives for each specimen Evaluate cutoff, as well as other two specimens
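A minimal sketch of the near-cutoff precision check just described: count the fraction of positive calls among 20 replicates at each of the three levels. The call patterns are hypothetical (True = positive result); a specimen at the cutoff is expected to split roughly 50/50, while the ±20% specimens should be consistently negative or positive.

```python
# Hypothetical replicate calls at the three levels around the cutoff.
reps = {
    "cutoff - 20%": [False] * 18 + [True] * 2,
    "at cutoff":    [True] * 11 + [False] * 9,
    "cutoff + 20%": [True] * 20,
}

for level, calls in reps.items():
    pct_pos = 100 * sum(calls) / len(calls)
    print(f"{level}: {pct_pos:.0f}% positive over {len(calls)} replicates")
```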

69 Accuracy/Method Comparisons
How is it performed? Specimens typical of population (to be tested in future use of method) 50 positive specimens and 50 negative specimens recommended; minimum 20 each Performed over 10 to 20 days How is it evaluated? Discrepant results near cutoff? Most often sensitivity and specificity used to describe performance

70 Qualitative Methods
2x2 table – columns: Comparative or Reference Method Result (Positive, Negative); rows: Test Method Result (Positive, Negative).
Test Positive / Reference Positive = True Positive; Test Positive / Reference Negative = False Positive (this row gives the Positive Predictive Value).
Test Negative / Reference Positive = False Negative; Test Negative / Reference Negative = True Negative (this row gives the Negative Predictive Value).
The Reference Positive column gives Sensitivity; the Reference Negative column gives Specificity.
True vs. False: False Positive Rate – False Positives divided by the total number of Negatives. False Negative Rate – False Negatives divided by the total number of Positives.

71 Qualitative Methods (cont'd)
(Same 2x2 table as the previous slide.) Sensitivity = 100 x True Positives divided by (True Positives + False Negatives). Specificity = 100 x True Negatives divided by (True Negatives + False Positives).

72 Qualitative Methods (cont'd)
(Same 2x2 table as the previous slides.) Predictive Values – Operation of a test on a mixed population of Positives and Negatives; a property of the test and the population, and affected by the prevalence of Positives. Positive Predictive Value = True Positives divided by (True Positives + False Positives). Negative Predictive Value = True Negatives divided by (True Negatives + False Negatives).
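A minimal sketch of the 2x2-table statistics defined on these slides, using the counts from the activity that follows (TP=8, FP=2, FN=1, TN=9).

```python
# Standard 2x2 contingency-table statistics for a qualitative method comparison.
def two_by_two_stats(tp, fp, fn, tn):
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv":         100 * tp / (tp + fp),
        "npv":         100 * tn / (tn + fn),
        "false_pos_rate": 100 * fp / (fp + tn),
        "false_neg_rate": 100 * fn / (fn + tp),
    }

for name, value in two_by_two_stats(tp=8, fp=2, fn=1, tn=9).items():
    print(f"{name}: {value:.0f}%")   # sensitivity ~89%, specificity ~82%
```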

73 Evaluation Criteria High Diagnostic Value 100% Sensitivity
100% Specificity What happens if True Positive rate is equal to the False Positive rate? Sensitivity = TP/(TP+FN) = must have zero false negatives (FN) to achieve 100% Specificity = TN/(TN+FP) = must have zero false positives (FP) to achieve 100% What happens if we have 50% sensitivity and 50% specificity? Would have equal values for TN and FP; and equal values for TP and FN – does that assay have real diagnostic value? NO

74 Activity Estimate sensitivity and specificity of a qualitative method given a data set. <Refer to supplemental materials provided for this section> Supplemental Materials Needed (Advanced Method Validation): Qualitative Method Comparison Data Use the data to tally the TP, FP, FN, TN; then calculate the sensitivity and specificity: [TP=8, FP=2, FN=1, TN=9] Sensitivity = TP/(TP+FN) = 89% Specificity = TN/(TN+FP) = 82% Is this acceptable performance for your lab? Methods vary in terms of specificity and sensitivity.

75 Activity (cont’d) Create a validation plan for a quantitative assay to be performed in your laboratory.

76 In Closing Now that you have completed this module, you should be able to: Identify test classifications Define what each validation experiment evaluates for testing methods Discuss what is recommended to perform each of the validation experiments for testing methods Recognize how to evaluate data obtained from each of the validation experiments

77 Post-Assessment Question #1
A rapid HIV test would likely be classified as a: High complexity, modified assay Moderate complexity, unmodified assay FDA-approved, modified assay Waived, FDA-approved, unmodified assay Remember to pick the best answer. The correct answer is: D. “Waived, FDA-approved, unmodified assay”

78 Post-Assessment Question #2
The precision of a test method gives information related to the method’s: Systematic error Comparison of results to a reference method Reproducibility Likelihood of being affected by hemolysis, lipemia and icterus Both A and B Remember to pick the best answer. The correct answer is: C. “Reproducibility”

79 Post-Assessment Question #3
When transferring reference intervals using 20 specimens, what is the minimum number that must fall within the manufacturer’s reference intervals? 20 18 16 15 Remember to pick the best answer. The correct answer is: B. “18”

80 Post-Assessment Question #4
Which linear regression equation component gives information regarding constant bias? y x m (slope) b (intercept) Remember to pick the best answer. The correct answer is: D. “b (intercept)”

81 References DAIDS Good Clinical Laboratory Practice (GCLP) Guidelines.
Validation of Qualitative Methods. 42 CFR § College of American Pathologists Commission on Laboratory Accreditation, Accreditation Checklists, April 2006. Westgard, James O. Basic Method Validation, 2nd Edition. Madison, WI: Westgard QC, Inc., 2003. Clinical and Laboratory Standards Institute. User Protocol for Evaluation of Qualitative Test Performance; Approved Guideline. NCCLS document EP12-A. Clinical and Laboratory Standards Institute, Wayne, PA, USA, 2002. Clinical and Laboratory Standards Institute. Evaluation of Precision Performance of Quantitative Measurement Methods; Approved Guideline. NCCLS document EP5-A2. Clinical and Laboratory Standards Institute, Wayne, PA, USA, 2004. Clinical and Laboratory Standards Institute. User Verification of Performance for Precision and Trueness; Approved Guideline. CLSI document EP15-A2. Clinical and Laboratory Standards Institute, Wayne, PA, USA, 2005.

82 Wrap Up

