A Statistical Approach to Method Validation and Out of Specification Data.

1 A Statistical Approach to Method Validation and Out of Specification Data

2 Outline of talk Basic statistics –Averaging, confidence intervals Fitness-for-purpose and analytical capability. Quantifying variability and producing a capable method. Out-of-specification results. Conclusions.

3 Repeat measurements
1005.081    994.765     996.8626
1000.665    1017.53     981.7084
998.3029    1003.802    998.3409
1002.779    1007.732    1008.048
1008.842    995.1794    1004.904
1002.433    1013.802    1008.136
998.0636    1004.67     1006.48
992.7641    988.0834    1002.151
1011.441    1005.991    993.7479
996.3199    997.8086    1005.854
997.1728    999.4718    1004.641
1002.325    996.136     1000.387

4 Distribution of measurements The 95% confidence interval is the range of values around the mean in which 95% of the measurements are expected to lie.
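
As an illustration, a minimal Python/NumPy sketch (my own, not from the slides) that computes the mean, standard deviation and an approximate 95% range from the repeat measurements on slide 3:

```python
import numpy as np

# Repeat measurements from slide 3
x = np.array([
    1005.081,  994.765,   996.8626, 1000.665,  1017.53,   981.7084,
     998.3029, 1003.802,  998.3409, 1002.779,  1007.732, 1008.048,
    1008.842,  995.1794, 1004.904,  1002.433,  1013.802, 1008.136,
     998.0636, 1004.67,  1006.48,    992.7641,  988.0834, 1002.151,
    1011.441,  1005.991,  993.7479,  996.3199,  997.8086, 1005.854,
     997.1728,  999.4718, 1004.641, 1002.325,   996.136,  1000.387,
])

mean = x.mean()
sd = x.std(ddof=1)        # sample standard deviation
rsd = 100 * sd / mean     # relative standard deviation (comes out near 0.7%)

# Roughly 95% of single measurements are expected within mean +/- 2*sd
print(f"mean = {mean:.1f}, sd = {sd:.1f}, RSD = {rsd:.2f}%")
print(f"~95% range: {mean - 2*sd:.1f} to {mean + 2*sd:.1f}")
```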

5 Relative standard deviation, RSD For a strength of ~100%, an RSD of 0.7% equates to a standard deviation of ~0.7%. The range of values encompassing roughly 99% of all possible measures is therefore approximately +/- 3 standard deviations, i.e. +/- 2.1%. A 0.7% RSD at 100% strength thus gives a ~99% confidence interval of 97.9% to 102.1%.
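
A tiny sketch of the arithmetic behind the +/- 2.1% figure (three standard deviations at 0.7% RSD):

```python
strength = 100.0   # nominal strength, %
rsd = 0.7          # relative standard deviation, %

sd = strength * rsd / 100     # standard deviation in strength units
half_width = 3 * sd           # ~99% of measures fall within about 3 sd
print(f"{strength - half_width:.1f}% to {strength + half_width:.1f}%")   # 97.9% to 102.1%
```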

6 Effect of averaging The standard deviation is a measure of variability. The effect of variability can be reduced by taking the average of a number of repeat measures. The standard deviation associated with the mean of n measures is s/sqrt(n), where s is the standard deviation of a single measurement.
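
A short simulation sketch (Python/NumPy, my own illustration) showing that the standard deviation of the mean falls as s/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)
s = 7.0   # standard deviation of a single measurement (about 0.7% RSD at 1000)

for n in (1, 2, 3, 4):
    # simulate 100,000 means, each an average of n repeat measurements
    means = rng.normal(1000.0, s, size=(100_000, n)).mean(axis=1)
    print(f"n={n}: sd of mean = {means.std(ddof=1):.2f}, theory s/sqrt(n) = {s/np.sqrt(n):.2f}")
```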

7 Distribution of the mean (Figure: distributions of the mean for n = 1, 2, 3 and 4 measurements.) The confidence in the mean improves as the number of measurements increases.

8 How many measurements should I average? Depends upon: –The amount of variability present in the measurements. –The degree of confidence I wish to achieve. WHAT IS FITNESS FOR PURPOSE?

9 Capability of an analytical method (Figure: an incapable method compared with a capable method.)

10 How to measure capability? Use measures from statistical process control, e.g. for a specification between 97 mg/l and 103 mg/l and a confidence-interval width of 12 mg/l: c_p = (103 - 97) / 12 = 0.5.
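
A minimal sketch of this calculation, assuming the capability index is taken as the specification width divided by the confidence-interval width:

```python
def analytical_capability(lower_spec, upper_spec, ci_width):
    """Capability index: specification width divided by confidence-interval width."""
    return (upper_spec - lower_spec) / ci_width

# Example from this slide: specification 97-103 mg/l, confidence-interval width 12 mg/l
print(analytical_capability(97, 103, 12))   # -> 0.5, i.e. an incapable method
```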

11 Interpreting c_p Batch failure rate purely due to variability in the analytical method.

12 One-sided specifications For a one-sided specification the capability is calculated from the distance between the specification limit and μ, the expected average value of the parameter.

13 Method development/validation The aim is to determine the number of repeat measurements needed to ensure that the analytical capability is acceptable, for example c_p > 1. Acceptance criteria are then product dependent, rather than technique specific. How do I determine the amount of variability? How do I determine the number of repeat measurements required?

14 Quantifying variability (e.g. HPLC) Need to assess two sources of variability (repeatability): –Between "weighings". –Instrumental. The between-weighings component quantifies variability due to sample inhomogeneity and the sample preparation process; the instrumental component quantifies the variability associated with the instrumental measurement. (Figure: nested experimental design with several weighings per sample and repeat measures per weighing.) Quantify a source of variability by determining its standard deviation.

15 Example
Weighing          1         2         3         4         5         6
Measurement 1     975.20    928.77    992.30    1047.96   1036.10   1109.29
Measurement 2     971.41    934.27    1035.73   1069.98   1064.50   1074.81
Can use Analysis of Variance (ANOVA) to determine:
Standard deviation for "weighing", s_w = 57.9
Standard deviation for instrument, s = 19.2
These values refer to the measured response (e.g. weight-corrected area).
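
A sketch of the variance-component calculation (one-way random-effects ANOVA in Python/NumPy); it reproduces the two standard deviations quoted above from the duplicate data:

```python
import numpy as np

# Duplicate measurements for six weighings (table above); rows = measurement 1 and 2
data = np.array([
    [975.20,  928.77,  992.30, 1047.96, 1036.10, 1109.29],
    [971.41,  934.27, 1035.73, 1069.98, 1064.50, 1074.81],
])
n_rep = data.shape[0]          # measurements per weighing (2)
n_weighings = data.shape[1]    # number of weighings (6)

weighing_means = data.mean(axis=0)
ms_within = ((data - weighing_means) ** 2).sum() / (data.size - n_weighings)
ms_between = n_rep * weighing_means.var(ddof=1)

s_instrument = np.sqrt(ms_within)                        # ~19.2
s_weighing = np.sqrt((ms_between - ms_within) / n_rep)   # ~57.9
print(f"s (instrument) = {s_instrument:.1f}, s_w (weighing) = {s_weighing:.1f}")
```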

16 Confidence interval for analysis The confidence interval for a future analysis with n_1 weighings and n measurements per weighing is given by:
Δ = t(α, N) × sqrt(s_w²/n_1 + s²/(n_1·n))
α: degree of confidence (usually 0.05 for 95% confidence).
N: number of degrees of freedom used to determine s_w and s.
t: Student's t-value for α and N.
Δ: confidence interval for the measurement (area).
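
A hedged sketch of this interval in code (Python/SciPy); the value used for N below is an assumption, purely for illustration:

```python
import numpy as np
from scipy import stats

def ci_half_width(s_w, s, n1, n, N, alpha=0.05):
    """Confidence-interval half-width for the mean of n1 weighings,
    each with n instrumental measurements."""
    t = stats.t.ppf(1 - alpha / 2, N)
    return t * np.sqrt(s_w**2 / n1 + s**2 / (n1 * n))

# Using s_w = 57.9 and s = 19.2 from slide 15; N = 5 is assumed here
for n1 in (1, 2, 3):
    print(f"n1={n1}: +/- {ci_half_width(57.9, 19.2, n1, 2, N=5):.1f}")
```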

17 Analytical Capability The analytical capability, c_p, changes with the number of weighings (n_1) and the number of measures per weighing (n):
                          Number of weighings
Measures per weighing     1        2        3        4        5
1                         0.574    0.812    0.994    1.148    1.283
2                         0.589    0.833    1.020    1.177    1.316
3                         0.594    0.840    1.029    1.188    1.328
4                         0.596    0.844    1.033    1.193    1.334
5                         0.598    0.846    1.036    1.196    1.337

18 External Standards
C_std: strength of the external standard.
A_std: average measure for the external standard.
A_sample: average measure for the sample.
C_sample = C_std × A_sample / A_std: estimated strength for the sample.
If Δ is the relative confidence interval for A_std and A_sample, then the confidence interval for C_sample is √2 × Δ, i.e. if each has an RSD of 0.7%, the RSD for the estimated strength is ~1.0%.
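
A small sketch of the external-standard calculation and the √2 propagation (function and variable names are my own):

```python
import math

def estimated_strength(strength_std, mean_area_std, mean_area_sample):
    """External-standard calculation: sample strength from the ratio of average responses."""
    return strength_std * mean_area_sample / mean_area_std

def rsd_of_estimate(rsd_area):
    """If standard and sample responses each carry the same RSD (and are independent),
    the RSD of their ratio is larger by a factor of sqrt(2)."""
    return rsd_area * math.sqrt(2)

print(rsd_of_estimate(0.7))   # ~1.0 (0.99)
```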

19 Practical consequences: finding a result Out-of-Specification (Figure: distribution of measures around the specification limit, showing the consumer's risk and producer's risk regions.)

20 Dealing with OOS results Samples can be re-tested. On re-testing, FDA guidance for industry states that "if no …errors are identified in the first test, there is no scientific basis for invalidating OOS results in favour of passing re-test results." Scientifically, the issue of whether the re-test results "pass" or "fail" is of little consequence; the issue is whether the re-test results are statistically the same as the original OOS result. The t-test can be used to assess the similarity between the OOS and re-test results.
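
A sketch of such a comparison with a two-sample t-test (Python/SciPy); the replicate values below are hypothetical, included only to show the mechanics:

```python
from scipy import stats

# Hypothetical replicate results (% strength), not data from the presentation
oos_results    = [96.4, 96.8, 96.3]
retest_results = [97.5, 97.9, 97.7]

t_stat, p_value = stats.ttest_ind(oos_results, retest_results)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value indicates the re-test differs from the original OOS result;
# otherwise there is no statistical basis for setting the OOS result aside.
```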

21 Example 1 Specification: >97.0%. OOS result: 96.5% with confidence interval +/- 2.1%. Re-test: 97.7% with confidence interval +/- 2.1%. The t-test gives no evidence that the OOS and re-test results are different. Averaging the OOS and re-test results gives 97.1% with a confidence interval of +/- 1.5%.

22 Example 2 Specification: >97.0%. OOS result: 96.0% with confidence interval +/- 0.9%. Re-test: 98.0% with confidence interval +/- 0.9%. The t-test shows that the OOS and re-test results are statistically different, so the two results cannot be averaged. Consequently both results must be doubted.
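
A minimal sketch of the confidence-interval reasoning behind the two examples, assuming the two results are independent with the quoted half-widths:

```python
import math

def compare_and_average(x1, ci1, x2, ci2):
    """Compare two results quoted as value +/- confidence half-width.
    If they are consistent, averaging is justified and narrows the interval."""
    diff_ci = math.sqrt(ci1**2 + ci2**2)   # half-width for the difference
    consistent = abs(x1 - x2) <= diff_ci
    mean = (x1 + x2) / 2
    mean_ci = diff_ci / 2                  # half-width for the average
    return consistent, round(mean, 1), round(mean_ci, 1)

print(compare_and_average(96.5, 2.1, 97.7, 2.1))   # Example 1: (True, 97.1, 1.5)
print(compare_and_average(96.0, 0.9, 98.0, 0.9))   # Example 2: (False, 97.0, 0.6)
```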

23 Conclusions Understanding and determining the confidence interval associated with an analytical result is an important part of method development/validation. The relationship between the confidence interval and the product specification is an important aspect of defining method fitness-for-purpose. The analytical capability is a quantifiable measure of fitness-for-purpose for precision. Understanding the confidence interval is important during out-of-specification investigations.

