
1 Challenges in Process Comparison Studies
Seth Clark, Merck and Co., Inc.
Acknowledgements: Robert Capen, Dave Christopher, Phil Bennett, Robert Hards, Xiaoyu Chen, Edith Senderak, Randy Henrickson

2 Key Issues
- There are different challenges for biologics versus small molecules in process comparison studies
- The biologic problem is often poorly defined
- Strategies are needed for addressing risks associated with process variability early in the product life cycle, with limited experience

3 Biologic Process Comparison Problem
- Biological products such as monoclonal antibodies have complex bioprocesses to derive, purify, and formulate the "drug substance" (DS) and "drug product" (DP)
- The process definition established for Phase I clinical supplies may have to be changed for Phase III supplies, for example:
  - Scale-up change: 500 L fermenter to 5000 L fermenter
  - Change of manufacturing site
  - Removal of an additional impurity for marketing advantage
  - Change of resin manufacturer to a more reliable source
[Diagram: bioprocess flow from Cells and Medium through Fermentation, Separation & Purification (Buffers, Resins), and Filtration to DS, then Formulation to DP]

4 Comparison Exercise
ICH Q5E: "The goal of the comparability exercise is to ensure the quality, safety and efficacy of drug product produced by a changed manufacturing process, through collection and evaluation of the relevant data to determine whether there might be any adverse impact on the drug product due to the manufacturing process changes."
[Flowchart: the comparison decision branches (yes/no) on three questions — is there a meaningful change in CQAs or important analytical QAs, is there scientific justification for an analytical-only comparison, and is there a meaningful change in preclinical animal and/or clinical S/E — leading to a Comparable or Not Comparable conclusion]

5 What about QbD?
QbD relates critical process parameters (CPPs) to CQAs, which drive safety/efficacy (S/E) in the clinic:
S/E = f(CQAs) + e = f(g(CPP)) + e
- Knowledge space
- X space: critical process parameters, material attributes
- Y space: critical quality attributes (CQAs); an acceptable DS quality constraint region links to safety, efficacy, etc.
- Z space: clinical safety/efficacy (S/E); acceptable clinical S/E
Are the models complete?

6 Risks and Appropriate Test
Conclusion vs. truth:
- Conclude Comparable when truly Comparable: correct
- Conclude Comparable when truly Not Comparable: consumer risk (mostly)
- Conclude Not Comparable when truly Comparable: producer risk (mostly)
- Conclude Not Comparable when truly Not Comparable: correct
Hypotheses of an equivalence-type test:
- H0: Not comparable analytically. Action: examine with scientific judgment; determine whether preclinical/clinical studies are needed to determine comparability
- Ha: Comparable analytically. Action: support the scientific argument with evidence for comparable CQAs
Further points:
- Process mean and variance are both important
- Study design and "sample size" need to be addressed
- Meaningful differences are often not clear
- The difficulty of defining meaningful differences and the need to demonstrate "highly similar" imply statistically meaningful differences may also warrant further evaluation
- Non-comparability can result from "improvement"
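The equivalence-type test on slide 6 can be sketched as a two one-sided t-tests (TOST) procedure on the lot means. This is an illustrative sketch, not the speaker's exact method: the function name, the pooled equal-variance assumption, and the margin `delta` are choices made here for the example.

```python
import numpy as np
from scipy import stats

def tost_mean_difference(old, new, delta, alpha=0.05):
    """Two one-sided t-tests (TOST) for equivalence of process means.

    Concludes 'analytically comparable' only if the mean difference
    is shown to lie within (-delta, +delta) at level alpha.
    """
    old, new = np.asarray(old, float), np.asarray(new, float)
    n1, n2 = len(old), len(new)
    diff = new.mean() - old.mean()
    # pooled standard error of the difference (equal-variance assumption)
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * old.var(ddof=1) + (n2 - 1) * new.var(ddof=1)) / df
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    # H0a: diff <= -delta, rejected when (diff + delta)/se is large
    p_lower = 1 - stats.t.cdf((diff + delta) / se, df)
    # H0b: diff >= +delta, rejected when (diff - delta)/se is small
    p_upper = stats.t.cdf((diff - delta) / se, df)
    p = max(p_lower, p_upper)
    return diff, p, p < alpha
```

Note the reversal relative to a difference test: failing to show equivalence (large p) triggers the H0 action of further scientific evaluation, which is what protects the consumer when lots are few.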

7 Specification Setting
- In many cases for biologics an explicit f linking CQA to S/E, f(CQAs) = S/E, is unknown; there is usually only a qualitative link between CQA and S/E
- It is difficult to establish such an f for biologics
- Specs correspond to this link and are refined and supported with clinical experience and with data on process capability and stability
[Diagram: CQA distribution against release limits (LRL, URL) and specification limits (LSL, USL), linked qualitatively to clinical safety/efficacy (S/E)]

8 Process and Spec Life Cycle
1. Preliminary specs and Process 1 identified
2. Upper spec revised based on clinical S/E
3. Process revised to lower the mean
4. Process revised again but not tested in the clinic (analytical comparison only)
- Process 3 in commercial production, with further post-approval changes
[Timeline figure: CQA release results vs. USL/LSL across Process Development, Preclinical, Phase I, Phase III, and Commercial stages, showing Processes 1-4, the design space in effect, and preclinical/animal, Phase I study, Phase III study, and commercial clinical trial data accumulating over time]

9 Sample Size Problem
- "Wide format": unbalanced (N old process > N new process)
- Process variation, N = # lots
  - Usually more of a concern
  - Independence of lots
  - What drives the # of lots available?
    1. Needs of the clinical program
    2. Time, resources, funding available
    3. Rules of thumb: minimum 3 lots/process for release; 3 lots/process or fewer for stability; 1-2 for forced degradation (2 previous vs. 1 new)
- DF for estimating assay variation
  - Usually less of a concern: multiple stability testing results and assay qualification/validation data sets are available

10 More about # of Lots
Same source DS lot!

DP Lot      DS Lot
L00528578   07-001004
L00528579   07-001007
L00518510   07-001013
L00518511   07-001013
L00518542   07-001013

"…batches are not independent. This could be the case if the manufacturer does not shut down, clean out, and restart the manufacturing process from scratch for each of the validation batches." — Peterson (2008)
"Three consecutive successful batches has become the de facto industry practice, although this number is not specified in the FDA guidance documents" — Schneider et al. (2006)

11 Stability Concerns
- Long-term stability: evaluate differences in slope between processes
- Forced degradation: evaluate differences in the derivative curve, ΔCQA/Δweek; a process showing an "improvement" in rate is still not comparable
- A constrained-intercept, multiple-temperature model gives more precise lot release means and good estimates of assay + sample variation:
  Y = (β0 + Lot) + (β1 + Lot×Temp + Temp)·f(Months) + e_Test + e_Residual
- Similar sample size problems apply
- Generally don't test for differences in lot variation, given the limited # of lots
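Slide 11's slope comparison can be illustrated with a deliberately simplified sketch: fit each lot's degradation rate by least squares and screen for a rate difference between processes with a two-sample t-test on the lot slopes. This is not the constrained-intercept multiple-temperature model on the slide; the function name and data layout are assumptions for the example.

```python
import numpy as np
from scipy import stats

def slope_difference(months, old_lots, new_lots):
    """Compare mean degradation slopes (CQA per month) between processes.

    Each lot contributes one least-squares slope; a two-sample t-test
    on the lot slopes is a rough screen for a rate difference.
    """
    s_old = [np.polyfit(months, y, 1)[0] for y in old_lots]
    s_new = [np.polyfit(months, y, 1)[0] for y in new_lots]
    t_stat, p = stats.ttest_ind(s_old, s_new)
    return np.mean(s_new) - np.mean(s_old), p
```

A mixed model sharing the intercept and pooling assay variation across temperatures, as the slide describes, would use the limited lots far more efficiently than this per-lot fit.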

12 Methods and Practicalities
Methods used:
- Comparison to the data range
- Conformance to control limits: tolerance limits, 3-sigma limits, multivariate process control
- Difference test
- Equivalence test
Not practical:
- Process variance comparison
- Large # of lots late in development, prior to commercial
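Of the "conforms to control-limit" methods on slide 12, a normal tolerance interval on the historical lots is a common concrete choice. A minimal sketch using Howe's well-known approximation for the two-sided k-factor (the function name and defaults are choices made here, not the talk's):

```python
import numpy as np
from scipy import stats

def tolerance_limits(x, coverage=0.95, confidence=0.95):
    """Approximate two-sided normal tolerance interval (Howe's method).

    Returns limits expected to contain `coverage` of the process
    distribution with the stated confidence.
    """
    x = np.asarray(x, float)
    n = len(x)
    z = stats.norm.ppf((1 + coverage) / 2)
    # lower chi-square quantile makes k larger (conservative) for small n
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s
```

With only 3-10 historical lots the k-factor is very wide, which is exactly the practicality problem the slide raises: range- and limit-based criteria are easy to apply but weakly discriminating at these sample sizes.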

13 Methods and Practicalities
[Simulation plot: symbols are N historical lots compared to N2 = 3 new lots; LSL = -1, Mean = 0, USL = 1, Delta = 0.25, assay variance = 2 × lot variance, total SD = 0.19]
- Alpha = Pr(test concludes analytically comparable when it is not) = Pr(consumer risk)
- Beta = Pr(test concludes not analytically comparable when it is) = Pr(producer risk)

14 Defining a Risk Based Meaningful Difference
Risk levels of meaningful differences are fine-tuned through C_pk or C_pu for a key quality characteristic.
- LRL = lower release limit, URL = upper release limit
- μ = process mean, σ² = process variance
[Figure: a starting process and processes 1-3 illustrating change not meaningful, change meaningful, and change borderline meaningful, plotted against RSD with a C_pu ≥ C boundary and against μ with a C_pk ≥ C boundary]
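The capability indices that slide 14 uses to grade a change can be computed directly from lot data. A minimal sketch (standard Cpk/Cpu definitions; the limits and data are illustrative):

```python
import numpy as np

def cpk(x, lsl, usl):
    """Two-sided capability: distance of the mean to the nearer
    release limit, in units of 3 process standard deviations."""
    m, s = np.mean(x), np.std(x, ddof=1)
    return min(usl - m, m - lsl) / (3 * s)

def cpu(x, usl):
    """One-sided (upper) capability index."""
    m, s = np.mean(x), np.std(x, ddof=1)
    return (usl - m) / (3 * s)
```

A shifted process whose Cpk (or Cpu) still exceeds the boundary C is judged a change that is not meaningful; dropping below C flags a meaningful change.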

15 Defining a Risk Based Meaningful Difference
Underlying assumption: we are starting with a process that already has acceptable risk.
[Figure: a starting process and processes 1-2 illustrating a meaningful change and a questionable meaningful change, plotted against RSD with a C_pu ≥ C boundary and against μ with a C_pk ≥ C boundary]

16 Two-sided meaningful change
Simplifying assumptions:
- Process 1 is in control with good capability (true Cpk > C) with respect to the meaningful change window (L, U)
- Process 1 is approximately centered in the meaningful change window
- Process distributions are normally distributed with the same process variance, σ²
Equivalence test on the process distribution mean difference.

17 Two-sided meaningful change sample sizes
- A comparison of 3 batches to 3 batches requires a 3-sigma effect size
- A 2-sigma effect size requires a 13-batch historical database compared to 3 new batches
- A 1-sigma effect size requires a 70-batch historical database compared to 10 new batches (not shown)
Effect size = process capability in # sigmas vs. maximum tolerable capability in # sigmas.
[Plot: required sample sizes, historical vs. new batches]
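The sample-size relationships on slide 17 can be explored with a Monte Carlo sketch of the power of a TOST equivalence test under the slide's simplifying assumptions (normal lots, common variance, mean difference and margin in sigma units). This is an illustration built for this transcript, not the speaker's calculation; simulation settings are arbitrary choices.

```python
import numpy as np
from scipy import stats

def tost_power_sim(n_old, n_new, margin_sigma, true_diff_sigma=0.0,
                   alpha=0.05, n_sim=2000, seed=1):
    """Monte Carlo power of a TOST equivalence test on the mean difference.

    The margin and true difference are in units of the common process
    sigma, mirroring the 'effect size in # sigmas' on the slide.
    """
    rng = np.random.default_rng(seed)
    df = n_old + n_new - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    wins = 0
    for _ in range(n_sim):
        old = rng.normal(0.0, 1.0, n_old)
        new = rng.normal(true_diff_sigma, 1.0, n_new)
        diff = new.mean() - old.mean()
        sp2 = ((n_old - 1) * old.var(ddof=1)
               + (n_new - 1) * new.var(ddof=1)) / df
        se = np.sqrt(sp2 * (1 / n_old + 1 / n_new))
        # declare equivalence only if both one-sided tests reject
        if (diff + margin_sigma) / se > tcrit and (diff - margin_sigma) / se < -tcrit:
            wins += 1
    return wins / n_sim
```

Running this shows the slide's pattern: 3 vs. 3 batches gives workable power only with a roughly 3-sigma margin, while a 2-sigma margin needs a historical database on the order of a dozen batches.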

18 One-sided (upper) meaningful change
- Similar simplifying assumptions as in the two-sided evaluation
- The meaningful change window is now (0, U)
- Test on the process distribution mean difference, in linear or ratio form

19 One-sided meaningful change sample sizes
- A comparison of 3 batches to 3 batches requires a 3-sigma effect size
- A 2-sigma effect size requires a 6-batch historical database compared to 3 new batches
- A 1-sigma effect size requires a 20-batch historical database compared to 10 new batches (not shown)
Effect size = process capability in # sigmas vs. maximum tolerable capability in # sigmas.
[Plot: required sample sizes, historical vs. new batches]

20 Study Design Issues
Designs for highly variable assays: what is a better design?
[Diagram: two run layouts across Runs 1…n_a — one with Process 1 + assay and Process 2 + assay lots arranged in separate runs, versus one with lots of both processes (P1L1, P2L1, P1L2, P2L2, …, P1Lk, P2Lk) tested within the same runs]

21 Sample size with control of assay variation
[Simulation plot: lots tested in the same runs; comparisons of N historical lots to N2 = 3 new lots; LSL = -1, Mean = 0, USL = 1, Delta = 0.25, run variance = 2 × lot variance, replicate variance = lot variance, total SD = 0.15]

22 Summary
- There are many challenges in process comparison for biologics, chief among them the number of lots available to evaluate the change
- For a risk-based mean shift comparison, process capability needs to be at least a 4- or 5-sigma process within meaningful change windows, such as within release limits
- Careful design of method testing and use of stability information can improve sample size requirements
- If this is not achievable, the test/criteria need to be less powerful (increased producer risk), such as by "flagging" any observed difference to protect against consumer risk
- Flagged changes need to be assessed scientifically to determine analytical comparability

23 Backup

24 References
- ICH Q5E: Comparability of Biotechnological/Biological Products Subject to Changes in their Manufacturing Process
- Peterson, J. (2008), "A Bayesian Approach to the ICH Q8 Definition of Design Space," Journal of Biopharmaceutical Statistics, 18: 959-975
- Schneider, R., Huhn, G., Cini, P. (2006), "Aligning PAT, validation, and post-validation process improvement," Process Analytical Technology Insider Magazine, April
- Chow, Shein-Chung, and Liu, Jen-pei (2009), Design and Analysis of Bioavailability and Bioequivalence Studies, CRC Press
- Pearn and Chen (1999), "Making Decisions in Assessing Process Capability Index Cpk"

25 Defining a Risk Based Meaningful Difference
Risk levels of meaningful differences are fine-tuned through C_pk or C_pm.
- LRL = lower release limit, URL = upper release limit
- μ = process mean, σ² = process variance
[Figure: a starting process and processes 1-3 illustrating change not meaningful, change meaningful, and change borderline meaningful, plotted against μ with C_pk ≥ C and C_pm ≥ C boundaries]

26 Test Cpk?
How many lots are needed to have 80% power, assuming lots are measured with high precision (measurement error negligible), with alpha = 0.05?
Critical value from Pearn and Chen (1999), "Making Decisions in Assessing Process Capability Index Cpk":
- Sample Cpk above the critical value → evidence for comparable CQAs
- Sample Cpk below the critical value → examine with scientific judgment
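The lot-count question on slide 26 can be explored by simulation. The sketch below estimates the chance that the sample Cpk of N lots clears a cutoff when the true Cpk is higher; note it uses a plain cutoff for illustration, not the exact alpha-calibrated critical values of Pearn and Chen (1999), and the function name and settings are assumptions.

```python
import numpy as np

def cpk_pass_rate(n_lots, true_cpk, cutoff, n_sim=4000, seed=2):
    """Monte Carlo Pr(sample Cpk exceeds a cutoff).

    The process is centered between symmetric limits with sigma = 1,
    so the true Cpk equals `true_cpk`; measurement error is ignored.
    """
    rng = np.random.default_rng(seed)
    usl, lsl = 3 * true_cpk, -3 * true_cpk
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, 1.0, n_lots)
        m, s = x.mean(), x.std(ddof=1)
        cpk_hat = min(usl - m, m - lsl) / (3 * s)
        hits += cpk_hat > cutoff
    return hits / n_sim
```

The simulation reproduces the qualitative message of the power table that follows: with only 3-5 lots the sample Cpk is so variable that even a genuinely capable process frequently fails to demonstrate it.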

27 Power
Above the critical value: evidence for comparable CQAs. Below: examine further with scientific judgment.

Alpha  Cpk2  K (sigmas mean from limits)  N   Power
0.05   1.33  4                            49  0.80
0.05   1.67  5                            17  0.82
0.05   1.33  4                            10  0.23
0.05   1.33  4                            5   0.13
0.05   1.33  4                            3   0.09
0.05   1.67  5                            10  0.54
0.05   1.67  5                            5   0.25
0.05   1.67  5                            3   0.13

28 Comparability to Range Method
[Plot: six Process 1 lots (P1L1-P1L6) and three Process 2 lots (P2L1-P2L3) shown against the historical data range — but what about the process distribution?]
