
1 An Overview of Matching-Adjusted Indirect Comparisons (MAIC) in Single-Arm Clinical Trials With Practical Recommendations and Potential Challenges
August 2, 2018. Presented at the 2018 Joint Statistical Meetings, Vancouver, British Columbia, Canada

2 Outline Introduction to matching-adjusted indirect comparisons (MAIC) focused on single-arm studies Challenges conducting MAIC in single-arm studies Simulation to illustrate challenges conducting MAIC in single-arm studies

3 Treatment Comparisons
Suppose that we want to compare Treatment A with Treatment B.
Direct treatment comparison: ideally, a 2-arm, randomized, controlled trial.
More typically, treatments are compared across studies via (anchored) meta-analysis.
But what do we do if we have 2 single-arm studies? An indirect treatment comparison? Traditional meta-analysis approaches are not feasible, naïve treatment comparisons are not recommended, and IPD are typically not available for both studies.
IPD = individual patient data.

4 Why Compare Single-Arm Studies?
Important for health technology assessments (HTAs): payers need comparative effectiveness as input to cost-effectiveness models.
Health agencies base their recommendations on evidence, even if only single-arm studies are available.a
Swallow et al.b commented that there is limited guidance from HTAs regarding how to compare single-arm studies.
Important for general clinical knowledge: single-arm studies are common in certain therapeutic areas such as oncology, and comparisons allow assessment of new therapies at early phases of clinical development.
a Purser et al. ISPOR 19th Annual International Meeting; 2014. b Swallow et al. ISPOR 20th Annual International Meeting; 2015.

5 Why Use MAIC When Comparing Single-Arm Studies?
What’s the problem with naïve treatment comparisons? It’s important to adjust for cross-trial differences.
What can we do? Ideally we would have individual patient data (IPD) from both studies and run the analysis using all IPD, but this rarely happens. It is more common to have aggregate patient data from 1 study (e.g., a publication) and IPD from the other study.
MAICa is one way to level the playing field when comparing interventions from single-arm studies.
a Signorovitch et al. Pharmacoeconomics. 2010;28(10):

6 What’s MAIC?
MAIC is a reweighting method, similar to inverse propensity score weighting, that adjusts for baseline differences between trials.
Each patient in the IPD trial is assigned a weight such that the weighted mean baseline characteristics of the IPD trial match those reported in the APD trial.
Each patient’s weight is equal to his/her estimated odds of enrolling in the APD trial versus the IPD trial.
Logistic regression using the “method of moments” is used to balance the mean covariate values between the trials (implemented via SAS IML).
Results of the IPD study are reanalyzed using the weighted IPD (with standard errors obtained using a robust sandwich estimator).
APD = aggregate patient data.
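The method-of-moments weighting step can be sketched in code. For a single binary matching variable, centering the covariate at the aggregate-trial mean gives the balancing equation a closed-form solution. The function below is a minimal pure-Python illustration; the function name and the one-covariate restriction are ours, not the presenters' (their analysis used SAS IML and matched many covariates simultaneously).

```python
import math

def maic_weights_binary(x, target_prop):
    """Method-of-moments MAIC weights for one binary covariate.

    With the IPD covariate centered at the aggregate (APD) mean,
    the balance condition solves in closed form: alpha is the log
    odds ratio of the APD prevalence versus the IPD prevalence.
    """
    p = sum(x) / len(x)                 # IPD prevalence
    t = target_prop                     # APD prevalence to match
    alpha = math.log((t / (1 - t)) / (p / (1 - p)))
    # weight_i = exp(alpha * (x_i - t)): estimated odds of enrolling
    # in the APD trial versus the IPD trial, up to a constant
    return [math.exp(alpha * (xi - t)) for xi in x]

# Example: 40% of IPD patients have the characteristic; APD reports 50%
x = [1] * 40 + [0] * 60
w = maic_weights_binary(x, 0.50)
weighted_mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
# weighted_mean is now 0.50, matching the APD summary value
```

After weighting, the IPD outcomes would be reanalyzed with these weights, as described above.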

7 NICE DSU Technical Document: Single-Arm Studies
Propensity Score Reweighting (Unanchored Methods)
1a. Create a logistic propensity score model, which includes all effect modifiers and prognostic variables. This is equivalent to a model on the log of the weights: log(w_i) = α0 + α1ᵀ X_i.
1b. Estimate the weights using the method of moments to match effect-modifier distributions between trials. This is equivalent to minimizing Σ_{i=1}^{N_B(B)} exp(α1ᵀ X_i) when X̄^(C)_EM = 0 (i.e., with covariates centered at the aggregate-trial means).
2. Predict outcomes on treatment B in the C trial by reweighting the outcomes of the B individuals: Ȳ_B(C) = Σ_{i=1}^{N_B(B)} Y_i(B) w_i / Σ_{i=1}^{N_B(B)} w_i.
3. Form the unanchored indirect comparison in the C population as: Δ̂_BC(C) = g(Ȳ_C(C)) − g(Ȳ_B(C)).
4. Calculate standard errors using a robust sandwich estimator, bootstrapping, or Bayesian techniques.
5. Provide evidence that absolute outcomes can be predicted with sufficient accuracy in relation to the relative treatment effects, and present an estimate of the likely range of residual systematic error (Section and Appendix B). If this evidence cannot be provided or is limited, then state that the amount of bias in the indirect comparison is likely to be substantial, and could even exceed the magnitude of the treatment effects being estimated.
6. If justified, use the shared effect modifier assumption to transport the Δ̂_BC(C) estimate into the target population for the decision. Otherwise, comment on the representativeness of the C population to the true target population.
7. Present the distribution of estimated weights and the effective sample size.
NICE = National Institute for Health and Care Excellence. DSU = Decision Support Unit.
Phillippo et al. DSU technical support document, Figure 5; main differences between unanchored and anchored MAIC methods bolded.
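Step 7 calls for reporting the weight distribution and the effective sample size. A standard ESS approximation in the MAIC literature (not spelled out on the slide) is (Σw)² / Σw²; a quick sketch:

```python
def effective_sample_size(weights):
    """Approximate effective sample size after weighting:
    ESS = (sum w)^2 / sum(w^2). Equal weights give ESS = n;
    highly skewed weights shrink ESS, flagging an unstable comparison."""
    s = sum(weights)
    return s * s / sum(wi * wi for wi in weights)

print(effective_sample_size([1.0] * 100))              # 100.0: no loss
print(effective_sample_size([10.0] * 5 + [0.1] * 95))  # far below 100
```

A large drop from n to ESS is exactly the kind of diagnostic step 7 asks analysts to present.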

8 Practical Challenges Conducting MAIC, Particularly in Single-Arm Studies
Single-arm studies often have small sample sizes.
Entry criteria: examine for differences; may need to subset the IPD if the APD entry criteria are more restrictive.
Outcome variable(s): multiple data updates (e.g., overall survival) make it important to use a similar timeframe/follow-up period, with similar definitions (e.g., RECIST) and timing of assessments.
Matching variables (IPD vs. APD): available and presented consistently? Prognostic? Effect modifier? Examine the distribution.
Algorithm performance: did it converge and balance trial characteristics? Examine the distribution of patient weights.
Generalizability of findings (e.g., what population?): it may be difficult to estimate residual bias in these rarer populations.

9 Focus on the Impact of Matching Variables in MAIC
While all steps are important when conducting MAIC, today we will focus our attention on the practical challenges of the matching variables and their impact on MAIC results, using a simple simulation of single-arm oncology studies.

10 Simulation: “Typical” Oncology Setting With Single-Arm Studies
Study 1: IPD. Study 2: aggregate data from publication (APD).
In each study, n = 100 patients; it is typical for single-arm studies to have smaller sample sizes.a
Outcome: objective response (yes, no), common as a regulatory endpoint in single-arm studies in oncology.a
Baseline characteristics: 10 potential categorical baseline matching variables presented consistently across studies:
Age (≤ 65, > 65); race (white, nonwhite); prior surgery (yes, no); prior radiotherapy (yes, no); body mass index (≤ 30, > 30); prior adjuvant therapy (yes, no); disease-free interval (< 12, ≥ 12 months); ECOG performance status (0, 1, 2); comorbidities (hypertension, depression, other, none); metastatic disease (visceral, nonvisceral).
Disease-free interval was set as a prognostic variable (i.e., ≥ 12 months positively associated with response); the remaining variables were not associated with the outcome.
ECOG = Eastern Cooperative Oncology Group. a Oxnard et al. JAMA Oncol. 2016;2(6):
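A single replicate of the IPD study described above might be generated as follows. The prevalence and response probabilities here are illustrative assumptions, not the presenters' actual specification; only the structure (one prognostic disease-free-interval variable driving response) follows the slide.

```python
import random

random.seed(2018)

def simulate_ipd(n=100):
    """Generate one replicate IPD study loosely following the slide:
    only disease-free interval (DFI >= 12 months) is prognostic for
    objective response. All probabilities below are illustrative
    assumptions, not the authors' specification."""
    patients = []
    for _ in range(n):
        dfi_ge_12 = random.random() < 0.40        # assumed DFI prevalence
        # DFI >= 12 months positively associated with response (prognostic)
        p_response = 0.50 if dfi_ge_12 else 0.30  # assumed response rates
        patients.append({"dfi_ge_12": dfi_ge_12,
                         "response": random.random() < p_response})
    return patients

ipd = simulate_ipd()
response_rate = sum(p["response"] for p in ipd) / len(ipd)
```

The other nine nonprognostic variables would be drawn independently of the outcome in the same way.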

11 Simulation: APD and Prematch IPD Distribution Specifications
[Table: distribution specifications for objective response and the baseline characteristics in the IPD and APD studies.] BMI = body mass index.

12 Simulation: Exercise to Examine Matching Algorithm Performance
2 single-arm studies. Study 1: IPD. Study 2: aggregate data from publication (APD).
Create 1,000 replicate IPD studies, with baseline characteristics and the outcome variable randomly assigned following the expected values of the specifications from the previous slide.
Run MAIC on each IPD replicate study to “match” the summary baseline values of the APD study.
After MAIC, examine whether the distribution of the weighted IPD baseline characteristics “matches” the APD summary baseline values, using a criterion of 1 percentage point for each baseline characteristic.
Summarize across the 1,000 replicate studies: the number of replicates where all 10 matching variables are within 1 percentage point.
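The 1-percentage-point balance check applied to each replicate can be sketched as a simple predicate (function and variable names are ours):

```python
def within_tolerance(weighted_props, apd_props, tol=0.01):
    """Balance criterion from the slide: every weighted IPD baseline
    proportion must fall within `tol` (1 percentage point) of the
    corresponding APD summary value."""
    return all(abs(w - a) <= tol for w, a in zip(weighted_props, apd_props))

# Hypothetical weighted proportions for three binary matching variables
apd_props = [0.50, 0.30, 0.60]
print(within_tolerance([0.498, 0.305, 0.605], apd_props))  # True
print(within_tolerance([0.480, 0.305, 0.605], apd_props))  # False: 2-point gap
```

A replicate counts as “matched” only when this check passes for all 10 matching variables at once.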

13 Simulation Results: Percentage With All Matching Variables Within 1 Percentage Point
Only ~10% of replicates matched on all 10 variables; ~90% did not match.
What’s happening? ~50% of replicates were not balanced on 3 or fewer variables, and ~75% did not balance on disease-free interval (reminder: the most imbalanced variable, but also the variable associated with the outcome).

14 What’s the Issue? Cross-frequencies of all matching variables lead to zero cells
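The zero-cell problem can be made concrete by counting covariate patterns: with the slide-10 variables there are far more joint cells than patients, so in an n = 100 study most cells are necessarily empty and exact balance can become infeasible. A quick sketch (the random fill is purely illustrative):

```python
import random

random.seed(7)

# Ten categorical matching variables as on slide 10: eight binary,
# ECOG with 3 levels, comorbidity category with 4 levels
levels = [2] * 8 + [3, 4]

possible_cells = 1
for k in levels:
    possible_cells *= k          # 2**8 * 3 * 4 = 3072 joint cells

n = 100
patients = [tuple(random.randrange(k) for k in levels) for _ in range(n)]
observed_cells = len(set(patients))

print(possible_cells)            # 3072
print(observed_cells)            # at most 100 -> over 96% of cells are empty
```

With so many empty cross-frequency cells, some APD covariate combinations have no IPD counterpart to upweight.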

15 What to Do When the Algorithm Does Not Balance the Matching Variables
Some researchers suggest not conducting MAIC; however, this leaves only a naïve comparison.
Other researchers recommend limiting the variables and/or focusing on prognostic and effect-modifier variables. What if we limit the number of matching variables from 10 to 5? What if we limit to the known prognostic variable(s)?
Other researchers recommend loosening the criteria when examining whether the algorithm balanced the baseline matching variables. What if we loosen the criterion from 1 percentage point to 3 percentage points for all matching variables? Should the criteria depend on how similar the trials are in the first place?

16 Simulation Results: Alternative Approaches
Alternative approaches examined:
Criteria set to 3 percentage points for all variables.
Limit to the first 5 nonprognostic matching variables* at the 1-percentage-point criterion for all variables.
Limit to 1 prognostic variable at the 1-percentage-point criterion.
*Age, race, prior surgery, prior radiation, BMI, and prior adjuvant therapy.

17 Simulation Results: Impact on Outcome
[Figure: box-and-whisker plots of the % objective response across the IPD replicates after matching, under 3 approaches: (1) original approach, all 10 matching variables at the 1-percentage-point criterion; (2) alternative approach, all 10 variables at the 3-percentage-point criterion; (3) alternative approach, limited to 5 nonprognostic variables at the 1-percentage-point criterion. In each panel, the APD (50%) and IPD (37%) lines represent the objective response rates before matching.]

18 Simulation Results: Impact on Outcome
[Figure: box-and-whisker plots of the % objective response across the IPD replicates after matching, under 3 further approaches: (4) alternative approach, limited to 1 prognostic variable at the 1-percentage-point criterion; (5) alternative approach, limited to 9 nonprognostic variables at the 1-percentage-point criterion*; (6) alternative approach, all 10 matching variables with no matching criteria*. In each panel, the APD (50%) and IPD (37%) lines represent the objective response rates before matching.]
*Not previously reported; introduced for comparison purposes.

19 Conclusion
MAIC is a useful method for comparing 2 single-arm studies when you have access to IPD from 1 study and APD from the other.
Applying MAIC to single-arm studies carries its own set of unique challenges, including the lack of a common comparator arm and small sample sizes.
Simulations demonstrated a wide range of variability in adjusted estimates related to the number and content of matching variables and the closeness of matches.
Careful consideration should be given to choosing which variables will be used for balancing.

20 Thank You. Questions?
Dawn Odom, MS; Lawrence Rasouliyan, MPH; Molly Purser, PhD

