When an RCT is not possible, researchers may opt for a single-arm trial (SAT), which lacks an internal control group but may be augmented with external comparator (EC) patients drawn from real-world data. When an EC study is submitted for regulatory purposes, it typically does not take the place of a pivotal trial; rather, it provides additional information that improves the ability to contextualize and interpret the SAT findings. EC studies can also support Health Technology Assessment submissions and even internal decision-making.
As the EC design has been gaining traction in recent years, it is important to evaluate the strengths and limitations of this study design, in particular by assessing the impact of unmeasured confounding and of missing data in prognostic factors (covariates).
Evaluating impact in two EC case studies
Unmeasured confounding occurs when a variable that influences both the treatment and the outcome is not measured or accounted for in the study. This can lead to biased results when trying to establish a causal relationship between treatment and outcome. Missing data in important prognostic factors can likewise introduce bias. Both issues are examined in the new research paper "Examining the Effect of Missing Data and Unmeasured Confounding on External Comparator Studies: Case Studies and Simulations," which reports results from two actual EC case studies and additional simulations.
Researchers constructed the EC case studies by taking the treatment arms from two RCTs – one multiple myeloma (MM) trial and one metastatic hormone-sensitive prostate cancer (mHSPC) trial – and augmenting each with a real-world EC cohort.
With overall survival as the main endpoint, researchers assessed the effects of unmeasured confounding and missing data in the case studies by comparing estimated EC treatment effects against the RCT treatment effects. In the simulations, the bias could be assessed directly, because the true treatment effect was known in advance from the simulation set-up.
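The paper's actual simulation code is not shown here, but a minimal sketch (all parameters and numbers are hypothetical, not taken from the study) illustrates the general idea: generate survival data with a known treatment effect and a confounder that drives both treatment assignment and hazard, then compare the estimate that ignores the confounder with one that adjusts for it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical set-up: a binary confounder z raises both the hazard
# and the probability of receiving treatment.
z = rng.binomial(1, 0.5, n)
t = rng.binomial(1, np.where(z == 1, 0.7, 0.3))

true_log_hr = -0.5                                 # known true treatment effect
rate = 0.1 * np.exp(true_log_hr * t + 0.8 * z)     # exponential hazards
time = rng.exponential(1.0 / rate)                 # uncensored survival times

def log_hr(mask):
    """Exponential MLE of the log hazard ratio: log(events / person-time) per arm."""
    trt, ctl = mask & (t == 1), mask & (t == 0)
    return np.log(trt.sum() / time[trt].sum()) - np.log(ctl.sum() / time[ctl].sum())

naive = log_hr(np.ones(n, dtype=bool))                # z treated as unmeasured
adjusted = np.mean([log_hr(z == k) for k in (0, 1)])  # stratified on z

print(f"true {true_log_hr:.2f}, naive {naive:.2f}, adjusted {adjusted:.2f}")
```

Because the true log hazard ratio is fixed in the data-generating process, the bias of each estimator can be read off directly: the naive estimate is pulled toward the null, while stratifying on the confounder recovers the truth.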
Measuring the impact of bias
The results of the simulations showed that, as expected, the bias increased with the magnitude of unmeasured confounding, and exact numerical estimates of this relationship could be derived, yielding a quantitative rather than merely qualitative understanding. Concretely, the bias in the log hazard ratio was approximately −0.10 per additional missing covariate for the MM case study and approximately −0.06 for the mHSPC case study.
For missing data scenarios (as opposed to unmeasured confounding scenarios), the study showed that the choice of analysis approach matters for minimizing bias.
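The paper's missing-data analyses are not reproduced here, but a small sketch (hypothetical numbers, and a continuous outcome instead of survival for simplicity) shows why the analysis approach matters: under the same missingness mechanism, complete-case analysis and naive mean imputation can give very different answers for the same treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical set-up: prognostic covariate x influences both treatment
# assignment and a continuous outcome y; the true treatment effect is 1.0.
x = rng.normal(0.0, 1.0, n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))
y = 1.0 * t + 2.0 * x + rng.normal(0.0, 1.0, n)

# x is missing far more often in the control (external) arm:
# missingness depends only on t, i.e. missing at random given treatment.
miss = rng.random(n) < np.where(t == 0, 0.5, 0.1)
x_obs = np.where(miss, np.nan, x)

def treatment_effect(xv, tv, yv):
    """OLS coefficient on treatment, adjusting for the covariate."""
    X = np.column_stack([np.ones_like(yv), tv, xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0][1]

cc = ~miss
complete_case = treatment_effect(x[cc], t[cc], y[cc])
mean_imputed = treatment_effect(np.where(miss, np.nanmean(x_obs), x_obs), t, y)

print(f"complete-case {complete_case:.2f}, mean-imputed {mean_imputed:.2f}")
```

In this particular set-up, missingness depends only on treatment (which is in the model), so the complete-case estimate stays close to the true effect of 1.0, while crude mean imputation dilutes the covariate adjustment and substantially biases the treatment effect; which approach performs best depends on the missingness mechanism at hand.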
The case studies provided valuable evidence about which kinds of EC studies may face greater challenges than others. For example, researchers designing a new EC study should consider whether most key eligibility criteria, as well as key prognostic factors, are available in a specific real-world dataset.
Valuable lessons for future EC studies
The effects of unmeasured confounding and missing data can be quantified precisely, which helps assess the robustness of the EC design for regulatory and Health Technology Assessment submissions.
Furthermore, the practical examples and simulations demonstrate that EC studies require sufficient availability of important baseline data. This highlights the importance of conducting in-depth feasibility assessments and obtaining statistical, medical and epidemiological expert input to determine and assess the availability and quality of critical data. Key considerations include eligibility criteria, covariates and endpoints, which should be evaluated to determine whether candidate external datasets are robust and comprehensive enough to serve as an EC data source.
Gerd Rippin is Senior Director, Biostatistics, Statistics Services, IQVIA.