Discussion on “Improving precision and power in randomized trials for COVID-19 treatments using covariate adjustment, for binary, ordinal, and time-to-event outcomes” by David Benkeser, Ivan Diaz, Alex Luedtke, Jodi Segal, Daniel Scharfstein, and Michael Rosenblum – LaVange – Biometrics
The article titled “Improving precision and power in randomized trials for COVID-19 treatments using covariate adjustment, for binary, ordinal, and time-to-event outcomes” is a welcome addition to the literature on covariate adjustment in clinical trials conducted for medical research. The authors’ work has the potential to make a substantial impact on drug development through increased use of a greatly underutilized tool for improving the precision of clinical trials, thereby reducing the number of studies that fail due to insufficient power. And the timeliness of this research could not be better, given the number of clinical trials currently planned or already in the field to fight COVID-19. The urgency with which safe and effective treatments are needed in a pandemic, and the competition that inevitably follows when multiple trials are launched at the same time with the same goal (i.e., finding those safe and effective treatments), result in an unusually high need for precisely sized trials optimized to achieve their objectives.
The benefits of covariate adjustment in the context of the much simpler linear model have been known for some time, and the method is popular with clinical trialists for its ability to improve the precision of treatment effect estimates under minimal assumptions. I have already written (LaVange, 2014, 2019) of my dilemma, after joining the FDA in 2011, regarding the lack of a guidance document on covariate adjustment, only to find that a document had been drafted in the early 2000s but had never been published. The guidance was apparently abandoned because, while the method was uncontroversial for linear models, regulators feared that covariate adjustment might be misused in nonlinear models without proper guidance for that more complicated setting. As director of the Office of Biostatistics in the Center for Drug Evaluation and Research, I was able to prioritize an update of this guidance, which was completed shortly after my departure and published in 2019 (FDA, 2019).
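The precision gain under the linear model can be sketched with a small simulation (a hypothetical illustration of my own, not taken from the paper or from any guidance document): when a baseline covariate has correlation ρ with the outcome, the ANCOVA estimator of the treatment effect has sampling variance of roughly (1 − ρ²) times that of the unadjusted difference in arm means.

```python
# Hypothetical sketch: precision gain from linear covariate adjustment
# (ANCOVA) in a 1:1 randomized trial. The outcome y is correlated with a
# baseline covariate x (correlation rho), so adjusting for x shrinks the
# variance of the treatment effect estimate by roughly (1 - rho^2).
import numpy as np

rng = np.random.default_rng(0)
n, rho, n_sims = 200, 0.7, 2000
unadj, adj = [], []
for _ in range(n_sims):
    arm = rng.permutation(np.repeat([0, 1], n // 2))  # 1:1 randomization
    x = rng.standard_normal(n)                        # baseline covariate
    # true treatment effect = 1.0; Var(y) = 1 within each arm
    y = 1.0 * arm + rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    # Unadjusted estimate: difference in arm means
    unadj.append(y[arm == 1].mean() - y[arm == 0].mean())
    # Adjusted estimate: OLS of y on intercept, arm, and x; arm coefficient
    Z = np.column_stack([np.ones(n), arm, x])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    adj.append(beta[1])

ratio = np.var(adj) / np.var(unadj)
print(f"variance ratio (adjusted/unadjusted) = {ratio:.2f}")
# with rho = 0.7, the ratio lands near 1 - rho^2 = 0.51
```

Both estimators are unbiased for the true effect; the entire benefit is in the variance, which is the point made by the FDA guidance.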
A review of the delay in FDA publication of covariate adjustment guidance helps explain the importance of the Benkeser et al. paper to the drug development enterprise. The International Council for Harmonization (ICH) published its guideline, E9 Statistical Principles for Clinical Trials (ICH, 1999), calling for adjustment for covariates, measured before randomization, that are correlated with the primary outcomes of the trial. The objectives of this adjustment were twofold: to improve precision and to adjust for imbalances between treatment groups. The European Medicines Agency (EMA) followed with a Points to Consider document in 2003 and a guidance document in 2015, both providing similar advice on covariate adjustment for trials regulated in the European Union (EU). FDA guidance followed much later – 20 years after ICH E9 – and the main reason was the inability to endorse a simple analytical tool like analysis of covariance when the analysis model is nonlinear. As the FDA guidance makes clear, prespecification of any covariate adjustment is necessary to ensure that the risk of drawing an erroneous conclusion about a drug’s effects is not increased by experimentation with different model fits after the trial has ended. The guidance also makes clear that even if the analysis model is misspecified, the benefits of covariate adjustment under the linear model still apply, and the resulting treatment effect estimates are valid for supporting inference about the drug. Such a claim could not be made in the nonlinear setting, or at least not for the approaches widely accepted in the early 2000s, and this was, in large part, the source of the delay in publishing the guidance.
It should be noted that the FDA guidance is silent on the use of covariate adjustment to correct for imbalances between treatment groups in order to produce unbiased estimates of drug effects. Any difference between treatment groups is random, provided that only pre-randomization covariates are used for adjustment. Although adjusting for such imbalances has the potential to greatly improve the precision of estimates and the power of hypothesis tests about those estimates, unadjusted estimates are still valid for the true effects of the drug. The advantage lies in the improvement in precision (Permutt, 2009). This point is often missed, and more importantly, a misunderstanding of the purpose of covariate adjustment has over time led to its use primarily in small clinical trials, where researchers worry that imbalances between treatment groups could bias study results. In large clinical trials, covariate adjustment is more often seen as a nice, but not essential, component of trial design, the belief being that random imbalances tend to diminish as sample sizes increase. Senn (1989), however, noted for the bivariate normal case that “the imbalance of covariates is of as much concern in large studies as in small ones,” because although the absolute differences in baseline covariates (absolute imbalance) may decrease as sample sizes increase, standardized differences do not, and it is the standardized differences that impact precision. Senn goes on to advocate analysis of covariance with prespecified covariates as best practice for studies of all sizes, regardless of any random imbalances that may be observed in the study.
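Senn’s distinction between absolute and standardized imbalance can be sketched numerically (a hypothetical illustration of my own, not drawn from Senn’s paper): the absolute mean difference in a baseline covariate between arms shrinks at rate 1/√n, while the standardized difference (the difference divided by its standard error) has the same distribution at every sample size.

```python
# Hypothetical sketch of Senn's (1989) observation: as arm size n grows,
# the absolute baseline imbalance between two randomized arms shrinks,
# but the standardized imbalance (difference / its standard error) does
# not -- its distribution is the same regardless of n.
import numpy as np

rng = np.random.default_rng(1)

def imbalance(n, n_sims=4000):
    """Mean |difference| and mean |standardized difference| in a
    standard-normal baseline covariate between two arms of size n."""
    x0 = rng.standard_normal((n_sims, n))
    x1 = rng.standard_normal((n_sims, n))
    diff = x1.mean(axis=1) - x0.mean(axis=1)
    se = np.sqrt(2.0 / n)  # standard error of the difference in means
    return np.mean(np.abs(diff)), np.mean(np.abs(diff / se))

for n in (25, 100, 400):
    abs_imb, std_imb = imbalance(n)
    print(f"n={n:4d}: mean |difference|={abs_imb:.3f}, "
          f"mean |standardized difference|={std_imb:.3f}")
# the absolute imbalance halves each time n quadruples,
# while the standardized imbalance stays essentially constant
```

It is the constant standardized imbalance, not the vanishing absolute imbalance, that governs the precision lost by failing to adjust, which is why the argument applies to large trials just as much as small ones.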
Around the same time, Gary Koch and colleagues were exploring randomization-based methods of covariate adjustment in linear and nonlinear model settings, resulting in a series of publications covering a variety of settings (see, for example, Tangen and Koch, 1999; Saville and Koch, 2013). These randomization-based methods have a particular advantage in large confirmatory clinical trials conducted for regulatory approval, where the primary focus is on hypothesis testing, as minimal assumptions are required for their use. The emphasis of Benkeser et al. on the utility of covariate adjustment in large trials follows this earlier work by Senn and Koch with a consistent message. By expanding the analytical tools available for ordinal outcomes, and further providing performance results for covariate-adjusted estimators for binary and time-to-event outcomes, the use of covariates in large clinical trials, where endpoints more often fall into these categories, can be expected to increase dramatically. In my opinion, this is the major contribution of the article and the one that makes me most excited to see it in publication!
The authors provide results of extensive simulations for a variety of estimands of interest when the primary clinical outcome is binary, ordinal, or time-to-event. The control arm distributions were based on actual data from two highly relevant sources, and large gains in power or relative efficiency are reported for all estimands examined. Looking at the master protocols of the National Institutes of Health Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) initiative launched over the past year, a range of outcomes are specified, including disease severity assessed on seven- or eight-point ordinal scales, the number of symptoms, time on ventilation or in an intensive care unit, time to recovery, and mortality. With the methods the authors propose, covariate adjustment could be prespecified in the planned analyses of all key primary and secondary endpoints in these trials, thereby increasing the power of tests and improving the precision of estimates that characterize each important dimension of pandemic outcomes.
Not enough can be said about the benefit of producing valid estimates of drug effects even in the presence of model misspecification. Prespecification of the statistical analysis plan for a clinical trial provides the basis for FDA’s assurance that sponsors are not presenting only the most promising of a range of exploratory results in their regulatory submissions. If model misspecification cannot be assessed until treatment codes are known and preliminary analyses have been performed, such prespecification is not possible. The authors provide a framework for drug developers to optimize their planned analyses without requiring post hoc model fitting. With the publication of this article, there should no longer be any obstacle to the use of covariate adjustment in the analysis of endpoints relevant to patients’ health and well-being. In a pandemic, realizing the benefits of covariate adjustment to reduce sample sizes and obtain earlier answers about promising therapies is invaluable. The authors’ contributions in this regard are to be welcomed.