A Bayesian comparative effectiveness trial in action: developing a platform for multisite study adaptive randomization.
ABSTRACT: In the last few decades, the number of trials using Bayesian methods has grown rapidly. Publications prior to 1990 included only three clinical trials that used Bayesian methods, but that number jumped to 19 in the 1990s and to 99 from 2000 to 2012. While this literature provides many examples of Bayesian Adaptive Designs (BAD), none of the available papers walks the reader through the detailed process of conducting a BAD. This paper fills that gap by describing the BAD process used for one comparative effectiveness trial (Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations), a process that can be generalized for use by others. A BAD was chosen with efficiency in mind: response-adaptive randomization allows the potential for substantially smaller sample sizes and can yield faster conclusions about which treatment or treatments are most effective. An Internet-based electronic data capture tool, which features a randomization module, facilitated data capture across study sites, and an in-house computational software program was developed to implement the response-adaptive randomization. A process for adapting randomization with minimal interruption to study sites was developed: a new randomization table can be generated quickly and integrated seamlessly into the data capture tool. This manuscript is the first to detail the technical process used to conduct a multisite comparative effectiveness trial using adaptive randomization. Comparative effectiveness trials are an important opportunity for the application of Bayesian methods, and the specific case study presented in this paper can serve as a model for conducting future clinical trials using a combination of statistical software and a web-based application. ClinicalTrials.gov Identifier: NCT02260388, registered on 6 October 2014.
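The abstract does not reproduce the allocation algorithm itself. As a minimal sketch, assuming a common form of Bayesian response-adaptive randomization (allocating in proportion to each arm's posterior probability of being best under a Beta-Binomial model), the updated probabilities and a fresh randomization table might be computed as follows; the uniform priors, the Monte Carlo estimation, and all function names are illustrative assumptions, not the trial's actual in-house software:

```python
import numpy as np

def posterior_prob_best(successes, failures, n_draws=10_000, rng=None):
    """Estimate each arm's posterior probability of being best under
    independent Beta(1 + s, 1 + f) posteriors (uniform priors)."""
    rng = rng or np.random.default_rng()
    draws = rng.beta(1 + np.asarray(successes),
                     1 + np.asarray(failures),
                     size=(n_draws, len(successes)))
    wins = np.bincount(draws.argmax(axis=1), minlength=len(successes))
    return wins / n_draws

# Interim data from a hypothetical three-arm trial.
alloc = posterior_prob_best(successes=[12, 18, 9], failures=[18, 12, 21])
# Generate the next randomization table from the updated probabilities.
table = np.random.default_rng(seed=42).choice(len(alloc), size=60, p=alloc)
print(alloc)        # updated allocation probabilities
print(table[:12])   # first block of the new randomization table
```

In a workflow like the one described, a table regenerated this way at each interim analysis would then be loaded into the data capture tool's randomization module.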
Project description: Background: Determination of comparative effectiveness in a randomized controlled trial requires consideration of an intervention's comparative uptake (or acceptance) among randomized participants and the intervention's comparative efficacy among participants who use their assigned intervention. If acceptance differs across interventions, then simple randomization of participants can result in post-randomization losses that introduce bias and limit statistical power. Methods: We develop a novel preference-adaptive randomization procedure in which the allocation probabilities are updated based on the inverse of the relative acceptance rates among randomized participants in each arm. In simulation studies, we determine the optimal frequency with which to update the allocation probabilities based on the number of participants randomized. We illustrate the development and application of preference-adaptive randomization using a randomized controlled trial comparing the effectiveness of different financial incentive structures on prolonged smoking cessation. Results: Simulation studies indicated that preference-adaptive randomization performed best with frequent updating, accommodated differences in acceptance across arms, and performed well even if the initial values for the allocation probabilities were not equal to their true values. Updating the allocation probabilities after randomizing each participant minimized imbalances in the number of accepting participants across arms over time. In the smoking cessation trial, unexpectedly large differences in acceptance among arms required us to limit the allocation of participants to less acceptable interventions. Nonetheless, the procedure achieved equal numbers of accepting participants in the more acceptable arms, and balanced the characteristics of participants across assigned interventions. Conclusions: Preference-adaptive randomization, coupled with analysis methods based on instrumental variables, can enhance the validity and generalizability of comparative effectiveness studies. In particular, preference-adaptive randomization augments statistical power by maintaining balanced sample sizes in efficacy analyses, while retaining the ability of randomization to balance covariates across arms in effectiveness analyses. Trial registration: ClinicalTrials.gov, NCT01526265; registered on 31 January 2012.
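A minimal sketch of the update rule described above, assuming allocation probabilities proportional to the inverse of each arm's observed acceptance rate and updated after every participant (the frequency the simulations favored); the pseudo-counts and simulated acceptance rates are illustrative assumptions, not trial data:

```python
import numpy as np

def preference_adaptive_probs(offered, accepted):
    """Allocation probabilities proportional to the inverse of each
    arm's observed acceptance rate (accepted / offered)."""
    rates = np.asarray(accepted, dtype=float) / np.asarray(offered, dtype=float)
    weights = 1.0 / rates                 # inverse of relative acceptance
    return weights / weights.sum()

# Update after randomizing every participant.
rng = np.random.default_rng(0)
offered = np.ones(3)                      # pseudo-counts avoid division by zero
accepted = np.ones(3)
true_acceptance = np.array([0.9, 0.6, 0.3])   # illustrative acceptance rates
for _ in range(300):
    arm = rng.choice(3, p=preference_adaptive_probs(offered, accepted))
    offered[arm] += 1
    accepted[arm] += rng.random() < true_acceptance[arm]
print(accepted)  # accepting participants end up roughly balanced across arms
```

As the trial's experience with unexpectedly low acceptance suggests, a production version would also need to cap the probability allocated to the least acceptable arms.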
Project description: Randomizing patients among treatments with equal probabilities in clinical trials is the established method for obtaining unbiased comparisons. In recent years, motivated by ethical considerations, many authors have proposed outcome-adaptive randomization, wherein the randomization probabilities are unbalanced, based on interim data, to favor treatment arms having more favorable outcomes. While there has been substantial controversy regarding the merits and flaws of adaptive versus equal randomization, there has not yet been a systematic simulation study in the multi-arm setting. A simulation study was conducted to evaluate four different Bayesian adaptive randomization methods and compare them to equal randomization in five-arm clinical trials. All adaptive randomization methods included an initial burn-in with equal randomization and some combination of other modifications to avoid extreme randomization probabilities. Trials with or without a control arm were evaluated, using designs that may terminate arms early for futility and select one or more experimental treatments at the end. The designs were evaluated under a range of scenarios and sample sizes. For trials with a control arm and maximum sample size 250 or 500, several commonly used adaptive randomization methods have very low probabilities of correctly selecting a truly superior treatment. Of those studied, the only adaptive randomization method with desirable properties has a burn-in with equal randomization and thereafter restricts randomization probabilities to the interval 0.10-0.90. Compared to equal randomization, this method has a favorable sample-size imbalance but a lower probability of correctly selecting a superior treatment. In multi-arm trials, several commonly used adaptive randomization methods give much lower probabilities of selecting superior treatments than equal randomization. Regardless of randomization method, conducting a multi-arm trial without a control arm may lead to very low probabilities of selecting any superior treatment if the differences between the treatment success probabilities are small.
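A minimal sketch of the one design feature the simulations favored: equal randomization during a burn-in, followed by adaptive probabilities restricted to the interval 0.10-0.90. The burn-in length, the clip-and-renormalize scheme, and the function name are assumptions rather than the paper's exact specification:

```python
import numpy as np

def restricted_probs(p_best, n_randomized, burn_in=50, lo=0.10, hi=0.90):
    """p_best: each arm's posterior probability of being best (sums to 1)."""
    k = len(p_best)
    if n_randomized < burn_in:
        return np.full(k, 1.0 / k)        # equal-randomization burn-in
    p = np.asarray(p_best, dtype=float)
    for _ in range(k):                    # a few clip/renormalize passes
        p = np.clip(p, lo, hi)            # converge toward the 0.10-0.90 band
        p = p / p.sum()
    return p

# Five-arm example with one clearly superior arm.
print(restricted_probs([0.90, 0.04, 0.03, 0.02, 0.01], n_randomized=120))
```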
Project description: Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it requires participants to comply with a randomly assigned intervention regardless of their preference. The randomized clinical trial may therefore impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants to express a preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other