

ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations
Tuesday, 26th August 2014 • 11:00-12:30

C29.3
Response adaptive randomization in large phase III confirmative clinical trials with binary outcomes - benefits are unlikely
W Zhao1, VL Durkalski-Mauldin1
1 Medical University of South Carolina, Charleston, United States

In the recent decade, response adaptive randomization (RAR) has been advocated for its benefits in study subject ethics (assigning a higher percentage of subjects to the so-far better performing arm) and trial efficiency (power). Literature on the benefit and cost of using RAR in a real trial is minimal. Purely theoretical analyses of conceptual scenarios of trials with binary outcomes and a fixed sample size indicate that using RAR can minimize the total number of failures and maximize the power. However, our computer simulation studies based on various large confirmative phase III trials in a frequentist setting reveal that the efficiency benefit is trivial, and that the ethical benefit is obtained at the cost of efficiency. More importantly, under the condition of fixed power, using RAR will more likely increase, not decrease, the total number of failures. This result contradicts what many investigators expect from RAR. Further studies demonstrate that when a time trend exists in the trial, using RAR may cause noticeable inflation or deflation of the type I error, which makes interim analysis more complex and ultimately reduces the interpretability of the trial. Therefore, we recommend not using RAR for large confirmative phase III clinical trials in a frequentist setting.
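The comparison described in this abstract can be sketched with a small frequentist simulation. This is an illustrative toy, not the authors' simulation study: it contrasts fixed 1:1 allocation with a simple success-driven allocation rule of the Rosenberger type, reporting mean total failures and empirical power. All parameters (response rates, total sample size, burn-in length, allocation rule) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2014)

def simulate_trial(p_a, p_b, n_total, adaptive, burn_in=20):
    """Simulate one two-arm trial with binary outcomes.

    adaptive=False: fixed 1:1 alternating allocation.
    adaptive=True : after an alternating burn-in, allocate to arm A with
                    probability sqrt(pa_hat)/(sqrt(pa_hat)+sqrt(pb_hat)),
                    a Rosenberger-type success-driven rule (assumed here).
    Returns (total failures, z-statistic for the difference in proportions).
    """
    succ = np.zeros(2)
    n = np.zeros(2)
    for i in range(n_total):
        if not adaptive or n.sum() < 2 * burn_in:
            arm = i % 2
        else:
            # lightly smoothed success-rate estimates
            pa = (succ[0] + 0.5) / (n[0] + 1)
            pb = (succ[1] + 0.5) / (n[1] + 1)
            prob_a = np.sqrt(pa) / (np.sqrt(pa) + np.sqrt(pb))
            arm = 0 if rng.random() < prob_a else 1
        outcome = rng.random() < (p_a if arm == 0 else p_b)
        n[arm] += 1
        succ[arm] += outcome
    failures = n.sum() - succ.sum()
    pa, pb = succ / n
    pooled = succ.sum() / n.sum()
    se = np.sqrt(pooled * (1 - pooled) * (1 / n[0] + 1 / n[1]))
    return failures, (pa - pb) / se

def summarize(adaptive, reps=500, p_a=0.35, p_b=0.25, n_total=400):
    res = [simulate_trial(p_a, p_b, n_total, adaptive) for _ in range(reps)]
    mean_failures = float(np.mean([r[0] for r in res]))
    power = float(np.mean([abs(r[1]) > 1.96 for r in res]))
    return mean_failures, power

f_fix, pow_fix = summarize(adaptive=False)
f_rar, pow_rar = summarize(adaptive=True)
print(f"fixed 1:1 : mean failures {f_fix:.1f}, power {pow_fix:.2f}")
print(f"RAR       : mean failures {f_rar:.1f}, power {pow_rar:.2f}")
```

Comparing the two rows across scenarios (and repeating the exercise with the total sample size adjusted to equalise power) is the kind of experiment that underlies the abstract's conclusion that RAR's gains are small for large fixed-size trials.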
C29.4
Incorporating feasibility assessment in the design of clinical studies
T Jaki1, LV Hampson1
1 Lancaster University, Lancaster, United Kingdom

Many publicly funded clinical trials fail to meet their recruitment timelines, with the consequence that these trials then require an extension of funding in order to complete recruitment. To avoid this scenario, there is a movement by funders towards requiring that larger Phase II and Phase III clinical trials incorporate a feasibility stopping rule, with the aim of establishing early on whether recruitment targets can be met within the planned time frame.
The feasibility evaluation is usually based upon factors that are not of primary interest to the trial (i.e., that do not concern the endpoint of direct clinical interest) and allows for three different actions: continue as planned; adapt recruitment procedures; or abandon the trial. Efficacy data collected during the feasibility phase of the trial contribute towards the final analysis of efficacy.
In this presentation, we will show how ideas from the adaptive designs literature can be used to incorporate feasibility evaluations into the main trial design to ensure that the required type I error rate for testing efficacy is maintained and power is maximised. Simulations are used to illustrate the potential gains in power that follow from using our proposed approach. Optimal boundaries for the feasibility stopping rule are derived which minimise the expected overrun of the trial beyond its planned duration, subject to controlling the probabilities of incorrectly allowing a trial to proceed when the recruitment rate is insufficient, and of incorrectly abandoning a trial that would have gone on to complete in a timely manner.
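A minimal sketch of the three-action feasibility rule described above may help fix ideas. The three actions come from the abstract; the projection rule, thresholds, and Poisson accrual model are illustrative assumptions, not the authors' optimal boundaries.

```python
import numpy as np

rng = np.random.default_rng(7)

def feasibility_check(n_interim, t_interim, n_target, t_max, slack=0.5):
    """Toy rule: project total recruitment time from the observed accrual
    rate; continue if on time, adapt if moderately late, abandon if the
    projected overrun exceeds slack * t_max. (Thresholds are assumed.)"""
    rate = n_interim / t_interim
    t_projected = n_target / rate
    if t_projected <= t_max:
        return "continue"
    if t_projected <= (1 + slack) * t_max:
        return "adapt recruitment"
    return "abandon"

def operating_chars(true_rate, reps=5000, t_interim=6, n_target=300, t_max=24):
    """Estimate the probability of each action under Poisson accrual
    with a given true recruitment rate (subjects per month)."""
    decisions = []
    for _ in range(reps):
        n_interim = max(rng.poisson(true_rate * t_interim), 1)
        decisions.append(feasibility_check(n_interim, t_interim,
                                           n_target, t_max))
    return {d: decisions.count(d) / reps for d in set(decisions)}

# 300 subjects in 24 months needs 12.5/month on average
print("sufficient rate (15/month) :", operating_chars(true_rate=15))
print("insufficient rate (8/month):", operating_chars(true_rate=8))
```

The two error probabilities the abstract controls correspond here to "continue/adapt" under the insufficient rate and "abandon" under the sufficient rate; the authors' contribution is choosing the boundaries optimally rather than ad hoc as above.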
C29.5
Some novel alternatives to parallel group designs for pragmatic clinical trials
R Hooper1, L Bourke1
1 Queen Mary University of London, London, United Kingdom

A before-after comparison in the same participants is a powerful way to evaluate the effect of an intervention, but a clinical trial requires a concurrent control - for example, a parallel groups design with baseline and follow-up assessments in both intervention and control groups.
One alternative which exploits the power of the before-after comparison is the cross-over design, but this assumes the treatment effect from the first period has disappeared by the time the effect is measured in the second period. Cross-over trials are therefore problematic for interventions whose effects are maintained. In a trial where the comparator is routine care, however, it is often reasonable to assume that the effect of a treatment introduced at some period of time following randomisation is independent of that period of time. In this case there are a variety of alternatives to parallel groups and cross-over designs. One approach - the stepped wedge design - has been used extensively.
Stepped wedge designs come with a heavy burden of assessment, however, and require a model for how treatment effects are maintained in order to analyse repeated assessments after introduction of the intervention. An intriguing alternative is to reduce the schedule of assessments in the different randomised groups to a much sparser arrangement. These incomplete unidirectional cross-over designs (the simplest being the recently published 'dog-leg' design) offer the remarkable possibility of more power with fewer assessments than a parallel groups design. Dog-leg designs are likely to be particularly useful for cluster-randomised trials involving repeated cross-sections.
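The power of the before-after comparison that motivates these designs can be illustrated with a small simulation (an illustrative sketch, not from the abstract): when baseline and follow-up are highly correlated within participants, a change-score analysis of a parallel-groups trial has a markedly smaller standard error than a follow-up-only comparison. The correlation, effect size, and sample size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(42)

def trial(n_per_arm, rho, effect=0.3):
    """One parallel-groups trial; each participant has correlated
    baseline and follow-up measurements (bivariate normal)."""
    cov = [[1.0, rho], [rho, 1.0]]
    ctrl = rng.multivariate_normal([0.0, 0.0], cov, n_per_arm)
    trt = rng.multivariate_normal([0.0, effect], cov, n_per_arm)
    # follow-up-only analysis: ignores the baseline entirely
    diff_raw = trt[:, 1].mean() - ctrl[:, 1].mean()
    # change-score analysis: exploits the within-person correlation
    diff_chg = ((trt[:, 1] - trt[:, 0]).mean()
                - (ctrl[:, 1] - ctrl[:, 0]).mean())
    return diff_raw, diff_chg

reps = 4000
est = np.array([trial(n_per_arm=50, rho=0.8) for _ in range(reps)])
sd_raw = est[:, 0].std()
sd_chg = est[:, 1].std()
print(f"SD of follow-up-only estimate: {sd_raw:.3f}")
print(f"SD of change-score estimate  : {sd_chg:.3f}")
```

Cross-over, stepped wedge, and dog-leg designs all try to capture this within-person precision while keeping a concurrent control; they differ, as the abstract explains, in their assumptions about carry-over and in how many assessments they require.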
C30 Adaptive designs II

C30.1
Estimation after blinded sample size reassessment
F Klinglmueller1, M Posch1, F König1, F Miller2
1 Medical University of Vienna, CeMSIIS, Vienna, Austria, 2 Stockholm University, Dep. of Statistics, Stockholm, Sweden

When comparing the means of normally distributed endpoints, the sample size needed to achieve a target power typically depends on nuisance parameters such as the variance. It has been shown that superiority trials in which the sample size is reassessed based on blinded interim estimates of the nuisance parameter achieve the target power regardless of the true nuisance parameter, and that the sample size reassessment has no relevant impact on the type I error rate.
While previous work has focused on the control of the type I error rate, we investigate the properties of point estimates and confidence intervals following blinded sample size reassessment. We show that the maximum likelihood estimates for the mean and variance may be biased and quantify the bias in simulations. Furthermore, we provide a lower bound for the bias of the variance estimate and show by simulation that the coverage probabilities of confidence intervals may lie below their nominal level, especially when first-stage sample sizes are small. Finally, we discuss the impact of these findings for blinded sample size reassessment in clinical trials.
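A rough sketch of the procedure studied in this abstract (an illustrative toy under assumed parameters, not the authors' simulation study): the interim variance is estimated from the pooled data with treatment labels hidden, the sample size is recomputed from it, and the behaviour of the final maximum-likelihood variance estimate is then examined by simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
Z_A, Z_B = 1.959964, 0.841621   # two-sided alpha = 0.05, power = 0.80
DELTA = 0.5                     # assumed treatment effect used for sizing

def required_n(sigma2):
    """Per-arm sample size for a two-sample z-test (normal approximation)."""
    return int(np.ceil(2 * (Z_A + Z_B) ** 2 * sigma2 / DELTA ** 2))

def one_trial(true_sigma=1.0, n1=10, n_max=500):
    """One trial with blinded sample size reassessment after n1 per arm.

    The blinded interim estimate pools both arms without labels, so the
    treatment difference inflates it; the bias of the final ML variance
    estimate is what the simulation below examines.
    """
    x1 = rng.normal(0.0, true_sigma, n1)     # control, first stage
    y1 = rng.normal(DELTA, true_sigma, n1)   # treatment, first stage
    s2_blind = np.concatenate([x1, y1]).var(ddof=1)
    n = max(n1, min(required_n(s2_blind), n_max))  # reassessed per-arm size
    x = np.concatenate([x1, rng.normal(0.0, true_sigma, n - n1)])
    y = np.concatenate([y1, rng.normal(DELTA, true_sigma, n - n1)])
    # final maximum-likelihood (within-group pooled) variance estimate
    return (x.var(ddof=0) + y.var(ddof=0)) / 2

s2 = np.array([one_trial() for _ in range(2000)])
print(f"mean final ML variance estimate: {s2.mean():.3f} (true value 1.000)")
```

Repeating this with smaller first-stage sizes `n1` and tracking confidence-interval coverage alongside the variance estimate mirrors the kind of simulation the abstract reports; the small-sample bias is what makes the coverage fall below its nominal level.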
