

84 ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations
Wednesday, 27th August 2014 • 16:00-17:30

Results: The design chosen was: 100 patients randomised to one of 6 dexamethasone doses or placebo; 28% allocated to placebo; 5 evenly spaced adaptations; adaptation criterion based on the precision of the estimated response at the ED95 (the minimum dose with near-maximal efficacy). Averaged across scenarios, this design gave statistical power of 93.8% (95% confidence interval 91.9%, 95.8%).
Conclusion: Adaptive designs offer flexibility and efficiency. Our integrated approach, using SAS to control simulations by executing the analysis in WinBUGS, is a practical tool for their development.

C48.2
Sample size optimization for phase II/III drug development programs
M Kirchner1, M Kieser1, H Götte2, A Schüler2
1 Institute of Medical Biometry and Informatics, Heidelberg, Germany, 2 Merck KGaA, Darmstadt, Germany

About 50% of development programs in phase III do not obtain regulatory approval (Arrowsmith, 2011). Usually, the sample size of a phase III trial is based on the treatment effect estimated from phase II data. As the true treatment effect is uncertain, a high intended statistical power for the phase III trial does not necessarily translate into a high success probability. Hence, the variability of the estimate has to be considered when sizing a study. However, there is still a lack of methodology for sample size calculation across a phase II/III program including go/no-go decisions after phase II. We investigate the impact of the uncertainty about the treatment effect estimate obtained from phase II trials on the sample size of subsequent phase III studies. Success probabilities of the complete phase II/III program are evaluated under consideration of the program-wise sample size.
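The evaluation just described can be sketched as a small simulation: draw the phase II estimate, apply a go/no-go rule, size phase III from that estimate, and check whether phase III succeeds. All numbers and the decision rule below are illustrative assumptions, not the program settings studied in the talk.

```python
import math
import random

random.seed(1)

# Hypothetical program settings (illustrative only)
TRUE_EFFECT = 0.25      # true standardised treatment effect
N_PHASE2 = 100          # patients per arm in phase II
GO_THRESHOLD = 0.15     # go to phase III only if the phase II estimate exceeds this
ALPHA_Z = 1.96          # two-sided 5% significance level
POWER_Z = 0.8416        # 80% intended power

def phase3_n(effect_estimate):
    """Per-arm phase III sample size for a two-arm comparison of means
    (unit variance), sized at 80% power using the *estimated* effect."""
    return math.ceil(2 * ((ALPHA_Z + POWER_Z) / effect_estimate) ** 2)

def simulate_program(n_sim=20000):
    """Program-wise success probability: a no-go counts as a non-success."""
    successes = 0
    for _ in range(n_sim):
        # Phase II estimate: normal around the true effect, SE = sqrt(2/n)
        est = random.gauss(TRUE_EFFECT, math.sqrt(2 / N_PHASE2))
        if est <= GO_THRESHOLD:
            continue  # no-go after phase II
        n3 = phase3_n(est)
        se3 = math.sqrt(2 / n3)
        # Phase III outcome is driven by the TRUE effect, not the estimate
        z = random.gauss(TRUE_EFFECT, se3) / se3
        if z > ALPHA_Z:
            successes += 1
    return successes / n_sim

print(f"program-wise success probability: {simulate_program():.3f}")
```

Because optimistic phase II estimates pass the go threshold more often and then yield undersized phase III trials, the simulated success probability typically falls below the 80% intended power, which is the abstract's central point.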
In order to optimize program-wise planning, utility as a function of phase II sample size is calculated for different scenarios. It is demonstrated that the go/no-go decision after phase II and the size of the phase II trials strongly influence the distribution of the phase III sample size and the utility. Recommendations are given concerning an adequate choice of phase II sample size taking these aspects into account.
In summary, the presented methods for program-wise combined planning of phase II and III trials may help to improve the calculation of the sample size for phase II and phase III trials under the aim of reaching high success probabilities.

C48.3
Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process
S Bujkiewicz1, JR Thompson1, KR Abrams1
1 University of Leicester, Leicester, United Kingdom

Surrogate endpoints are increasingly being investigated as candidate endpoints in the drug development process where a primary outcome of interest may be too costly or too difficult to measure, or may require a long follow-up time. A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints as predictors of the target outcome. Bivariate meta-analytical methods can be used to predict the target outcome from the surrogate endpoint (while taking into account the uncertainty around the surrogate outcome) as well as to combine evidence on both outcomes to "borrow strength" across outcomes when evaluating new health technologies. Extensions to multivariate models will be discussed, aiming to include multiple surrogate endpoints with a potential benefit of increasing the precision of predictions.
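The bivariate prediction step described above can be sketched with a conditional-normal calculation: given fitted between-study means, SDs and correlation for the surrogate and target effects, predict the target effect in a new study from its observed surrogate effect. All numerical values and the function name below are illustrative assumptions, not results from the paper.

```python
import math

# Hypothetical fitted bivariate-normal summary from a meta-analysis:
# effects on the surrogate (y1) and the target outcome (y2) are assumed
# jointly normal across studies. All numbers are illustrative.
mu1, mu2 = -0.30, -0.20        # mean effects: surrogate, target
tau1, tau2 = 0.15, 0.10        # between-study SDs
rho = 0.85                     # between-study correlation

def predict_target(y1_new, se1_new=0.0):
    """Predict the target-outcome effect in a new study from its observed
    surrogate effect y1_new, propagating the surrogate's own standard
    error se1_new (conditional-normal prediction)."""
    slope = rho * tau2 / tau1
    mean = mu2 + slope * (y1_new - mu1)
    # conditional variance plus uncertainty carried over from the surrogate
    var = tau2**2 * (1 - rho**2) + (slope * se1_new) ** 2
    return mean, math.sqrt(var)

mean, sd = predict_target(y1_new=-0.45, se1_new=0.05)
print(f"predicted target effect: {mean:.3f} (SD {sd:.3f})")
```

A stronger correlation rho shrinks the conditional variance tau2^2 * (1 - rho^2), which is why adding well-correlated surrogates can increase the precision of predictions.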
In our recent paper on Bayesian multivariate meta-analysis of mixed outcomes we model the between-study covariance in a formulation of a product of normal univariate distributions (Stat Med 2013; 32:3926-3943). This formulation is particularly convenient for including multiple surrogate outcomes. In this model, however, two outcomes (which can be surrogate endpoints to the target outcome) are conditionally independent, given the target outcome. Building on this model, we extend it to the case where this assumption is relaxed to allow one of the surrogate endpoints to act as a surrogate to the other. The modelling techniques are investigated using an example from multiple sclerosis (where disability worsening is the target outcome, while relapse rate and MRI lesions have been shown to be good surrogates for disability progression).

C48.4
Sequential meta-analyses of safety data
D Saure1, K Jensen1, M Kieser1
1 Institute of Medical Biometry and Informatics Heidelberg, Heidelberg, Germany

While meta-analyses investigating the efficacy of therapies are mainly conducted retrospectively, there is a need for a prospective sequential approach for the assessment of safety data over several studies within a drug development program. Currently available methods for sequential meta-analyses, for example the procedure based on the combination of p-values (Jennison and Turnbull, J Biopharm Stat 2005) or the repeated cumulative meta-analysis approach (Whitehead, Stat Med 1997), are tailored to superiority trials. However, in the analysis of safety data including serious adverse events with low rates one is usually interested in demonstrating non-inferiority. We demonstrate the need for sequential methods in this situation and examine the applicability of the above-mentioned approaches for different scenarios which are typical for drug development.
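One of the p-value combination ideas mentioned above can be sketched as an inverse-normal combination of stage-wise non-inferiority tests on adverse-event rates. The study data, margin, and Wald statistic below are illustrative assumptions, not the scenarios examined in the talk.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

DELTA = 0.02  # illustrative non-inferiority margin for the risk difference

def stage_z(events_exp, n_exp, events_ctl, n_ctl):
    """Wald z-statistic against H0: p_exp - p_ctl >= DELTA (excess harm);
    large z favours non-inferiority. Illustrative only (sparse-data issues
    with Wald statistics are part of what the abstract investigates)."""
    pe, pc = events_exp / n_exp, events_ctl / n_ctl
    se = math.sqrt(pe * (1 - pe) / n_exp + pc * (1 - pc) / n_ctl)
    return (DELTA - (pe - pc)) / se

# Hypothetical studies accruing sequentially: (events_exp, n_exp, events_ctl, n_ctl)
studies = [(4, 200, 3, 200), (6, 300, 5, 300), (5, 250, 6, 250)]

# Inverse-normal combination; weights must be prespecified for the
# combination test's level to be controlled.
zs = [stage_z(*s) for s in studies]
ws = [math.sqrt(ne + nc) for _, ne, _, nc in studies]
z_comb = sum(w * z for w, z in zip(ws, zs)) / math.sqrt(sum(w * w for w in ws))
p_comb = 1 - norm_cdf(z_comb)
print(f"combined z = {z_comb:.2f}, one-sided p = {p_comb:.4f}")
```

The validity of this combination rests on each stage-wise p-value being uniform or stochastically larger under H0 (the "p-clud" property), which is exactly what the abstract questions for sparse binary data.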
Our focus lies on fixed-effect meta-analyses with binary outcomes, where we incorporate different effect measures and pooling methods. We calculate the exact type I error rate and the exact power for various situations occurring in a sequential approach for safety data. Various scenarios for event rates and non-inferiority margins are considered. The methods proposed by Jennison and Turnbull assume that the so-called "p-clud" property of the p-values (Brannath et al., J Am Stat Assoc 2002) holds true. We investigate whether this assumption is fulfilled in the current situation of sparse binary data and non-inferiority trials and, furthermore, in case of non-fulfilment of the "p-clud" condition, we examine the performance of those p-values within those methods.

C48.5
On the three-arm non-inferiority design including a placebo
T Tango1,2, E Hida1
1 Center for Medical Statistics, Tokyo, Japan, 2 Teikyo University Graduate School of Public Health, Tokyo, Japan

The design and analysis of three-arm non-inferiority trials seem to have been focused on the fraction approach (e.g., Koch and Tangen, 1999; Pigeot et al., 2003; Koch and Röhmel, 2004), which aims to show that the experimental treatment preserves a prespecified fraction f of the effect of the active control treatment relative to placebo. The fraction approach has been modified and/or extended to several situations. However, in many "common" two-arm non-inferiority trials conducted so far around the world, the non-inferiority margin Δ has been defined as a prespecified difference between treatments. So, we proposed a method with Δ for inference on the difference in means (Hida and Tango, 2011) and in proportions (Hida and Tango, 2013), in which we have to show the following inequality: θP < θR - Δ < θE, where θP, θR, θE denote the expected value of the treatment outcome under the placebo, reference and experimental treatment, respectively.
The first inequality implies the requirement for assay sensitivity: the superiority of the reference over the placebo should exceed Δ. Röhmel and Pigeot (2011) and Stucke and Kieser (2012) expressed concern about this substantial-superiority condition. Kwong et al. (2012), on the other hand, stand against the fraction approach, but
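The chained inequality θP < θR - Δ < θE amounts to two one-sided comparisons: assay sensitivity (reference beats placebo by more than Δ) and non-inferiority (experimental is no worse than reference minus Δ). A minimal sketch under normal approximations follows; the summary data, margin, and one-sided 5% critical value are illustrative assumptions, not the authors' procedure.

```python
import math

DELTA = 2.0  # prespecified margin on the mean difference (illustrative)

# Hypothetical summary data per arm: (mean, SD, n)
arms = {
    "placebo":      (10.0, 4.0, 60),
    "reference":    (14.5, 4.0, 120),
    "experimental": (13.8, 4.0, 120),
}

def z_diff(a, b, shift):
    """z-statistic for H1: mean(a) - mean(b) > shift."""
    (ma, sa, na), (mb, sb, nb) = arms[a], arms[b]
    se = math.sqrt(sa**2 / na + sb**2 / nb)
    return (ma - mb - shift) / se

# (1) assay sensitivity: reference exceeds placebo by more than DELTA
z_sens = z_diff("reference", "placebo", DELTA)
# (2) non-inferiority: experimental is no worse than reference minus DELTA
z_ni = z_diff("experimental", "reference", -DELTA)

claim = z_sens > 1.645 and z_ni > 1.645
print(f"z_sens={z_sens:.2f}, z_ni={z_ni:.2f}, non-inferiority claimed: {claim}")
```

Both one-sided tests must reject for the non-inferiority claim, mirroring the two inequalities in the abstract.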
