
ISCB 2014 Abstract Book

ISCB 2014 Vienna, Austria • Abstracts - Poster Presentations
Monday, 25th August 2014 • 15:30-16:00

tion from such studies becomes crucial for current and future scientific endeavors. Bayesian statistics provides an intuitive framework firmly grounded on probability theory to design and analyze complex data. Bayesian methods make it possible to incorporate prior information in the analysis and may be applied to problems whose structure is too complex for conventional methods to handle. With the aid of modern computing, the approach provides a flexible formulation to address applied problems realistically, and to incorporate the research goals into the analysis. This paper/presentation will discuss the latest Bayesian developments in adaptive dose-finding studies. In particular, it will highlight Bayesian techniques used in the early phases of clinical drug development. In early-phase clinical trials, the data are most often scant (small sample sizes) and very little is known about the actual dose-toxicity relationship. This makes for a perfect setting for the use of Bayesian techniques, and this paper will demonstrate some of the many practical benefits of adopting the Bayesian paradigm.

P1.2 Design and analysis of clinical trials

P1.2.5 Introducing continuity correction for the Laster-Johnson-Kotler non-inferiority asymptotic test
F Almendra-Arao1, D Sotres-Ramos2, Y Castillo-Tzec2
1 UPIITA del Instituto Politécnico Nacional, México, D.F., Mexico, 2 Programa de Estadística, Colegio de Postgraduados, Texcoco, Mexico

Non-inferiority asymptotic statistical tests are frequently used in clinical trials. The "at least as good" criterion was introduced by Laster, Johnson and Kotler for dichotomous data.
In this approach (LJK), the margin of non-inferiority is taken as a percentage of the control response rather than as a fixed difference. The procedure is more efficient than the fixed-margin approach, yielding smaller sample sizes, and it offers several advantages in the design, statistical efficiency and interpretability of non-inferiority trials. However, the LJK procedure has the disadvantage that its size is much greater than the required nominal significance level (α). This latter issue is addressed in this work by using two continuity correction factors. The results show that the size of the modified LJK test behaves much better than that of the original LJK test.

P1.2.7 An online calculator for futility interim monitoring rules in randomised clinical trials
A Alvarez-Iglesias1, P Gunning1, J Newell1
1 HRB Clinical Research Facility, National University Ireland, Galway, Ireland

Multi-stage or group sequential methods are common in many branches of scientific research. The aim of these methods is to pre-specify, prior to the start of data collection, the timing and manner in which a sequence of interim analyses will be conducted within the study. Based on the results of these analyses, a decision about early stopping can be made without any compromise in power. For instance, in a randomised clinical trial one might want to investigate as soon as possible whether patients under a new treatment are being exposed to a harmful pharmaceutical drug, in which case the study should be stopped for futility. Conversely, if the new treatment is leading to positive results, early stopping means that patients will benefit earlier from the new treatment.
In this work we review a general approach proposed by Freidlin et al. (2010) for inefficacy monitoring rules when modelling time-to-event data. This approach has the advantage that the interim looks can be defined post design, meaning that there is no need for sample size modifications, even after some of the data have already been collected. We show some simulations for different survival distributions and, given the usefulness of the method, we present an online calculator that can easily provide the stopping rules for any superiority design in which two populations are compared using the log-rank test statistic.

P1.2.18 Best-after-breast design: challenges of nutrition intervention studies in infants
E Balder1, S Swinkels1
1 Danone Nutricia Research, Utrecht, The Netherlands

Breastfeeding is the preferred and recommended method of infant feeding. If a mother cannot or chooses not to (fully) breastfeed, a commercially prepared infant formula is the recommended alternative. This presentation will address the challenges of nutrition intervention studies in infants, due to the practical, ethical and regulatory issues that are inherent to studying feeding regimens in infants. A Best-after-breast design was implemented at Danone Nutricia Research, which allows subjects to enter the study without interfering with the mother's or parents' choice of early nutrition for the infant. After inclusion (≤28 days of age), regardless of the feeding regimen, subject data are recorded. When the mother/parents autonomously decide to start formula feeding, the subject is randomized in a double-blind parallel design to one of the study products. After starting the study formula, the mother is free to continue breastfeeding in combination with formula for as long as she wants, and/or she can switch to full formula feeding at her own pace at any time.
The different feeding regimens in the Best-after-breast design may introduce time-varying confounding. To account for this, the state and duration of full breastfeeding, mixed feeding (a combination of formula with breast milk and/or weaning foods) and full formula feeding are determined for each subject at each measurement. However, the choice to breastfeed or formula feed may introduce selection bias that cannot be eliminated in the statistical analysis. These and other challenges that are encountered in infant nutrition intervention studies, and possible solutions, will be presented.

P1.2.28 When does an interim analysis not jeopardise the type I error rate?
P Broberg1
1 Oncology & Cancer Epidemiology, LU/Skåne Hospital, Lund, Sweden

Interest in adaptive clinical trial designs has surged during the last few years. One particular kind, called sample-size adjustable designs (sometimes sample size re-estimation designs), has come to use in a number of trials lately. Following a pre-planned interim analysis, this design offers the options of
• closing the trial due to futility
• continuing as planned
• continuing with an increased sample size
Recent research has identified situations in which raising the sample size does not lead to inflation of the type I error rate. Mehta and Pocock (Stat Med 2011) identify a set of promising outcomes for which it is safe to raise the sample size in two-stage trials. Denote the observed test statistic at the interim by z, the originally planned sample size by N0, the number of observations at the interim by n, and the raise considered by r. Call the final test statistic Z2*. Then the reference finds that the modified rejection threshold c(z, N0+r−n) ensures protection of the type I error: P0(Z2* ≥ c(z, N0+r−n)) = α. In Broberg (BMC
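A threshold of this kind can be sketched numerically. The following is a minimal, hedged illustration assuming the conditional-error construction (equating the conditional type I error of the adapted design to that of the original design, in the spirit of the work cited above); the function names and the parameterisation by the new second-stage size are illustrative assumptions, not the notation of the cited papers.

```python
# Hedged sketch: adjusted rejection threshold after a sample-size raise,
# computed by preserving the conditional type I error of the original
# two-stage design. Illustrative only; not the exact published algorithm.
from math import sqrt
from statistics import NormalDist

_N01 = NormalDist()

def conditional_error(z, n, N, c):
    """P0(final Z >= c | interim Z1 = z) for total size N, interim size n."""
    return 1 - _N01.cdf((c * sqrt(N) - z * sqrt(n)) / sqrt(N - n))

def adjusted_threshold(z, n, N0, n2_new, alpha=0.025):
    """Threshold c such that a design with new second-stage size n2_new
    spends exactly the conditional error of the original design (total N0)."""
    z_alpha = _N01.inv_cdf(1 - alpha)          # fixed-design threshold
    slope = (z_alpha * sqrt(N0) - z * sqrt(n)) / sqrt(N0 - n)
    return (z * sqrt(n) + sqrt(n2_new) * slope) / sqrt(n + n2_new)

# Sanity check: with no raise (n2_new = N0 - n) the threshold is unchanged,
# while a raise moves the threshold so the conditional error is preserved.
z, n, N0 = 1.2, 50, 100
c_same = adjusted_threshold(z, n, N0, N0 - n)   # equals z_alpha
c_up = adjusted_threshold(z, n, N0, 80)         # raised second stage
```

By construction, conditional_error(z, n, n + n2_new, c_up) equals the conditional error of the original design at z, which is the sense in which the modified threshold protects the overall type I error.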
