

ISCB 2014 Vienna, Austria • Abstracts - Poster Presentations
Monday, 25th August 2014 • 15:30-16:00

Bayesian optimal adaptive procedures are considered which recommend that each new cohort of patients receives the combination of doses that minimises the variance of the posterior modal estimate of ED100π. Prior opinion about the dose-response relationship is represented as pseudo-data. It is assumed that the dose-response relationship follows an Emax model. However, fitting Emax models can be challenging due to problems of non-convergence. With this in mind, when the Emax model fails to converge at an interim analysis we investigate Bayesian procedures which use a cubic approximation to the Emax model for the purposes of making dose recommendations.
An algorithmic procedure is also considered for dose-finding, which assumes only that the dose-response relationship is monotonic when making interim dose recommendations. Simulation is used to compare the algorithmic procedure and the Bayesian optimal design for estimating ED100π, using a non-adaptive incomplete block design as a benchmark for comparison.

P1.2.166
Statistical methods for centralised risk-based monitoring in clinical trials
JV Torres-Martin1, M Rodriguez1, K Haas2, M Horneck2
1 Syntax for Science S.L, Palma de Mallorca, Spain, 2 maxclinical GmbH, Freiburg, Germany

Due to the global crisis, in combination with an increase in drug development costs, pharmaceutical companies are under enormous pressure to make their processes more cost-effective. In August 2013, the FDA released a guideline outlining its position on the current practice of clinical monitoring. This guideline opened a new perspective, acknowledging that traditional costly practices based on on-site monitoring might not be the most efficient.
The FDA encourages centralised risk-based monitoring, which allocates resources across centres based on their level of risk. Centralised risk-based monitoring stands out as a promising area: its use can not only reduce costs but also improve research quality and patient safety. A number of statistical methods used in other areas that can be applied to risk-based monitoring are reviewed. We explore the application of these methods to real data in the following situations:
(1) error and fraud detection (multivariate outlier detection in combination with missing-data imputation, overdispersion and underdispersion);
(2) blinded monitoring of nuisance parameters that can affect the principal objectives of the clinical trial (variability of the primary variable for continuous endpoints, or event rates for survival and count endpoints); and
(3) patient recruitment.
Statistical modelling can also be used to assess trends in undesired operational trial events, and to quantify the risk of these events happening in the near future.

P1.2.167
Re-sampling methods for internal model validation in diagnostic and prognostic studies: review of methods and current practice
JV Torres-Martin1, H Chadha-Boreham2
1 Syntax for Science S.L, Palma de Mallorca, Spain, 2 Actelion Pharmaceuticals Ltd., Basel, Switzerland

Multivariable logistic regression models are extensively used in diagnostic and prognostic studies. Examples can be found in cancer (e.g., prostate cancer test), cardiovascular diseases (e.g., Framingham Risk Score) or pulmonary arterial hypertension (e.g., DETECT PAH Risk Score). Variable selection is an important part of model building. Good models are those which show good performance characteristics (calibration and discrimination) not only with the data used to fit the model but also with new external data. As external data are not readily available in all disease areas, the data used to fit the model are commonly used to validate it (i.e.
internal model validation).
The objective of this work is to review methods for internal model validation, focusing on variable selection and discrimination by means of resampling methods. Methods for examining model performance and detecting overfitting will be reviewed. Two systematic reviews are conducted: (1) in "Statistics in Medicine", to gather relevant research on re-sampling methods for internal model validation; and (2) in the "New England Journal of Medicine", to assess the extent to which these methods are used in medical research. The application of these methods in SAS and R is described. We finish our work by illustrating the methods applied to real data, and describing the challenges we faced in real-life situations.

P1.2.168
Adaptive sample-size increase with count endpoints: the path from statistical simulation to the development of an explicit formula
A Rodríguez1, A Mir2, M Rainisio3, JV Torres-Martín1
1 Syntax for Science S.L., Basel, Switzerland, 2 University of the Balearic Islands, Palma de Mallorca, Spain, 3 Abanovus, Sanremo, Italy

Adaptive designs have been a hot topic of discussion in recent years among biostatisticians from pharmaceutical companies, regulatory agencies and academia. Mehta and Pocock proposed an adaptive design in which the trial starts with a small up-front sample-size commitment compared to the traditional group sequential method. Additional sample-size resources are committed to the trial only if promising results based on conditional power are obtained at the interim analysis.
The authors proved that this design requires no multiplicity adjustment and preserves the type I error provided that (1) the information obtained at the interim analysis is used only to decide whether the sample size is increased, and (2) the sample size is increased only when the interim conditional power falls within a promising zone.
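As an illustration of the promising-zone rule just described, the sketch below computes conditional power for a normal endpoint under the observed interim trend and applies the two-zone decision. The thresholds (0.36 and 0.80) and all function names are illustrative assumptions, not taken from the abstract.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z1, t, z_alpha=1.96):
    """Probability of crossing z_alpha at the final analysis, given the
    interim z-statistic z1 at information fraction t and assuming the
    observed interim trend continues."""
    return norm_cdf((z1 / math.sqrt(t) - z_alpha) / math.sqrt(1.0 - t))

def promising_zone_decision(z1, t, n_planned, n_max, cp_low=0.36, cp_high=0.80):
    """Increase the sample size (up to n_max) only when the interim
    conditional power falls in the 'promising' zone [cp_low, cp_high);
    otherwise keep the originally planned sample size."""
    cp = conditional_power(z1, t)
    return n_max if cp_low <= cp < cp_high else n_planned

# Example: halfway through the trial (t = 0.5) an interim z1 = 1.5 gives
# conditional power of about 0.59 -- promising, so the sample size is
# raised from 200 to 400; a null interim result (z1 = 0) leaves it at 200.
print(conditional_power(1.5, 0.5))                  # ~0.59
print(promising_zone_decision(1.5, 0.5, 200, 400))  # 400
```

Because the interim data are used only through this zone decision, the rule mirrors the two conditions for type I error preservation stated above.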
This design was developed for normal, binomial and survival endpoints, but not for count endpoints. We describe how we used this design to plan a clinical trial with a count endpoint by means of statistical simulations. The required increase in sample size based on the conditional power at interim is obtained, and it is shown that the type I error is preserved. Statistical simulations are not considered by the regulatory agencies a valid method of proving a statistical result. We describe our achievements, and the challenges we face, in proving the previous results by means of an explicit mathematical formulation.

P1.2.172
Comparison of different allocation procedures in clinical trials in small population groups with respect to accidental and selection bias
D Schindler1, D Uschner1
1 RWTH Aachen University, Institute for Medical Statistics, Aachen, Germany

Each medical treatment available on the market has been tested by extensive clinical research. For statistically proving the effectiveness of a medical intervention, the randomized controlled clinical trial is considered the "gold standard".
Usually, patients arrive sequentially at the clinical trial and have to be allocated to a treatment arm immediately. The allocation is realized using a randomization procedure. The accrual character of clinical trials is the source of different biases that may arise even though randomization and blinding have been employed effectively. This results, in particular, in a biased estimator of the treatment effect. Hence the effectiveness of a placebo might be alleged or, conversely, an effective treatment might be found ineffective and be banned from the market forever. The use of a suit-
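The selection bias that allocation procedures try to limit can be made concrete with a small simulation, a minimal sketch rather than the authors' code: it generates permuted-block sequences and measures how often the classical Blackwell-Hodges "convergence strategy" (guess the arm that is behind) predicts the next assignment. All names and block sizes here are illustrative.

```python
import random

def permuted_block_sequence(n_patients, block_size=4, rng=None):
    """Two-arm allocation sequence from permuted blocks: each block holds
    equal numbers of 'A' and 'B' in random order."""
    rng = rng or random.Random()
    seq = []
    half = block_size // 2
    while len(seq) < n_patients:
        block = ["A"] * half + ["B"] * half
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n_patients]

def convergence_guess_rate(seq, rng=None):
    """Fraction of allocations correctly predicted by the convergence
    strategy: guess the arm with fewer patients so far, flipping a coin
    on ties. Higher values mean more selection-bias potential for an
    unblinded investigator."""
    rng = rng or random.Random(0)
    n_a = n_b = correct = 0
    for arm in seq:
        if n_a < n_b:
            guess = "A"
        elif n_b < n_a:
            guess = "B"
        else:
            guess = rng.choice(["A", "B"])
        correct += guess == arm
        if arm == "A":
            n_a += 1
        else:
            n_b += 1
    return correct / len(seq)

# Smaller blocks are easier to predict: with blocks of 2 the second
# allocation in every block is forced, so the strategy is right ~75% of
# the time; larger blocks push the rate down towards (but above) 50%.
gen = random.Random(42)
rate2 = sum(convergence_guess_rate(permuted_block_sequence(80, 2, gen))
            for _ in range(200)) / 200
rate8 = sum(convergence_guess_rate(permuted_block_sequence(80, 8, gen))
            for _ in range(200)) / 200
print(rate2, rate8)  # rate2 ~ 0.75, rate8 lower but still above 0.5
```

Comparing such guess rates across procedures (and across block sizes) is one simple way to quantify the selection-bias component the abstract refers to.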
