

ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations • Tuesday, 26th August 2014 • 9:00-10:30

sample size positively influenced estimation and model performance. However, the composition of the sample did not influence the results, given the EPV and total sample size: with a few large clusters, estimation and prediction performance is as good as with many small clusters. Stepwise variable selection led to a substantial bias in one of the regression coefficients, but this did not worsen predictive performance. Our findings demonstrate the limited importance of the amount of clustering. In line with several studies dealing with unclustered data, we recommend at least ten EPV for predefined models, although up to fifty EPV may be needed when variable selection is performed.

C19.5
Review and evaluation of penalised likelihood methods for risk prediction in data with few events
M Pavlou1, G Ambler1, S Seaman2, R Omar1
1University College London, London, United Kingdom, 2MRC Biostatistics Unit, Cambridge, United Kingdom

Prognostic regression models typically use multiple predictors to predict an outcome. When the number of events is small compared to the number of regression coefficients, the danger of model overfitting is particularly pronounced. Traditional guidance suggested the 'rule of 10', meaning that at least 10 events per estimated regression coefficient (events per variable, EPV) are necessary for the development of reliable risk models. An overfitted model tends to demonstrate poor calibration and predictive accuracy when applied to new data. In this work we review penalised likelihood methods for binary outcomes. We consider Ridge and Lasso, both of which shrink coefficient estimates (Lasso can provide parsimonious models by also omitting some of the predictors). Additionally, we consider extensions of these (e.g.
Elastic Net and Adaptive Lasso), their Bayesian analogues, and Bayesian approaches based on 'spike and slab' priors. We evaluate the predictive performance of the methods in comparison to standard MLE in simulated data derived from real datasets. Several features of the data are varied, namely the EPV, the strength of the predictors, the number of 'noise' predictors, and the correlation between predictors. Simulation and real data analyses suggest that MLE tends to produce overfitted models with poor predictive performance in scenarios with few events. Penalised methods offer significant improvement. The choice of method depends on the features of the particular data. Elastic Net performed well overall, while the Bayesian approaches were also found to be useful for prediction.

C20 Individual participant data meta-analysis

C20.1
How to appraise Individual Participant Data (IPD) meta-analysis in diagnostic and prognostic risk prediction research
KGM Moons1, T Debray1, M Rovers2, RD Riley3, JB Reitsma1
1UMC Utrecht, Utrecht, The Netherlands, 2Radboud University Medical Center, Nijmegen, The Netherlands, 3University of Birmingham, Birmingham, United Kingdom

Background: The development and (external) validation of diagnostic and prognostic prediction models is an important aspect of contemporary epidemiological research. Unfortunately, many prediction models perform more poorly than anticipated when tested or applied in other individuals, and interpretation of their generalizability is not straightforward. During the past decades, evidence synthesis and meta-analysis of individual participant data (IPD) have become increasingly popular for improving the development, validation and eventual performance of novel prediction models. IPD meta-analysis also leads to a better understanding of the generalizability of prediction models across different populations.
There is, however, little guidance on how to conduct an IPD meta-analysis for developing and validating diagnostic or prognostic prediction models.
Objective and Methods: We provide guidance for both authors and reviewers in appraising IPD meta-analyses that aim to develop and/or validate a prediction model using multiple IPD datasets. Furthermore, we demonstrate why and how IPD meta-analysis of risk prediction research differs from IPD meta-analysis of intervention research. Finally, we provide methodological recommendations for conducting an IPD meta-analysis for risk prediction research, and illustrate these with a clinical example.
Conclusions: Whereas meta-analytical strategies for intervention research have been well described during the past few decades, evidence synthesis in risk prediction research is relatively new. Appropriate methods for conducting an IPD meta-analysis in risk prediction research have become available during the past few years, and they clearly differ from their counterparts in intervention research.

C20.2
Being PRO ACTive - what can a clinical trials database reveal about ALS?
N Zach1, R Kueffner2, A Shui3, A Sherman4, J Walker4, E Sinani4, I Katsovskiy4, D Schoenfeld3, G Stolovitzky5, R Norel5, N Atassi4, J Berry4, M Cudkowicz4, M Leitner6
1Prize4Life, Herzliya, Israel, 2Helmholtz Zentrum, Munich, Germany, 3MGH Biostatistics Center, Massachusetts General Hospital, Boston, United States, 4Neurological Clinical Research Institute, MGH, Charlestown, United States, 5The DREAM Project, IBM, Yorktown Heights, United States, 6Prize4Life, Boston, United States

Understanding a given patient population is a necessary step in advancing clinical research and clinical care and in conducting successful and cost-effective clinical trials. To overcome the challenge of gathering a large enough cohort of patients in a rare disease such as ALS, we developed the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) platform.
The PRO-ACT database consists of 8,600 ALS patients who participated in 17 clinical trials. The dataset includes demographic, family history, vital signs, clinical assessment, laboratory, treatment arm, and survival information. The database was launched open access in December 2012, and since then over 225 researchers from 25 countries have requested the data.
Several assessments were made to begin understanding the value of PRO-ACT in addressing pivotal questions in ALS clinical research. One such initiative was a crowdsourcing effort, the ALS Prediction Prize challenge, to develop improved methods to accurately predict disease progression at the individual patient level. The challenge attracted more than 1,000 registrants and led to the creation of multiple novel disease progression algorithms.
Other highly important insights from the database include newly identified predictive features, definitive support for previously proposed predictive features based on smaller samples, and a newly identified stratification of patients based on their disease progression profiles.
These results demonstrate the value of large datasets for developing a better understanding of ALS natural history, prognostic factors and disease variables. Critical open questions include patient stratification, associations with disease co-morbidities and concomitant medications, identification of biomarkers, and potentially new ways to enhance clinical practice and clinical trials.
