

ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations
Wednesday, 27th August 2014 • 16:00-17:30

The previous interpretation of independent censoring would be valid if the parameter of interest did not change after replacing the observational intensity of censoring with 0. We will discuss identifiability of parameters under hypothetical censoring regimes, especially those corresponding to stabilized and non-stabilized censoring weights. Using local independence graphs and delta-separation, we derive an analogue of the back-door criterion that applies to censoring in survival analysis.

C44 Validation of prediction models

C44.1 The need for a third dimension in the external validation of clinical prediction rules
W Vach1
1 Clinical Epidemiology, IMBI, University of Freiburg, Freiburg, Germany

When clinical prediction rules have to be validated in an external data set, the focus is often on two dimensions: calibration and discrimination. However, these two dimensions do not cover all the information about the discrepancy between the true event probabilities and the probabilities suggested by the clinical prediction rule. We present some (theoretical) examples with varying degrees of agreement between true and suggested event probabilities that nevertheless yield identical calibration slope, AUC and Brier score.
To overcome this problem, we can consider directly estimating some measure of the agreement between true and suggested event probabilities, such as the Euclidean distance. However, such measures may be hard to interpret. As an alternative, we suggest estimating the inverse calibration slope, i.e. the slope of a regression of the suggested on the true event probabilities. The joint interpretation of the inverse calibration slope and the ordinary calibration slope is simple: if both are 1, we have perfect agreement.
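In the theoretical setting described above, where the true event probabilities are known, the two slopes are simple least-squares slopes in opposite regression directions. A minimal sketch with hypothetical, deliberately mis-calibrated suggested probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: true event probabilities, and the probabilities
# suggested by a (mis-calibrated) prediction rule.
p_true = rng.uniform(0.05, 0.95, size=500)
p_sugg = np.clip(0.1 + 0.7 * p_true + rng.normal(0, 0.05, size=500), 0.01, 0.99)

def slope(x, y):
    """Least-squares slope of a regression of y on x."""
    x, y = np.asarray(x), np.asarray(y)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Ordinary calibration slope: true probabilities regressed on suggested ones.
cal_slope = slope(p_sugg, p_true)
# Inverse calibration slope: suggested probabilities regressed on true ones.
inv_cal_slope = slope(p_true, p_sugg)

print(cal_slope, inv_cal_slope)  # both equal 1 only under perfect agreement
```

In practice the true event probabilities are unknown, which is why the abstract proposes a flexible estimate of them plus a bootstrap bias correction; the sketch above only illustrates the definition of the two slopes.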
We demonstrate that the inverse calibration slope can be estimated by a bootstrap bias correction of the naive estimate, based on a flexible estimate of the true event probabilities.

C44.2 Multiple validation of prediction models: a framework for summarizing and interpreting results
D Nieboer1, Y Vergouwe1, TPA Debray2, H Koffijberg2, KG Moons2, EW Steyerberg1
1 Erasmus MC, Rotterdam, The Netherlands, 2 UMC Utrecht, Utrecht, The Netherlands

Aim: A common rationale is that multiple successful validations across different settings increase the likelihood that a model is valid for new settings. We aimed to develop a framework to critically assess the evidence from validation studies.
Methods: We developed a model predicting 6-month mortality in patients with traumatic brain injury from a single observational study. We validated the model on 14 other cohorts from the IMPACT database (3 observational studies and 11 RCTs). Overall calibration was assessed with the calibration-in-the-large, and average predictor strength with the calibration slope. We constructed forest plots to summarize validation results. We quantified heterogeneity using the I2 statistic and calculated prediction intervals (PIs). Meta-regression was used to identify factors explaining the observed heterogeneity.
Results: The pooled calibration slope indicated that predictor effects were less strong at validation (pooled estimate 0.72, PI 0.37 to 1.06), with substantial heterogeneity (I2 = 95%). Meta-regression showed that type of cohort (observational study/RCT) explained most of this heterogeneity. The pooled estimate of the calibration-in-the-large indicated that predicted probabilities were on average too high (-0.62, PI -1.50 to 0.26). The observed heterogeneity was again substantial (I2 = 94%), but could not be explained by meta-regression.
Conclusion: We propose the use of meta-analytic methods to summarize the accumulating evidence from validation studies of prediction models.
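The pooled estimates, I2 values and prediction intervals of the kind reported above can be obtained with standard random-effects machinery. A minimal DerSimonian-Laird sketch, using hypothetical per-cohort calibration slopes and standard errors (not the IMPACT data):

```python
import numpy as np
from statistics import NormalDist

def dersimonian_laird(theta, se):
    """Random-effects pooling of per-study estimates (DerSimonian-Laird)."""
    theta, se = np.asarray(theta, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # fixed-effect weights
    theta_fe = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fe) ** 2)          # Cochran's Q
    k = len(theta)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100           # I2 heterogeneity (%)
    # Approximate 95% prediction interval for the effect in a new study
    # (normal quantile used for brevity; a t quantile on k-2 df is more common).
    half = NormalDist().inv_cdf(0.975) * np.sqrt(tau2 + se_pooled**2)
    return pooled, i2, (pooled - half, pooled + half)

# Hypothetical calibration slopes and standard errors from 14 validation cohorts
slopes = [0.55, 0.62, 0.70, 0.74, 0.68, 0.80, 0.90,
          0.60, 0.75, 0.85, 0.65, 0.72, 0.78, 0.95]
ses = [0.05] * 14
pooled, i2, pi = dersimonian_laird(slopes, ses)
print(round(pooled, 2), round(i2, 1), [round(v, 2) for v in pi])
```

The prediction interval is wider than the confidence interval of the pooled estimate because it also carries the between-study variance tau2, which is exactly what makes it informative about a new setting.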
If limited heterogeneity is observed, the model is likely generalizable to the studied settings. If heterogeneity is observed, meta-regression may identify its sources to guide the interpretation of the validity and applicability of the prediction model.

C44.3 Summarising the performance of prognostic models developed and validated using multiple studies
KIE Snell1, TPA Debray2, J Ensor3, MP Look4, KG Moons2, RD Riley3
1 MRC Midland Hub for Trials Methodology Research, Birmingham, United Kingdom, 2 University Medical Center Utrecht, Utrecht, The Netherlands, 3 University of Birmingham, Birmingham, United Kingdom, 4 Erasmus MC Cancer Institute, Rotterdam, The Netherlands

Internal-external cross-validation (IECV) is an approach for developing and validating a prognostic model when data from multiple studies are available. The model is developed multiple times, each time excluding a different study for external validation of its performance (discrimination and calibration). This produces multiple values for every validation statistic of interest (e.g. C-statistic, calibration slope).
In this presentation we extend IECV by using random-effects meta-analysis to combine and summarise the validation statistics across the omitted studies. We show it provides two crucial summaries: (i) the average model performance in the different populations, and (ii) the heterogeneity of model performance across populations. A good prognostic model will have excellent average performance with little or no heterogeneity. We explain how the meta-analysis approach also allows model implementation strategies to be compared, for example regarding the choice of intercept or baseline hazard.
The presentation concludes with some novel extensions. First, we use the meta-analysis results to produce 95% prediction intervals for the validation performance in a new population.
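The IECV loop itself is straightforward: each study is left out in turn, the model is refitted on the rest, and a validation statistic is computed on the omitted study. A minimal sketch with simulated studies, using a least-squares linear score as a stand-in for the actual model-fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def c_statistic(y, score):
    """Concordance (C-statistic) by pairwise comparison of event/non-event scores."""
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

# Hypothetical data: 5 studies, one predictor, binary outcome;
# the predictor effect varies slightly by study (heterogeneity).
studies = []
for s in range(5):
    x = rng.normal(size=200)
    p = 1 / (1 + np.exp(-(0.2 + (0.8 + 0.1 * s) * x)))
    studies.append((x, rng.binomial(1, p)))

# Internal-external cross-validation: leave each study out in turn.
cstats = []
for left_out in range(5):
    x_dev = np.concatenate([studies[i][0] for i in range(5) if i != left_out])
    y_dev = np.concatenate([studies[i][1] for i in range(5) if i != left_out])
    beta = np.polyfit(x_dev, y_dev, 1)        # linear score: slope, intercept
    x_val, y_val = studies[left_out]
    cstats.append(c_statistic(y_val, np.polyval(beta, x_val)))

print([round(c, 3) for c in cstats])
```

The resulting per-study statistics are then fed into a random-effects meta-analysis, as described above, to obtain the average performance, its heterogeneity, and a prediction interval for a new population.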
Narrow intervals are desirable, as they indicate that the model is likely to perform consistently in new populations. Second, we propose multivariate meta-analysis to summarise correlated validation statistics (such as the C-statistic and calibration slope), to determine the probability that both discrimination and calibration performance will be acceptable in practice. Real examples in breast cancer and deep vein thrombosis are used throughout.

C44.4 Incorporating retrospective information to reduce the sample size of prospective diagnostic-biomarker-validation designs
L García Barrado1, E Coart2, T Burzykowski1,2
1 I-Biostat, Hasselt University, Diepenbeek, Belgium, 2 International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium

Problem setting: The sample size of a prospective clinical study aimed at validation of a diagnostic biomarker may be prohibitively large. A Bayesian framework that incorporates available retrospective data on the accuracy of the biomarker might allow reducing the sample size, rendering the study feasible.
Methods: A Bayesian design is presented for planning and analyzing a prospective clinical validation study that incorporates retrospective data.
