ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations
Wednesday, 27th August 2014 • 9:00-10:48

C32.3 Pairwise residuals and diagnostic tests for misspecified dependence structures in models for binary longitudinal data
N Breinegaard¹, S Rabe-Hesketh², A Skrondal³
¹University Hospital of Copenhagen, Copenhagen, Denmark; ²University of California, Berkeley, United States; ³Norwegian Institute of Public Health, Oslo, Norway

Maximum likelihood estimation of models for binary longitudinal data is inconsistent if the dependence structure is misspecified. Unfortunately, there are currently no diagnostics specifically designed for detecting misspecified dependence structures in longitudinal models. Traditional goodness-of-fit tests for categorical data that compare expected and observed frequencies often suffer from two fundamental problems: (1) sparseness invalidating the assumed null distributions and (2) low power since the tests are non-targeted. To address these problems, tests based on marginalized tables have been proposed for log-linear, latent class, and item response models.

We introduce these ideas to a longitudinal setting and extend the methods to handle covariates. For exploratory diagnostics, we recommend inspecting pairwise residuals based on second-order marginal tables. Diagnostic tests based on such residuals can be targeted to specific types of model violation. We consider the important case where a random-intercept model is misspecified because of serial dependence that decays as the time-lag between pairs of observations increases. For this situation, adjacent-pair concordance statistics are shown to have substantially greater power than tests based on all pairwise residuals. The methods proposed in this paper are straightforward to implement.
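To illustrate the idea of second-order marginal residuals, the following sketch (not the authors' implementation; the random-intercept logit data-generating model, all parameter values, and the Monte Carlo approximation of the model-implied frequencies are illustrative assumptions) compares observed pairwise (1,1) frequencies with model-implied ones for every pair of time points:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel(n, t, sigma_u, beta0):
    """Simulate binary panel data from a random-intercept logit model."""
    u = rng.normal(0.0, sigma_u, size=(n, 1))      # subject-level intercepts
    p = 1.0 / (1.0 + np.exp(-(beta0 + u)))         # response probability, constant over time
    return (rng.random((n, t)) < p).astype(int)

def pairwise_residuals(y, model_sim, reps=200):
    """Standardized residuals for second-order marginal (1,1) frequencies.

    Compares the observed proportion with y_s = y_t = 1 for every time
    pair (s, t) against the model-implied proportion, approximated here
    by Monte Carlo simulation from the fitted model."""
    n, t = y.shape
    sims = np.stack([model_sim() for _ in range(reps)])   # reps x n x t
    res = {}
    for s in range(t):
        for v in range(s + 1, t):
            obs = np.mean(y[:, s] * y[:, v])
            exp_p = np.mean(sims[:, :, s] * sims[:, :, v])
            se = np.sqrt(exp_p * (1.0 - exp_p) / n)       # binomial standard error
            res[(s, v)] = (obs - exp_p) / se
    return res

# Residuals of data simulated under the model itself should hover near zero;
# serial dependence in real data would inflate the adjacent-pair residuals.
y = simulate_panel(500, 4, sigma_u=1.0, beta0=0.0)
r = pairwise_residuals(y, lambda: simulate_panel(500, 4, 1.0, 0.0))
```

Targeting a test at adjacent pairs would amount to combining only the residuals with lag one, i.e. the keys (0,1), (1,2), (2,3) above.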
C32.4 Evaluation of LRT in joint modelling of repeated time-to-event and longitudinal data using nonlinear mixed effects models
M Vigan¹, F Mentré¹
¹IAME, UMR 1137, INSERM, Univ Paris Diderot, Paris, France

Joint modelling is used to describe the relationship between the evolution of biomarkers and events, repeated or not. The Stochastic Approximation Expectation Maximization (SAEM) algorithm implemented in Monolix has been extended and assessed for joint models. In the present study, we aim to evaluate, by simulation, the properties of the Likelihood Ratio Test (LRT) for the assessment of biomarker evolution on the occurrence of events.

Simulation settings are inspired from a real clinical study. Evolution of biomarkers is defined by an exponential-decrease nonlinear mixed effects model and the repeated time-to-event by a frailty model with an exponential baseline hazard function. Various scenarios are studied: i) no, mild or strong association between biomarker and events; ii) different probabilities of events; iii) different frequencies of biomarker measurements; and iv) no or some independent dropout. For each scenario, we simulate 500 datasets with 200 patients. Estimations were performed using the SAEM algorithm implemented in Monolix 4.3.0, with 3 Markov chains, and the likelihood was evaluated by Importance Sampling (IS) with 20000 chains. We evaluate the type I error and the power of the LRT according to the different scenarios.

For all scenarios, type I error was close to 5%. Powers were influenced by dropout and number of events. SAEM in Monolix and LRT with likelihood computed using IS gave good results.

C32.5 Estimation of the linear mixed integrated Ornstein-Uhlenbeck stochastic model
R Hughes¹, J Sterne¹, K Tilling¹
¹University of Bristol, Bristol, United Kingdom

Background: Longitudinal biomarker data (e.g. CD4 counts) are commonly analysed using a linear mixed model (LMM). For continuous data Taylor, Cumberland and Sy proposed a LMM with an added integrated Ornstein-Uhlenbeck (IOU) non-stationary stochastic process (LM-IOU model), which allows for autocorrelation and estimation of the degree of derivative tracking. Due to lack of available software, the LM-IOU model is rarely used.

Methods: We have implemented the LM-IOU model in Stata. Using simulations we assessed the feasibility and practicality of estimating the LM-IOU model by restricted maximum likelihood. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures.

Results: The Newton-Raphson (NR) algorithm achieved convergence with fewer iterations and faster computations than a combination of the Fisher-Scoring and NR algorithms, or of the Average-Information and NR algorithms. The combined algorithms did not provide additional robustness to starting values. When there was a strong degree of derivative tracking, convergence depended upon the parameterization of the IOU process. With respect to bias of the point estimates, a dataset of 500 subjects each with 20 measurements was preferable to a dataset of 1000 subjects each with 10 measurements. In some cases, LM-IOU models with random effects other than the random intercept failed to converge due to competition for the same source of stochastic variation.

Conclusion: The LM-IOU model can be fitted using standard software to balanced and unbalanced datasets, but LM-IOU models with two or more random effects may be impractical.
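The covariance that the IOU process adds to the usual LMM covariance has a closed form. The sketch below (a plain reconstruction under the standard parameterization of Taylor, Cumberland and Sy, not the authors' Stata code; all parameter values are illustrative assumptions) builds the marginal covariance matrix for one subject under a random-intercept LM-IOU model:

```python
import numpy as np

def iou_cov(times, alpha, sigma2):
    """Covariance matrix of an integrated Ornstein-Uhlenbeck process W(t),
    with W(0) = 0, driven by a stationary OU process:

    Cov(W(s), W(t)) = sigma2 / (2 * alpha**3)
                      * (2*alpha*min(s, t) + exp(-alpha*s) + exp(-alpha*t)
                         - 1 - exp(-alpha*|s - t|))

    Small alpha corresponds to a strong degree of derivative tracking."""
    tt = np.asarray(times, dtype=float)
    S, T = np.meshgrid(tt, tt, indexing="ij")
    return (sigma2 / (2.0 * alpha**3)) * (
        2.0 * alpha * np.minimum(S, T)
        + np.exp(-alpha * S) + np.exp(-alpha * T)
        - 1.0 - np.exp(-alpha * np.abs(S - T))
    )

def marginal_cov(times, tau2, alpha, sigma2, sigma_e2):
    """Marginal covariance of a random-intercept LM-IOU model for one
    subject: V = tau2 * J + IOU covariance + sigma_e2 * I."""
    m = len(times)
    return tau2 * np.ones((m, m)) + iou_cov(times, alpha, sigma2) + sigma_e2 * np.eye(m)

# Six equally spaced measurement occasions; illustrative variance components.
V = marginal_cov(np.linspace(0.0, 5.0, 6), tau2=1.0, alpha=0.5, sigma2=0.2, sigma_e2=0.1)
```

Restricted maximum likelihood estimation, as in the abstract, would profile this V over the variance components (tau2, alpha, sigma2, sigma_e2) for each subject's observed times.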
C33 Relative and net survival

C33.1 Flexible modeling of continuous covariates in Net Survival: additive vs multiplicative model
A Mahboubi¹, L Remontet², M Abrahamowicz³, C Binquet⁴, R Giorgi⁵, C Quantin⁴,⁶
¹Dijon University Hospital, Dijon, France; ²Hospices Civils de Lyon, Lyon, France; ³McGill University, Montreal, Canada; ⁴Inserm, U866, Univ de Bourgogne, Dijon, France; ⁵SESSTIM, Marseille, France; ⁶CHRU, Service de Biostatistique et d'Informatique Médicale, Dijon, France

Accurate assessment of the effects of continuous prognostic factors requires flexible modeling of both time-dependent (TD) and non-linear (NL) effects. To address this issue, two alternative flexible extensions of the Estève et al. model [a] have been developed [b,c]. Both models use cubic regression splines to estimate the TD and NL effects but differ in that the TD and NL effects of the covariate on the log-excess hazard are assumed to be additive [b] or multiplicative [c]. Specifically, the disease-specific hazards are written, respectively, as:

λ_c(t|z) = exp(g(t)) · exp(a_i(z_i) + b_i(t)·z_i)

and

λ_c(t|z) = exp(g(t)) · exp(a_i(z_i) · b_i(t))

where g(t) represents the baseline log-hazard and a_i(z_i) and b_i(t) represent, respectively, the NL and TD effects of the continuous covariate z_i. However, the impact of the differences in the assumptions underlying alternative models on the resulting estimates is unknown. To investigate the implications of these analytical differences, we applied both models to real-life datasets of cancers from registry-based studies. Results obtained with the two models were compared, in terms of estimated hazards, TD and NL effects of age at diagnosis, and their significance
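To make the contrast between the two formulations concrete, a minimal numerical sketch (using arbitrary smooth placeholder functions in place of the fitted cubic regression splines; every function and value here is an illustrative assumption, not a result from the paper):

```python
import numpy as np

def excess_hazard(t, z, g, a, b, form="additive"):
    """Disease-specific (excess) hazard under the two flexible models:

    additive:       lambda_c(t|z) = exp(g(t)) * exp(a(z) + b(t) * z)
    multiplicative: lambda_c(t|z) = exp(g(t)) * exp(a(z) * b(t))
    """
    if form == "additive":
        return np.exp(g(t)) * np.exp(a(z) + b(t) * z)
    return np.exp(g(t)) * np.exp(a(z) * b(t))

# Arbitrary smooth placeholder functions standing in for the fitted
# cubic regression splines (purely illustrative, not estimates):
g = lambda t: -1.0 + 0.1 * t              # baseline log-hazard
a = lambda z: 0.02 * (z - 60.0)           # NL effect of age, centred at 60
b = lambda t: 0.01 * np.exp(-0.2 * t)     # TD modulation decaying over time

# Excess hazard at t = 2 years for a patient aged 70 at diagnosis:
h_add = excess_hazard(2.0, 70.0, g, a, b, "additive")
h_mul = excess_hazard(2.0, 70.0, g, a, b, "multiplicative")
```

With the same spline components, the two forms generally yield different hazards, which is exactly the discrepancy the comparison on registry data is designed to probe.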
