
ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations

Monday, 25th August 2014 • 16:00-17:30

remains, especially for very small meta-analyses. One solution is to test the sensitivity of the meta-analysis conclusions to assumed moderate and large degrees of heterogeneity. Equally, whenever heterogeneity is detected, it should not be ignored.

C14.4 Estimating heterogeneity when pooling proportions in a meta-analysis
A Benedetti1, R Platt1, D Thomas1
1 McGill University, Montreal, Canada

The volume of medical research has exploded over the last several decades, increasing the focus on evidence-based medicine but also posing challenges for how to synthesize that evidence. Meta-analytic methods sit at the crossroads of these trends.
We consider pooling proportions via a generalized linear mixed model (GLMM), as in Hamza et al. (J Clin Epidemiol 61(1): 41-51). In contrast to inverse-variance weighted pooling (e.g. DerSimonian-Laird), one advantage of the GLMM is its ability to handle proportions of 0 or 1 without applying a continuity correction. Our interest lies in estimating the overall pooled proportion and the heterogeneity, defined as the inter-study variability, in situations where some proportions are 0 or 1.
In a simulation study, we vary the study sizes, the number of studies, the true inter-study heterogeneity and the true pooled proportion. We compare estimates of the pooled proportion and the inter-study variability from a GLMM estimated via penalized quasi-likelihood (PQL) and adaptive Gauss-Hermite quadrature (AGHQ), as well as from Bayesian approaches. We further compare our estimates of heterogeneity to I², as estimated from a DerSimonian and Laird random-effects model with a continuity correction. We demonstrate our methods in a real-life example.
Preliminary results suggest that inter-study heterogeneity is underestimated when the model is estimated via PQL or AGHQ, though to a lesser degree with AGHQ when the level of heterogeneity is high. The degree of underestimation increased as the number of proportions equal to 1 increased.
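As a rough, self-contained illustration of the GLMM approach (not the authors' implementation), the sketch below fits a random-intercept logistic model, logit(p_i) = mu + u_i with u_i ~ N(0, tau²), to invented study counts, integrating out the random effect with plain (non-adaptive) Gauss-Hermite quadrature; the data, the number of quadrature nodes and the starting values are assumptions made for illustration. Note that zero-event studies enter directly, with no continuity correction.

```python
# Illustrative sketch only: random-intercept logistic GLMM for pooling
# proportions, with the random effect integrated out by Gauss-Hermite
# quadrature. Data and tuning choices are invented for the example.
import numpy as np
from scipy import optimize, special, stats

# events y_i out of n_i per study; note the zero-event studies
y = np.array([0, 2, 5, 0, 11, 3])
n = np.array([12, 40, 55, 20, 130, 33])

# Gauss-Hermite nodes/weights for the weight function exp(-x^2)
nodes, weights = np.polynomial.hermite.hermgauss(21)

def neg_loglik(params):
    """Marginal -log-likelihood of logit(p_i) = mu + u_i, u_i ~ N(0, tau^2)."""
    mu, log_tau = params
    tau = np.exp(log_tau)
    u = np.sqrt(2.0) * tau * nodes                      # substitution u = sqrt(2)*tau*x
    p = special.expit(mu + u[:, None])                  # shape (nodes, studies)
    log_binom = stats.binom.logpmf(y, n, p)             # binomial log-density per node/study
    # per-study log of sum_k w_k * g(x_k) / sqrt(pi), kept on the log scale for stability
    per_study = special.logsumexp(log_binom + np.log(weights)[:, None], axis=0)
    return -(per_study - 0.5 * np.log(np.pi)).sum()

fit = optimize.minimize(neg_loglik, x0=np.array([-2.0, np.log(0.5)]),
                        method="Nelder-Mead")
mu_hat, tau_hat = fit.x[0], np.exp(fit.x[1])
print("pooled proportion (back-transformed from the logit scale):", special.expit(mu_hat))
print("between-study standard deviation on the logit scale (tau):", tau_hat)
```

The PQL, AGHQ and Bayesian fits compared in the abstract approximate this same marginal likelihood in different ways; the between-study variance tau² is the heterogeneity quantity contrasted with I² above.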
C14.5 Meta-analysis and the Surgeon General's report on smoking and health
M Schumacher1, G Rücker1, G Schwarzer1
1 Medical Center-University of Freiburg, Freiburg, Germany

Although the first meta-analyses in medicine were conducted at the beginning of the 20th century, the major breakthrough came with the activities of the Cochrane Collaboration during the 1990s. It is less well known that the landmark report on "Smoking and Health" to the Surgeon General of the Public Health Service, published fifty years ago, makes substantial use of meta-analyses performed by the statistician William G. Cochran.
Based on summary data given in the report, we reconstructed meta-analyses of seven large prospective studies that were initiated in the 1950s, concentrating on overall and lung cancer mortality (N Engl J Med 370(2): 186-188; January 9, 2014). While visualization of results, including confidence intervals, was largely neglected in the report, we are able to give a vivid impression of the overwhelming evidence on the harmful effects of cigarette smoking. We will put William G. Cochran's contribution into the context of the so-called lung cancer controversy, in which other prominent statisticians, e.g. Sir Ronald Fisher, played a major role. In contrast to the latter, who selected a specific study that supported his personal view, Cochran took an impartial, systematic approach to the evaluation and followed the major steps of a modern systematic review, including an appraisal of risk of bias based on sensitivity analysis. For that he used state-of-the-art statistical methodology, while neglecting visualization of results. Although it contributed substantially to an important public policy issue, this work is often overlooked and deserves much more attention.

C15 Longitudinal data analysis I

C15.1 Longitudinal models with outcome-dependent follow-up times
I Sousa1
1 Universidade do Minho, Guimarães, Portugal

In longitudinal studies, individuals are measured repeatedly over a period of time. In observational studies, individuals have different numbers of measurements, assessed at different times. In standard longitudinal models (Diggle et al. 2002), the follow-up time process is assumed deterministic, that is, noninformative about the longitudinal outcome process of interest. However, in medical research a patient is usually measured according to their clinical condition. The follow-up time process should therefore be considered dependent on the longitudinal outcome process rather than deterministic. Classical longitudinal analysis does not account for the dependence that can exist between the follow-up time process and the longitudinal outcome process. In this work we propose to model the longitudinal process and the follow-up time process jointly, treating the follow-up time process as stochastic.
The model is specified through the joint distribution of the observed process and the follow-up time process. Model parameters are estimated by maximum likelihood, for which a Monte Carlo approximation is necessary. We conducted a simulation study of longitudinal data in which parameter estimates from the proposed model are compared with those from the model of Lipsitz et al. (2002). Finally, the proposed model is applied to data from an observational study on kidney function.
References:
Diggle PJ, Liang K-Y, Zeger SL (2002). Analysis of Longitudinal Data. Oxford: Clarendon Press.
Lipsitz SR, Fitzmaurice GM, Ibrahim JG, Gelber R, Lipshultz S (2002). Parameter estimation in longitudinal studies with outcome-dependent follow-up. Biometrics 58, 50-59.
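The abstract does not spell out the model's exact form. Purely as a hedged illustration, suppose the dependence between the longitudinal outcomes y_i and their (stochastic) follow-up times t_i is induced by a shared latent variable b_i; the observed-data likelihood then involves an intractable integral over b_i, which can be approximated by Monte Carlo sampling from the latent-variable distribution:

```latex
% Hedged sketch only: a shared latent variable b_i linking the outcome process
% and the follow-up time process; not necessarily the author's exact specification.
\[
  L(\theta) = \prod_{i=1}^{N} \int
      f(y_i \mid t_i, b_i; \theta)\, f(t_i \mid b_i; \theta)\, f(b_i; \theta)\, db_i
  \;\approx\; \prod_{i=1}^{N} \frac{1}{M} \sum_{m=1}^{M}
      f\bigl(y_i \mid t_i, b_i^{(m)}; \theta\bigr)\,
      f\bigl(t_i \mid b_i^{(m)}; \theta\bigr),
  \qquad b_i^{(m)} \sim f(\,\cdot\,; \theta).
\]
```

Maximizing this Monte Carlo approximation over theta yields the likelihood-based estimates; if f(t_i | b_i) does not depend on b_i, the standard model with noninformative follow-up times is recovered.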
C15.2 Joint modelling of longitudinal and survival data incorporating delayed entry: application to longitudinal mammographic breast density and breast cancer survival
MJ Crowther1, TM-L Andersson2, PC Lambert1,2, K Humphreys2
1 University of Leicester, Leicester, United Kingdom, 2 Karolinska Institutet, Stockholm, Sweden

This work is motivated by a study in breast cancer in which we are particularly interested in how changes in mammographic density are associated with survival. This type of question, where a longitudinally measured biomarker is associated with the risk of an event, can be addressed using joint modelling. The most common approach to joint modelling combines a longitudinal mixed-effects model for the repeated measurements with a proportional hazards survival model, the two linked through shared random effects.
The analysis in the motivating study is complicated by the fact that patients are only included if they have at least two density measurements. Patients therefore do not become at risk of the event of interest until the time of the second measurement, which is by definition after the baseline of diagnosis. Consequently, delayed entry needs to be incorporated into the modelling framework. The extension to delayed entry requires a second set of numerical integration, beyond that required in a standard joint model (sketched below).
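As a hedged sketch of where the additional integration arises (one standard shared-random-effects formulation, not necessarily the exact submodels used here), write m_i(t) for the subject-specific biomarker trajectory implied by the mixed model with random effects b_i, and let the hazard depend on it through an association parameter alpha. With delayed entry at the time of the second measurement, t_{0i}, each subject's likelihood contribution is conditioned on being event-free at entry:

```latex
% Hedged sketch: shared-random-effects joint model with delayed entry
% (left truncation) at t_{0i}; the submodel forms are illustrative assumptions.
%   Longitudinal: y_{ij} = m_i(t_{ij}) + e_{ij},  m_i(t) = x_i(t)'\beta + z_i(t)' b_i
%   Survival:     h_i(t \mid b_i) = h_0(t) \exp\{ \varphi' w_i + \alpha\, m_i(t) \}
\[
  L_i(\theta) =
  \frac{\displaystyle\int f(y_i \mid b_i)\, f(T_i, \delta_i \mid b_i)\, f(b_i)\, db_i}
       {\displaystyle\int S_i(t_{0i} \mid b_i)\, f(b_i)\, db_i},
  \qquad
  S_i(t \mid b_i) = \exp\!\Bigl\{ -\int_0^{t} h_i(s \mid b_i)\, ds \Bigr\}.
\]
```

The numerator is the usual joint-model contribution; the denominator, which conditions on survival to the entry time, introduces a second integral over the random effects, while the cumulative hazard inside S_i additionally requires integration over time.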

We therefore implement two sets of fully adaptive Gauss-Hermite quadrature with nested Gauss-Kronrod quadrature, conducted simultaneously, to evaluate the likelihood. We evaluate the importance of accounting for