
ISCB 2014 Abstract Book

C13.5 Bounds for causal interaction
A Sjolander1, W Lee2, H Kallberg1, Y Pawitan1
1 Karolinska Institutet, Stockholm, Sweden; 2 Inha University, Incheon, Republic of Korea

Interaction appears in the statistical and epidemiological literature in at least two forms: (i) statistical interaction, defined as deviation from an additive model, and (ii) causal interaction, defined through latent classes in the population that describe how subjects causally respond to risk factors, e.g. a class of people who develop a disease if and only if both of two risk factors are present. Almost all analyses of interaction in the literature are of the first type, which is conceptually problematic because statistical interaction is scale-dependent. For example, absence of interaction on a multiplicative scale, such as in a logistic model, generally implies the presence of interaction on the additive (linear probability) scale. Causal interaction, in contrast, is invariant to the choice of scale, but has the disadvantage that the latent classes are not estimable from the observed data. A well-known solution is to test for the presence of causal interaction, but such a test does not tell us its magnitude. In this work we address the problem by providing lower and upper bounds for the causal interaction between two risk factors with arbitrary numbers of levels. When the bounds are tight, the magnitude is well captured. In a real-data example on rheumatoid arthritis, we obtain tight bounds for two genetic risk factors when we additionally assume that their effects are monotone. In conclusion, causal interaction is a useful general data-analytic concept, complementary to standard statistical interaction, and it can be assessed in practice in commonly available datasets.
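To make the idea of bounding a non-identifiable causal interaction concrete, the display below sketches the simplest special case of two binary risk factors under no confounding. The notation (potential outcomes Y(x,z), risks p_xz) and the bound shown are a standard textbook illustration, not the authors' general bounds for factors with arbitrary numbers of levels.

```latex
% Two binary risk factors X, Z with potential outcomes Y(x,z) for a binary disease.
% One "causal interaction" class consists of subjects who develop disease
% if and only if both factors are present:
%   Y(1,1) = 1,  Y(1,0) = 0,  Y(0,1) = 0.
% Writing p_{xz} = P(Y = 1 \mid X = x, Z = z) and assuming no confounding,
% so that P\{Y(x,z) = 1\} = p_{xz}, the prevalence of this class obeys
\[
  p_{11} - p_{10} - p_{01}
  \;\le\;
  P\bigl\{Y(1,1)=1,\; Y(1,0)=0,\; Y(0,1)=0\bigr\}
  \;\le\;
  p_{11},
\]
% because the event Y(1,1)=1 either coincides with Y(1,0)=1 or Y(0,1)=1
% (total probability at most p_{10} + p_{01}) or places the subject in the class.
% A strictly positive lower bound therefore certifies causal interaction,
% even though individual response types are not identifiable from the data.
```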
C14 Meta-analysis

C14.1 Addressing continuous missing outcomes in pair-wise and network meta-analysis
D Mavridis1,2, IR White3, JP Higgins4,5, A Cipriani6, G Salanti2
1 University of Ioannina, Department of Primary Education, Ioannina, Greece; 2 University of Ioannina, School of Medicine, Ioannina, Greece; 3 MRC Biostatistics Unit, Cambridge, United Kingdom; 4 University of Bristol, Bristol, United Kingdom; 5 University of York, York, United Kingdom; 6 University of Oxford, Oxford, United Kingdom

Missing outcome data may affect the results of individual trials and of their meta-analysis by reducing precision and, if the missing-at-random (MAR) assumption does not hold, by introducing bias into the estimated treatment effects.
We propose a pattern-mixture model to estimate meta-analytic treatment effects for continuous outcomes when these are missing for some of the randomised individuals. Our model is applicable to both pairwise and network meta-analysis (NMA) and makes explicit assumptions about parameters of the unobserved data conditional on the observed data. Specifically, in each study we quantify departures from the MAR assumption via a missingness parameter that relates the outcome means in the observed and the missing data. This leads to an adjusted estimate of the effect size, and the uncertainty in this estimate is evaluated using either Monte Carlo methods or a Taylor series approximation. The adjusted effect size properly accounts for the fact that some of the outcome data are missing.
We illustrate the suggested methodology using a meta-analysis of studies comparing mirtazapine to placebo for depression and an NMA involving nine antidepressants. The summary mean difference of mirtazapine relative to placebo decreases from -2.34 (95% CI -4.67 to 0) to -2.66 (95% CI -4.90 to -0.41) as we depart from the MAR assumption. When we account for missing outcome data, study weights depend on the missingness rates, and summary results may change if missingness rates vary considerably across studies. As we depart from the MAR assumption, within-study uncertainty increases but between-study heterogeneity decreases, and changes in the summary estimates depend on the trade-off between these two sources of variability.
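The single-trial adjustment behind this kind of pattern-mixture approach can be sketched in a few lines of Python. Everything below is illustrative rather than the authors' implementation: the function name, the normal distribution placed on the missingness parameter, and the example numbers are ours. The idea it encodes is simply that each arm's mean is shifted by (missingness rate) x (assumed difference between missing and observed means), with the extra uncertainty propagated by Monte Carlo.

```python
import numpy as np

def adjusted_mean_difference(y1_obs, y0_obs, pi1, pi0, se1_obs, se0_obs,
                             lambda_mean=0.0, lambda_sd=1.0,
                             n_draws=10_000, rng=None):
    """Hypothetical pattern-mixture adjustment for one two-arm trial.

    y*_obs  : observed outcome means in the two arms
    pi*     : proportions of randomised participants with missing outcomes
    se*_obs : standard errors of the observed means
    lambda_*: mean/SD of the assumed difference between the (unobserved)
              missing-data mean and the observed mean in each arm;
              lambda_mean = lambda_sd = 0 recovers the usual MAR analysis.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw the missingness parameter independently for each arm.
    lam1 = rng.normal(lambda_mean, lambda_sd, n_draws)
    lam0 = rng.normal(lambda_mean, lambda_sd, n_draws)
    # Draw the observed arm means from their sampling distributions.
    m1 = rng.normal(y1_obs, se1_obs, n_draws)
    m0 = rng.normal(y0_obs, se0_obs, n_draws)
    # Adjusted arm mean = observed mean + (missing fraction) * lambda.
    diff = (m1 + pi1 * lam1) - (m0 + pi0 * lam0)
    return diff.mean(), diff.std(ddof=1)

# Toy example: 30% missing in the active arm, 20% in the placebo arm.
est, se = adjusted_mean_difference(-4.0, -1.5, 0.30, 0.20, 0.9, 0.8,
                                   lambda_mean=0.0, lambda_sd=1.0)
print(f"adjusted mean difference {est:.2f} (SE {se:.2f})")
```

The adjusted estimate and standard error from each trial can then be fed into a pairwise or network meta-analysis in the usual way.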
C14.2 The impact of choice of heterogeneity estimator in meta-analysis
D Langan1, J Higgins1, M Simmonds1
1 University of York, York, United Kingdom

In meta-analyses, effect estimates from different studies usually vary above and beyond what would be expected by chance alone, owing to inherent variation in the design and conduct of the studies. This variation is known as heterogeneity and is most commonly estimated using the moment-based approach described by DerSimonian & Laird. However, simulation studies have shown that this method produces biased estimates. Alternative methods for estimating the heterogeneity variance include proposals by Paule & Mandel and by Hartung & Makambi, and estimators derived from maximum likelihood and restricted maximum likelihood approaches. This presentation compares these methods, and the impact they have on the results of a meta-analysis, using 12,894 meta-analyses extracted from the Cochrane Database of Systematic Reviews. The methods are compared in terms of (1) the extent of heterogeneity, expressed via the I² statistic; (2) the overall effect estimate; (3) the 95% confidence interval of the overall effect estimate; and (4) p-values testing the hypothesis of no effect. The results suggest that in some meta-analyses I² estimates can differ by more than 50% when different heterogeneity estimators are used. Conclusions based naively on statistical significance (at the 5% level) were discordant for at least one pair of estimators in 7.4% of meta-analyses, indicating that the choice of heterogeneity estimator can affect the conclusions of a meta-analysis. These findings highlight the need for a better understanding of why heterogeneity estimates disagree and for guidance on alternatives to the DerSimonian & Laird method.

C14.3 A re-analysis of the Cochrane Library data: the dangers of unobserved heterogeneity in meta-analyses
E Kontopantelis1, D Springate1, D Reeves1
1 University of Manchester, Manchester, United Kingdom

Background: Heterogeneity plays a key role in meta-analysis methods and can greatly affect conclusions. However, true levels of heterogeneity are unknown, and researchers often assume homogeneity.
Methods and findings: We accessed 57,397 meta-analyses available in the Cochrane Library in August 2012. Using simulated data, we assessed the performance of various meta-analysis methods in different scenarios. The prevalence of a zero heterogeneity estimate in the simulated scenarios was compared with that in the Cochrane data in order to estimate the degree of unobserved heterogeneity in the latter. We then re-analysed all meta-analyses and assessed the sensitivity of the statistical conclusions. Levels of unobserved heterogeneity in the Cochrane data appeared to be high, especially for small meta-analyses. A bootstrapped version of the DerSimonian-Laird approach performed best both in detecting heterogeneity and in returning more accurate overall effect estimates. Re-analysing all meta-analyses with this new method, we found that in cases where heterogeneity had originally been detected but ignored, 17-20% of the statistical conclusions changed.
Conclusions: When evidence for heterogeneity is lacking, standard practice is to assume homogeneity and apply a simpler fixed-effect meta-analysis. We find that assuming homogeneity often results in a misleading analysis, since heterogeneity is very likely present but undetected. Our new method represents a small improvement, but the problem largely remains.
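The two abstracts above both revolve around the moment-based DerSimonian-Laird estimator of the between-study variance. As a point of reference, the following Python sketch is a generic textbook implementation (not code from either study) that returns the heterogeneity estimate, the random-effects summary, and the I² statistic.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Moment-based DerSimonian-Laird estimate of the between-study
    variance tau^2, plus the resulting random-effects summary estimate.

    y : study effect estimates (e.g. mean differences)
    v : their within-study variances
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)         # fixed-effect summary
    q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # truncated at zero
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return tau2, mu_re, se_re, i2

# Toy example with five studies.
tau2, mu, se, i2 = dersimonian_laird(
    y=[-0.3, -0.1, -0.5, 0.1, -0.4],
    v=[0.04, 0.09, 0.05, 0.16, 0.08])
print(f"tau^2={tau2:.3f}, summary={mu:.3f} (SE {se:.3f}), I^2={i2:.1f}%")
```

Swapping in a Paule & Mandel, Hartung & Makambi or (restricted) maximum likelihood estimator amounts to replacing the tau² line while keeping the rest of the computation unchanged, which is why the choice of estimator propagates directly into the summary effect, its confidence interval, and I².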

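Abstract C14.3 singles out a bootstrapped version of the DerSimonian-Laird approach but does not spell out the algorithm. The sketch below resamples studies with replacement and re-applies the estimator, which is one plausible reading of such a procedure rather than the method actually used in the re-analysis.

```python
import numpy as np

def _dl(y, v):
    """Plain DerSimonian-Laird tau^2 and random-effects summary (as above)."""
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = 1.0 / (v + tau2)
    return tau2, np.sum(w_re * y) / np.sum(w_re)

def bootstrap_dl(y, v, n_boot=2000, rng=None):
    """Resample studies with replacement and re-apply DerSimonian-Laird;
    a plausible reading of a 'bootstrapped DL' procedure, not necessarily
    the algorithm used in the Cochrane re-analysis."""
    rng = np.random.default_rng() if rng is None else rng
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    tau2s = np.empty(n_boot)
    mus = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, k, size=k)      # bootstrap sample of studies
        tau2s[b], mus[b] = _dl(y[idx], v[idx])
    lo, hi = np.percentile(mus, [2.5, 97.5])
    return tau2s.mean(), mus.mean(), (lo, hi)

# Same toy data as in the previous sketch.
tau2_b, mu_b, (lo, hi) = bootstrap_dl(
    y=[-0.3, -0.1, -0.5, 0.1, -0.4],
    v=[0.04, 0.09, 0.05, 0.16, 0.08])
print(f"bootstrap tau^2 = {tau2_b:.3f}, summary = {mu_b:.3f} "
      f"(95% percentile CI {lo:.3f} to {hi:.3f})")
```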