
ISCB 2014 Abstract Book

ISCB 2014 Vienna, Austria • Abstracts - Oral Presentations

Monday, 25th August 2014 • 16:00-17:30

…cation to them remains high; on the other hand, if they are ineffective, the allocation changes over the course of the trial to ones that are effective. The proposed design has high power to recommend treatments that work well in subgroups, especially if the initial pairings were suitable. Also considered is a biomarker discovery step, in which a new biomarker can be substituted in during the trial. This can increase power when the new biomarker is truly predictive for one of the treatments.

C17.5 Design of telehealth trials - introducing adaptive approaches
LM Law1, J Wason1
1 MRC Biostatistics Unit, Cambridge, United Kingdom

Telehealth is the use of technology to allow communication of information between patient and care-provider while the patient is outside the clinical environment, e.g. in their own home. The range of telehealth is broad, from the self-monitoring of blood glucose levels in diabetics to patients with mental illness receiving therapy online. The field of telehealth and telemedicine is expanding as the need to improve the efficiency of health care becomes more pressing. The decision to implement a telehealth system can be an expensive undertaking that affects a large number of patients and other stakeholders, so it is important that the decision is fully supported by accurate evaluation of telehealth interventions. Numerous reviews of telehealth have described the evidence base as inconsistent. In response they call for larger, more rigorously controlled trials, and for trials that go beyond evaluation of clinical effectiveness alone. Adaptive designs could be ideal for addressing these needs. This presentation discusses various options for adaptive designs, which have so far been applied only in drug trials.
These include sample size reviews to address uncertain parameters, group sequential and multi-arm multi-stage trials to improve efficiency, and enrichment designs to target the patient population that responds best to the intervention. The presentation will then focus on an example of a telehealth study, using simulated data to demonstrate the benefit of employing an adaptive design over a standard design.

C18 Binary and count data analysis

C18.1 Multiple comparisons of treatments with highly skewed ordinal responses
T-Y Lu1, W-Y Poon2, SH Cheung2
1 China Jiliang University, Hangzhou, China, 2 The Chinese University of Hong Kong, Hong Kong, China

Clinical studies frequently involve comparisons of treatments with ordinal responses. The Wilcoxon-Mann-Whitney test and its modified versions based on the proportional odds assumption are popular methods for comparing treatments with ordinal responses. However, it has long been recognized that the validity of these methods depends heavily on the equal variance assumption. A recently proposed latent normal model has been shown to be a better alternative when treatments have heterogeneous variances. However, for highly skewed ordinal data, the latent normal model, which relies on the assumption of symmetric underlying distributions, does not perform satisfactorily. To remedy the problem, we propose a new approach for treatment comparisons with highly skewed ordinal responses, adopting a latent Weibull model for multiple comparisons, including multiple comparisons with a control and pairwise comparisons. Our findings indicate that this new approach is superior to the latent normal model. Data from clinical studies are used to illustrate the proposed procedure.
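The latent-variable view of ordinal data in abstract C18.1 can be sketched numerically: ordinal categories arise by cutting a continuous latent variable at fixed thresholds, and a right-skewed latent distribution such as the Weibull produces the skewed category frequencies the abstract refers to. The thresholds and distribution parameters below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
cuts = np.array([0.5, 1.0, 2.0])  # hypothetical category thresholds

# Latent responses: symmetric (normal) vs right-skewed (Weibull)
z_normal = rng.normal(loc=1.0, scale=1.0, size=100_000)
z_weibull = rng.weibull(1.2, size=100_000)

# Observed ordinal response = number of thresholds below the latent value
cat_normal = np.searchsorted(cuts, z_normal)    # categories 0..3
cat_weibull = np.searchsorted(cuts, z_weibull)  # categories 0..3

for name, cat in [("normal", cat_normal), ("weibull", cat_weibull)]:
    freq = np.bincount(cat, minlength=4) / cat.size
    print(name, np.round(freq, 3))
```

The skewed latent law concentrates mass in the lowest categories and thins out the top one, which is the setting where a symmetric latent normal model fits poorly.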
C18.2 Calculating confidence intervals for risk differences by means of MOVER-R
R Bender1, RG Newcombe2
1 Institute for Quality and Efficiency in Health Care (IQWiG), Cologne, Germany, 2 Cardiff University, Cardiff, United Kingdom

In Cochrane reviews, as well as in the GRADE system, absolute estimates of treatment effect are frequently calculated by combining a relative risk (RR) estimate from a meta-analysis with an independent baseline risk (BR) estimate. Spencer et al. (BMJ 2012; 345: e7401) pointed out that GRADE and all other systems for rating confidence in absolute treatment effect estimates do not fully address uncertainties in BR estimates. If BR and RR are estimated from different independent sources, confidence limits for the corresponding risk difference (RD) can be calculated from those for BR and RR by the method of variance estimates recovery (MOVER-R) of Newcombe (Stat. Methods Med. Res. 2013). This method is explained and applied to examples. The resulting confidence intervals are compared with those obtained by the method currently used in Cochrane reviews, and with those obtained by the naive method of directly combining the confidence limits for RR and BR. It is shown that a simple and effective method is available to calculate confidence intervals for the absolute treatment effect from independent interval estimates of BR and RR, taking both sources of uncertainty into account. This method should be applied in practice.

C18.3 Misspecified Poisson regression models for large-scale registry data: problems with "large n and small p"
R Grøn1, TA Gerds1, PK Andersen1
1 Section of Biostatistics, University of Copenhagen, Copenhagen K, Denmark

Poisson regression based on registry data is an important tool in applied epidemiology, used to study the association between exposure and event rates.
In this talk we illustrate problems related to "small p and large n", where p is the number of available covariates and n is the sample size. Specifically, we are concerned with modeling options when there are multiple timescales and time-varying covariates which can have time-varying effects. One problem is that tests for proportional hazards assumptions, for interactions of exposure with other observed variables, and for linearity of the exposure effects have high power due to the large sample size, and will often indicate statistical significance even for numerically small deviations that are of no subject-matter interest. In practice this insight may lead to simple working models (which are then likely misspecified and potentially confounded). To support and improve conclusions drawn from such models, we discuss the use of robust standard errors, the choice of timescales, and sensitivity analysis. The methods are illustrated with data from the Danish national registries.
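The robust-standard-error remedy mentioned in C18.3 can be illustrated with a minimal sketch (the data-generating parameters and variable names here are hypothetical, not from the talk): fit a Poisson regression by Newton-Raphson, then compare model-based standard errors with sandwich (robust) ones on overdispersed count data, where the Poisson variance assumption is deliberately misspecified.

```python
import numpy as np

def poisson_fit(X, y, iters=25):
    """Poisson regression via Newton-Raphson.
    Returns coefficients, model-based SEs, and robust (sandwich) SEs."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    A = (X.T * mu) @ X               # Fisher information ("bread")
    B = (X.T * (y - mu) ** 2) @ X    # empirical score variance ("meat")
    A_inv = np.linalg.inv(A)
    se_model = np.sqrt(np.diag(A_inv))
    se_robust = np.sqrt(np.diag(A_inv @ B @ A_inv))
    return beta, se_model, se_robust

# Overdispersed counts: negative binomial with the Poisson model's mean,
# so the mean structure is correct but the variance assumption is not
rng = np.random.default_rng(1)
n = 20_000
exposure = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), exposure])
mu_true = np.exp(0.1 + 0.3 * exposure)
y = rng.negative_binomial(2, 2 / (2 + mu_true)).astype(float)

beta, se_model, se_robust = poisson_fit(X, y)
print("rate ratio estimate:", np.exp(beta[1]))
print("model-based SE:", se_model[1], "robust SE:", se_robust[1])
```

Under overdispersion the sandwich SE for the exposure effect exceeds the model-based one, while the point estimate remains consistent for the rate ratio; this is why robust standard errors are a natural companion to deliberately simple working models.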
