ISCB 2014 Abstract Book

ISCB 2014, Vienna, Austria • Abstracts - Poster Presentations
Wednesday, 27th August 2014 • 15:30-16:00

Thanks to the XworX platform, further analyses can be appended to the workflow by anyone with minimal programming skills, because the platform allows easy and direct integration of R, Python and Java code.

P4.5.98
Use of factor analysis in assessments of clinical teaching evaluations
J Mandrekar1
1 Mayo Clinic, Rochester, MN, United States

Factor analysis is a generic term for a family of statistical techniques concerned with the reduction of a set of observable variables to a small number of latent factors. It has been developed primarily for analyzing relationships among a number of measurable entities (such as survey items or test scores).
The underlying assumption of factor analysis is that there exists a number of unobserved latent variables (or "factors") that account for the correlations among the observed variables, such that if the latent variables are partialled out or held constant, the partial correlations among the observed variables all become zero. In other words, the latent factors determine the values of the observed variables. Factor analysis has been widely used, especially in the behavioral sciences, to assess the construct validity of a test or a scale (a brief illustrative sketch appears below, after abstract P4.5.128).
The focus of this talk is to provide an introduction to factor analysis in the context of research projects from medical education that involve clinical teaching evaluations, for example resident-teacher evaluations, residents' reflections on quality improvement, etc.

P4.5.128
How to choose a two-sample test for continuous variables: a new solution to an old problem
A Poncet1, DS Courvoisier1, C Combescure1, TV Perneger1
1 University Hospital of Geneva, Geneva, Switzerland

Objectives: To explore the importance of normality and sample size when choosing the best two-sample test for continuous data.
Study design: Simulation study that compared four tests (T test, Mann-Whitney, robust T, permutation) applied to samples of various sizes (10 to 500) drawn from four distributions (normal, uniform, log-normal, bimodal) under the null hypothesis and under the alternative (difference between means of 0.25 standard deviations), with equal unit variance in all distributions.
Results: Type 1 errors were well controlled in all situations. The T test was the most powerful for data drawn from the normal and the uniform distributions, but only by a narrow margin. The Mann-Whitney test was the most powerful option for data drawn from asymmetric distributions; compared to the T test the gain in power was often large, especially for the highly skewed log-normal distribution. Of note, even the T test was more powerful under asymmetric distributions than under the normal distribution. In the presence of outliers (bimodal distribution), the robust T test was the most powerful.
Conclusions: All tests performed well under the four distributions, at all sample sizes: type 1 errors were on target, and assumption violations did not reduce power. This justifies opting for the test that best fits the scientific hypothesis, regardless of normality or sample size. To select the most powerful test, the symmetry of the distribution is the key criterion: for asymmetric distributions the Mann-Whitney test is the most powerful; for symmetric distributions it is the T test.
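As a rough illustration of the kind of comparison described in P4.5.128 (a sketch only, not the authors' simulation code), the following Python snippet estimates the power of the T test and the Mann-Whitney test for a 0.25 SD mean shift under a normal and a standardised log-normal distribution. The sample size, number of replications, random seed and use of scipy.stats are assumptions made here for brevity.

```python
# Minimal power simulation in the spirit of abstract P4.5.128 (illustrative only):
# compare the T test and the Mann-Whitney test for a mean shift of 0.25 SD
# under a normal and a standardised (unit-variance) log-normal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sim, shift = 50, 2000, 0.25           # per-group size, replications, effect size in SD units

def draw(dist, size):
    """Draw a sample with mean 0 and unit variance from the named distribution."""
    if dist == "normal":
        return rng.normal(0.0, 1.0, size)
    if dist == "lognormal":
        x = rng.lognormal(0.0, 1.0, size)  # highly skewed; standardise to unit variance
        return (x - np.exp(0.5)) / np.sqrt((np.e - 1) * np.e)
    raise ValueError(dist)

for dist in ("normal", "lognormal"):
    rej_t = rej_mw = 0
    for _ in range(n_sim):
        x, y = draw(dist, n), draw(dist, n) + shift
        rej_t += stats.ttest_ind(x, y).pvalue < 0.05
        rej_mw += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < 0.05
    print(f"{dist:10s}  power T test: {rej_t / n_sim:.2f}   Mann-Whitney: {rej_mw / n_sim:.2f}")
```

The robust T and permutation tests compared in the abstract could be added to the same loop along the same lines.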
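Returning to abstract P4.5.98 above: as a purely hypothetical illustration of the factor model it describes (latent factors that account for the correlations among observed items), the sketch below simulates six evaluation items driven by a single latent factor and recovers the loadings with scikit-learn's FactorAnalysis. The simulated data, loading values and choice of library are illustrative assumptions, not part of the abstract.

```python
# Illustrative sketch only: one latent "teaching quality" factor generating six
# correlated evaluation items, recovered by exploratory factor analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_raters, n_items = 500, 6
loadings = np.array([0.9, 0.8, 0.8, 0.7, 0.6, 0.5])  # hypothetical item loadings

factor = rng.normal(size=(n_raters, 1))               # latent factor scores
noise = rng.normal(size=(n_raters, n_items)) * 0.5    # item-specific error
items = factor @ loadings[None, :] + noise            # observed item scores

fa = FactorAnalysis(n_components=1).fit(items)
print("estimated loadings:", np.round(fa.components_.ravel(), 2))  # sign is arbitrary
# Conditioning on the latent factor leaves (approximately) zero partial
# correlations among the items, which is the model assumption described above.
```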
P4.5.133
Properties of ANOVA and its alternatives under violation of their assumptions: a simulation study
Z Reiczigel1, M Ladanyi1
1 Corvinus University of Budapest, Budapest, Hungary

One-way ANOVA is applied in almost all research areas, but many users are uncertain what to do if its applicability conditions, normality and homoscedasticity, seem to be violated.
There are alternative methods for unequal group variances (Welch, Brown-Forsythe, WLS), and nonparametric methods such as the Kruskal-Wallis test that are applicable under non-normality. There is a widespread misconception that the Kruskal-Wallis test works fine under heteroscedasticity, or that ANOVA is robust against violations of its assumptions if the group sizes are equal.
Our aim is to carry out a systematic simulation study to explore the properties of some of the available methods and to make recommendations for non-statistician users based on the results.
We compared one-way ANOVA, the Kruskal-Wallis test, Welch's method, the Brown-Forsythe method and bootstrap ANOVA, as well as the strategy of pre-testing for heteroscedasticity with Levene's test and using ANOVA or Welch's method depending on its result. We used normal as well as non-normal distributions (uniform, chi-squared, exponential, symmetric bimodal and skewed bimodal), combined with heteroscedasticity (2-fold or 3-fold SD). The nominal alpha was set to 5%, and we simulated the actual alpha.
Under non-normality combined with heteroscedasticity, classical ANOVA performs rather poorly, even with large samples and a balanced design. Heteroscedasticity totally invalidates the Kruskal-Wallis test. Alternative methods, such as Welch, Brown-Forsythe and bootstrap ANOVA, perform better. Pre-testing the equality of variances with Levene's test and choosing between Welch's test and classical ANOVA based on its result was almost always worse than using Welch's test without any pre-testing.
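As a toy version of a single cell of such a simulation grid (the group sizes, standard deviations and number of replications below are assumptions, not the authors' design), the following Python sketch estimates the actual type 1 error of classical one-way ANOVA and the Kruskal-Wallis test for heteroscedastic normal groups in an unbalanced design, using scipy.stats.

```python
# Toy simulation (assumed settings): actual type 1 error of classical one-way
# ANOVA and the Kruskal-Wallis test when group means are equal but standard
# deviations and group sizes are not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sizes, sds = (10, 20, 40), (3.0, 2.0, 1.0)  # unbalanced design, 3-fold SD ratio
n_sim, alpha = 5000, 0.05

rej_anova = rej_kw = 0
for _ in range(n_sim):
    groups = [rng.normal(0.0, sd, n) for n, sd in zip(sizes, sds)]  # equal means
    rej_anova += stats.f_oneway(*groups).pvalue < alpha
    rej_kw += stats.kruskal(*groups).pvalue < alpha

print(f"actual alpha, classical ANOVA : {rej_anova / n_sim:.3f}")
print(f"actual alpha, Kruskal-Wallis  : {rej_kw / n_sim:.3f}")
# In this configuration both tend to exceed the nominal 5%; Welch-type
# procedures are designed to keep the error rate closer to nominal here.
```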
P4.5.139
Estimation of the ROC curve with a time-dependent disease variable in the presence of covariates
MX Rodríguez-Álvarez1, L Meira-Machado2
1 University of Vigo, Vigo, Spain, 2 University of Minho, Guimarães, Portugal

The receiver operating characteristic (ROC) curve is the most widely used measure for evaluating the performance of a diagnostic biomarker when predicting a binary outcome. The ROC curve displays the sensitivity and specificity for the different cut-off values used to classify an individual as healthy or diseased.
In many studies, however, the target of a biomarker may involve prognosis rather than diagnosis. In such cases, when evaluating the performance of the biomarker, several issues should be taken into account: first, the time-dependent nature of the outcome (i.e., the disease status of an individual varies with time); and second, the presence of incomplete data (e.g., the censored data typically present in survival studies). Accordingly, to assess the discriminatory power of continuous prognostic biomarkers for time-dependent disease outcomes, time-dependent extensions of sensitivity, specificity and the ROC curve have recently been proposed.
In this work we present a new nonparametric estimator of the cumulative/dynamic time-dependent ROC curve that accounts for the possible modifying effect of current or past covariate measurements on the discriminatory power of the biomarker. The proposed estimator can accommodate right-censored data as well as covariate-dependent censoring. The behavior of the proposed estimator will be explored through simulations and illustrated using real data.
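For readers unfamiliar with the cumulative/dynamic definitions referred to above, the following sketch computes time-dependent sensitivity, specificity and AUC at a fixed time t on fully observed, simulated data. It only illustrates the target quantities under an assumed data-generating model; it does not reproduce the censoring- and covariate-adjusted estimator proposed in the abstract.

```python
# Minimal sketch of cumulative/dynamic time-dependent ROC quantities for
# uncensored, simulated data only (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(3)
n = 2000
marker = rng.normal(size=n)                           # prognostic biomarker X
event_time = rng.exponential(scale=np.exp(-marker))   # higher X -> earlier event

t = 1.0
cases = marker[event_time <= t]                       # cumulative cases:  T <= t
controls = marker[event_time > t]                     # dynamic controls:  T >  t

for c in (-1.0, 0.0, 1.0):                            # a few illustrative cut-offs
    sens = (cases > c).mean()                         # Se(c, t) = P(X > c | T <= t)
    spec = (controls <= c).mean()                     # Sp(c, t) = P(X <= c | T > t)
    print(f"cut-off {c:+.1f}:  sensitivity {sens:.2f}  specificity {spec:.2f}")

# Time-dependent AUC at t: probability that a case outranks a control.
auc = (cases[:, None] > controls[None, :]).mean()
print(f"cumulative/dynamic AUC at t={t}: approximately {auc:.2f}")
```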
