Avoiding model selection bias in small-sample genomic datasets

Daniel Berrar, Ian Bradbury, Werner Dubitzky

Research output: Contribution to journal › Article

37 Citations (Scopus)

Abstract

Motivation: Genomic datasets generated by high-throughput technologies are typically characterized by a moderate number of samples and a large number of measurements per sample. As a consequence, classification models are commonly compared based on resampling techniques. This investigation discusses the conceptual difficulties involved in comparative classification studies. Conclusions derived from such studies are often optimistically biased, because the apparent differences in performance are usually not controlled within a statistically stringent framework that takes into account the adopted sampling strategy. We investigate this problem by means of a comparison of various classifiers in the context of multiclass microarray data. Results: Commonly used accuracy-based performance values, with or without confidence intervals, are inadequate for comparing classifiers on small-sample data. We present a statistical methodology that avoids bias in cross-validated model selection in the context of small-sample scenarios. This methodology is valid for both k-fold cross-validation and repeated random sampling.
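The selection bias the abstract describes can be illustrated with a small sketch. This is not the authors' methodology, but a common illustration of the same problem: if the hyperparameter is selected and the performance reported on the same cross-validation scores, the estimate is optimistic; selecting the model inside the training folds only (nested cross-validation) avoids that. The k-NN classifier, the candidate k values, and the synthetic small-sample data below are all illustrative assumptions.

```python
import random
from statistics import mean

random.seed(42)

# Synthetic small-sample data, mimicking the regime the paper describes:
# few samples, noisy features, weak class signal.
n_samples, n_features = 40, 10
y = [i % 2 for i in range(n_samples)]
X = [[random.gauss(0.3 * label, 1.0) for _ in range(n_features)]
     for label in y]

def knn_predict(train_X, train_y, x, k):
    """Plain k-nearest-neighbour majority vote (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(tx, x)), ty)
        for tx, ty in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def cv_accuracy(idx, k, folds=5):
    """k-fold cross-validation accuracy of k-NN restricted to samples in idx."""
    correct = 0
    for f in range(folds):
        test = idx[f::folds]
        train = [i for i in idx if i not in test]
        tX, tY = [X[i] for i in train], [y[i] for i in train]
        correct += sum(knn_predict(tX, tY, X[i], k) == y[i] for i in test)
    return correct / len(idx)

candidate_ks = [1, 3, 5]
outer_folds = 5
indices = list(range(n_samples))
random.shuffle(indices)

outer_scores, naive_scores = [], []
for f in range(outer_folds):
    test = indices[f::outer_folds]
    train = [i for i in indices if i not in test]
    # Unbiased route: select the hyperparameter using ONLY the training
    # samples, then evaluate the chosen model on the untouched outer fold.
    best_k = max(candidate_ks, key=lambda k: cv_accuracy(train, k))
    tX, tY = [X[i] for i in train], [y[i] for i in train]
    outer_scores.append(
        mean(knn_predict(tX, tY, X[i], best_k) == y[i] for i in test)
    )
    # Naive (optimistic) route: report the best inner-CV score itself,
    # i.e. the score that was maximized during selection.
    naive_scores.append(max(cv_accuracy(train, k) for k in candidate_ks))

print(f"nested-CV estimate:           {mean(outer_scores):.3f}")
print(f"best inner-CV score (naive):  {mean(naive_scores):.3f}")
```

On a dataset with no strong signal, the naive figure tends to sit above the nested estimate, which is exactly the optimistic bias at issue; the gap shrinks as the sample size grows.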

Original language: English
Pages (from-to): 1245-1250
Number of pages: 6
Journal: Bioinformatics
Volume: 22
Issue number: 10
DOIs
Publication status: Published - 15 May 2006


ASJC Scopus subject areas

  • Statistics and Probability
  • Biochemistry
  • Molecular Biology
  • Computer Science Applications
  • Computational Theory and Mathematics
  • Computational Mathematics
