Significance tests or confidence intervals: Which are preferable for the comparison of classifiers?

Daniel Berrar, Jose A. Lozano

Research output: Article

13 Citations (Scopus)

Abstract

Null hypothesis significance tests and their p-values currently dominate the statistical evaluation of classifiers in machine learning. Here, we discuss fundamental problems of this research practice. We focus on the problem of comparing multiple fully specified classifiers on a small-sample test set. On the basis of the method by Quesenberry and Hurst, we derive confidence intervals for the effect size, i.e. the difference in true classification performance. These confidence intervals disentangle the effect size from its uncertainty and thereby provide information beyond the p-value. This additional information can drastically change the way in which classification results are currently interpreted, published and acted upon. We illustrate how our reasoning can change, depending on whether we focus on p-values or confidence intervals. We argue that the conclusions from comparative classification studies should be based primarily on effect size estimation with confidence intervals, and not on significance tests and p-values.
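To make the abstract's central contrast concrete, the sketch below computes both a p-value-style statistic and an effect-size confidence interval for the difference in accuracy between two fully specified classifiers on the same test set. This is a minimal illustration using a simple Wald (normal-approximation) interval with an independence assumption, not the Quesenberry-and-Hurst-based construction the paper derives; the function name and example counts are hypothetical.

```python
import math

def diff_accuracy_ci(k1, k2, n, z=1.96):
    """Approximate 95% CI for the difference in true accuracy of two
    classifiers evaluated on the same test set of size n.

    k1, k2 : number of correct predictions by classifier 1 and 2.
    Uses a Wald interval and treats the two accuracies as independent,
    which is a simplification (paired test sets are correlated); the
    paper instead builds on the Quesenberry-Hurst method.
    """
    p1, p2 = k1 / n, k2 / n
    d = p1 - p2                                   # point estimate of effect size
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return d, (d - z * se, d + z * se)

# Hypothetical small-sample result: 83 vs. 78 correct out of 100.
d, (lo, hi) = diff_accuracy_ci(83, 78, 100)
```

Here the interval contains zero, so a significance test at the 5% level would report "no significant difference"; the interval itself additionally shows both the estimated effect size and the wide range of differences that remain compatible with the data, which is the extra information the abstract argues for.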

Original language: English
Pages (from-to): 189-206
Number of pages: 18
Journal: Journal of Experimental and Theoretical Artificial Intelligence
Volume: 25
Issue number: 2
DOI
Publication status: Published - 1 Jun 2013

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Artificial Intelligence

