Significance tests or confidence intervals: Which are preferable for the comparison of classifiers?

Daniel Berrar, Jose A. Lozano

Research output: Article › peer-review

16 Citations (Scopus)

Abstract

Null hypothesis significance tests and their p-values currently dominate the statistical evaluation of classifiers in machine learning. Here, we discuss fundamental problems of this research practice. We focus on the problem of comparing multiple fully specified classifiers on a small-sample test set. On the basis of the method by Quesenberry and Hurst, we derive confidence intervals for the effect size, i.e., the difference in true classification performance. These confidence intervals disentangle the effect size from its uncertainty and thereby provide information beyond the p-value. This additional information can drastically change the way in which classification results are currently interpreted, published, and acted upon. We illustrate how our reasoning can change, depending on whether we focus on p-values or confidence intervals. We argue that the conclusions from comparative classification studies should be based primarily on effect size estimation with confidence intervals, and not on significance tests and p-values.
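The Quesenberry and Hurst (1964) construction that the paper builds on yields simultaneous confidence intervals for multinomial cell probabilities. Below is a minimal Python sketch of that construction, applied to the paired outcomes of two classifiers on one test set. The cell counts are hypothetical, and the final interval-arithmetic bound on the effect size is only an illustrative, conservative stand-in; it is not the paper's exact derivation.

```python
import numpy as np
from scipy.stats import chi2

def quesenberry_hurst_intervals(counts, alpha=0.05):
    """Simultaneous (1 - alpha) confidence intervals for multinomial
    cell probabilities, following Quesenberry and Hurst (1964).

    counts : sequence of cell counts n_1, ..., n_k summing to n.
    Returns a (k, 2) array of (lower, upper) bounds, one row per cell.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    k = len(counts)
    a = chi2.ppf(1.0 - alpha, df=k - 1)  # chi-square quantile with k-1 df
    half = np.sqrt(a * (a + 4.0 * counts * (n - counts) / n))
    lower = (a + 2.0 * counts - half) / (2.0 * (n + a))
    upper = (a + 2.0 * counts + half) / (2.0 * (n + a))
    return np.column_stack([lower, upper])

# Paired outcomes of two fully specified classifiers A and B on one test set:
# n11 = both correct, n10 = only A correct, n01 = only B correct,
# n00 = both wrong. The difference in true accuracy is p10 - p01.
# The counts below are hypothetical.
n11, n10, n01, n00 = 61, 14, 6, 19
ci = quesenberry_hurst_intervals([n11, n10, n01, n00])
p10_lo, p10_hi = ci[1]
p01_lo, p01_hi = ci[2]
# Conservative bound on the effect size p10 - p01 via interval arithmetic:
print(f"effect size in [{p10_lo - p01_hi:+.3f}, {p10_hi - p01_lo:+.3f}]")
```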

Original language: English
Pages (from-to): 189-206
Number of pages: 18
Journal: Journal of Experimental and Theoretical Artificial Intelligence
Volume: 25
Issue number: 2
DOI
Publication status: Published - 1 Jun 2013

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Artificial Intelligence

