Confidence curves: an alternative to null hypothesis significance testing for the comparison of classifiers

Daniel Berrar

Research output: Article › peer-review

10 Citations (Scopus)

Abstract

Null hypothesis significance testing is routinely used for comparing the performance of machine learning algorithms. Here, we provide a detailed account of the major underrated problems that this common practice entails. For example, omnibus tests, such as the widely used Friedman test, are not appropriate for the comparison of multiple classifiers over diverse data sets. In contrast to the view that significance tests are essential to a sound and objective interpretation of classification results, our study suggests that no such tests are needed. Instead, greater emphasis should be placed on the magnitude of the performance difference and the investigator’s informed judgment. As an effective tool for this purpose, we propose confidence curves, which depict nested confidence intervals at all levels for the performance difference. These curves enable us to assess the compatibility of an infinite number of null hypotheses with the experimental results. We benchmarked several classifiers on multiple data sets and analyzed the results with both significance tests and confidence curves. Our conclusion is that confidence curves effectively summarize the key information needed for a meaningful interpretation of classification results while avoiding the intrinsic pitfalls of significance tests.
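The central idea can be illustrated with a short sketch. The following Python snippet is a minimal illustration rather than the paper's exact construction: it assumes paired per-fold performance differences between two classifiers (the `diffs` values are hypothetical) and draws nested two-sided t-based confidence intervals at all levels, which together form a confidence curve.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical paired performance differences, e.g., accuracy of
# classifier A minus classifier B on 10 cross-validation folds.
diffs = np.array([0.021, 0.015, -0.004, 0.032, 0.011,
                  0.008, 0.025, -0.001, 0.017, 0.012])

n = len(diffs)
mean = diffs.mean()
se = diffs.std(ddof=1) / np.sqrt(n)

# Nested two-sided confidence intervals at all levels 0 < c < 1,
# based on the t-distribution with n - 1 degrees of freedom.
levels = np.linspace(0.001, 0.999, 500)
half_widths = stats.t.ppf(1 - (1 - levels) / 2, df=n - 1) * se
lower = mean - half_widths
upper = mean + half_widths

# Plot the confidence curve: interval bounds against confidence level.
plt.plot(lower, levels, color="tab:blue")
plt.plot(upper, levels, color="tab:blue")
plt.axvline(0.0, linestyle="--", color="gray")  # null value: no difference
plt.xlabel("Performance difference (A - B)")
plt.ylabel("Confidence level")
plt.title("Confidence curve for the mean performance difference")
plt.show()
```

Reading the plot, any null value crossed by the curve only at high confidence levels is compatible with the data, while values outside even wide intervals are not; this conveys the magnitude of the performance difference without a single significance threshold.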

Original language: English
Pages (from-to): 911-949
Number of pages: 39
Journal: Machine Learning
Volume: 106
Issue number: 6
DOI
Publication status: Published - 1 Jun 2017

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
