Abstract
Class-prediction accuracy provides a quick but superficial measure of classifier performance. It does not indicate whether the findings are reproducible or whether the selected or constructed features are meaningful and specific. Moreover, it oversummarizes: it says nothing about how training and learning were accomplished, and two classifiers with identical performance in one validation can disagree on many future validations. It offers no explainability for the decision-making process, and it is not objective, since its value also depends on the class proportions in the validation set. None of this means class-prediction accuracy should be discarded. Rather, it needs to be enriched with accompanying evidence and tests that supplement and contextualize the reported accuracy. These augmentations help us practice machine learning better while avoiding naive reliance on oversimplified metrics.

Machine learning holds enormous potential, but blind reliance on oversimplified metrics can mislead. Class-prediction accuracy is a common metric for assessing classifier performance. This article provides examples showing how class-prediction accuracy can be superficial and even misleading, and proposes augmentative measures to supplement it, helping us better understand the quality of a classifier's learning.

Class-prediction accuracy is an evaluative method for machine-learning classifiers, but it is simple and can produce spurious interpretations when used without caution. Contextualization, dimensionality-reduction approaches, and bootstrapping with Jaccard coefficients are possible strategies for better characterizing the learning outcome.
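To illustrate the last point, below is a minimal sketch of how feature-selection stability could be probed with bootstrapping and pairwise Jaccard coefficients. The synthetic dataset, the univariate selector, and all parameter choices are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch: probing feature-selection stability with bootstrapping
# and pairwise Jaccard coefficients. The dataset, selector, and parameter
# choices below are illustrative assumptions, not the article's protocol.
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)

# Synthetic two-class data with a handful of informative features.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

def jaccard(a, b):
    """Jaccard coefficient between two sets of selected feature indices."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Draw bootstrap replicates and record which features each replicate selects.
selected_sets = []
for _ in range(30):
    idx = rng.integers(0, len(y), size=len(y))  # sample with replacement
    selector = SelectKBest(f_classif, k=10).fit(X[idx], y[idx])
    selected_sets.append(np.flatnonzero(selector.get_support()))

# Pairwise Jaccard coefficients across replicates: values near 1 indicate
# stable, reproducible feature selection; values near 0 suggest the chosen
# "signature" is not robust.
scores = [jaccard(a, b) for a, b in combinations(selected_sets, 2)]
print(f"mean pairwise Jaccard = {np.mean(scores):.2f}")
```

A mean pairwise Jaccard coefficient close to 1 suggests the selected features are stable across resamples; a value near 0 warns that two classifiers reporting the same accuracy may be relying on quite different, non-reproducible feature sets.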
| Original language | English |
| --- | --- |
| Article number | 100025 |
| Journal | Patterns |
| Volume | 1 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - May 8, 2020 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2020 The Authors
ASJC Scopus Subject Areas
- General Decision Sciences
Keywords
- artificial intelligence
- data science
- DSML 5: Mainstream: Data science output is well understood and (nearly) universally adopted
- machine learning
- validation