Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
Publication type: Journal Article
Publication date: 2019-05-13
Scimago quartile: Q1
Web of Science quartile: Q1
SJR: 5.876
CiteScore: 37.6
Impact factor: 23.9
ISSN: 2522-5839
PubMed ID: 35603010
Subject areas:
Computer Networks and Communications
Artificial Intelligence
Software
Human-Computer Interaction
Computer Vision and Pattern Recognition
Abstract
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.

There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that instead efforts should be directed at building inherently interpretable models in the first place, in particular where they are applied in applications that directly affect human lives, such as in healthcare and criminal justice.
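To make the distinction concrete, the following is a minimal sketch, not taken from the paper, contrasting an inherently interpretable model with a black box that would require a separate post-hoc explainer. It assumes scikit-learn and uses its bundled breast-cancer dataset purely for illustration; the model choices and hyperparameters are arbitrary.

# Illustrative sketch only: interpretable model vs. black box (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: a shallow decision tree whose decision logic can be
# read directly from the fitted model, so no second "explainer" model is needed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("interpretable model accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))

# Black box: a random forest whose individual predictions cannot be read off the
# model itself; any "explanation" would be a separate model approximating this one.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("black-box model accuracy:", forest.score(X_test, y_test))

The sketch mirrors the Perspective's argument: the tree's logic can be audited directly, whereas an explanation of the forest would be a second model approximating the first.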
Top-30 Journals
Lecture Notes in Computer Science: 246 publications (4.28%)
IEEE Access: 83 publications (1.44%)
Communications in Computer and Information Science: 83 publications (1.44%)
Scientific Reports: 69 publications (1.2%)
Applied Sciences (Switzerland): 46 publications (0.8%)
PLoS ONE: 39 publications (0.68%)
Expert Systems with Applications: 36 publications (0.63%)
SSRN Electronic Journal: 36 publications (0.63%)
Lecture Notes in Networks and Systems: 33 publications (0.57%)
Nature Machine Intelligence: 30 publications (0.52%)
Sensors: 29 publications (0.5%)
AI and Society: 28 publications (0.49%)
Engineering Applications of Artificial Intelligence: 26 publications (0.45%)
AI and Ethics: 25 publications (0.44%)
Machine Learning: 25 publications (0.44%)
Nature Communications: 25 publications (0.44%)
Neurocomputing: 24 publications (0.42%)
Knowledge-Based Systems: 24 publications (0.42%)
ACM Computing Surveys: 23 publications (0.4%)
Computers in Biology and Medicine: 23 publications (0.4%)
Frontiers in Artificial Intelligence: 23 publications (0.4%)
Neural Computing and Applications: 21 publications (0.37%)
Electronics (Switzerland): 20 publications (0.35%)
Information Fusion: 20 publications (0.35%)
Advances in Computational Intelligence and Robotics: 20 publications (0.35%)
BMC Medical Informatics and Decision Making: 19 publications (0.33%)
npj Digital Medicine: 19 publications (0.33%)
Frontiers in Medicine: 18 publications (0.31%)
Algorithms: 18 publications (0.31%)
Top-30 Publishers
Springer Nature: 1454 publications (25.31%)
Elsevier: 1248 publications (21.72%)
Institute of Electrical and Electronics Engineers (IEEE): 731 publications (12.72%)
MDPI: 391 publications (6.81%)
Association for Computing Machinery (ACM): 239 publications (4.16%)
Wiley: 220 publications (3.83%)
Frontiers Media S.A.: 138 publications (2.4%)
Taylor & Francis: 133 publications (2.32%)
Cold Spring Harbor Laboratory: 117 publications (2.04%)
SAGE: 87 publications (1.51%)
IGI Global: 76 publications (1.32%)
Oxford University Press: 76 publications (1.32%)
Public Library of Science (PLoS): 60 publications (1.04%)
American Chemical Society (ACS): 56 publications (0.97%)
JMIR Publications: 53 publications (0.92%)
IOP Publishing: 40 publications (0.7%)
Cambridge University Press: 34 publications (0.59%)
Royal Society of Chemistry (RSC): 26 publications (0.45%)
Ovid Technologies (Wolters Kluwer Health): 22 publications (0.38%)
Institute for Operations Research and the Management Sciences (INFORMS): 21 publications (0.37%)
Social Science Electronic Publishing: 21 publications (0.37%)
BMJ: 21 publications (0.37%)
AIP Publishing: 19 publications (0.33%)
Emerald: 17 publications (0.3%)
American Geophysical Union: 17 publications (0.3%)
Proceedings of the National Academy of Sciences (PNAS): 14 publications (0.24%)
Annual Reviews: 13 publications (0.23%)
The Royal Society: 11 publications (0.19%)
AME Publishing Company: 11 publications (0.19%)
- Publications without a DOI are not taken into account.
- Statistics are recalculated weekly.
Metrics
Total citations: 5756
Citations from 2024: 2928 (50.96%)
Cite this
GOST
Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead // Nature Machine Intelligence. 2019. Vol. 1. No. 5. pp. 206-215.
RIS
TY - JOUR
DO - 10.1038/s42256-019-0048-x
UR - https://doi.org/10.1038/s42256-019-0048-x
TI - Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
T2 - Nature Machine Intelligence
AU - Rudin, Cynthia
PY - 2019
DA - 2019/05/13
PB - Springer Nature
SP - 206-215
IS - 5
VL - 1
PMID - 35603010
SN - 2522-5839
ER -
BibTeX
@article{2019_Rudin,
author = {Cynthia Rudin},
title = {Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead},
journal = {Nature Machine Intelligence},
year = {2019},
volume = {1},
publisher = {Springer Nature},
month = {may},
url = {https://doi.org/10.1038/s42256-019-0048-x},
number = {5},
pages = {206--215},
doi = {10.1038/s42256-019-0048-x}
}
MLA
Rudin, Cynthia. “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.” Nature Machine Intelligence, vol. 1, no. 5, May 2019, pp. 206-215. https://doi.org/10.1038/s42256-019-0048-x.