TED: Teaching AI to Explain its Decisions
N. Codella, M. Hind, K. Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, A. Mojsilovic
Published 2018 in AAAI/ACM Conference on AI, Ethics, and Society

ABSTRACT
Artificial intelligence systems are increasingly being deployed because of their potential to improve the efficiency, scale, consistency, fairness, and accuracy of decisions. However, because many of these systems are opaque in their operation, there is a growing demand for them to explain their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model in the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach: a simple, practical framework, called Teaching Explanations for Decisions (TED), that provides meaningful explanations matching the mental model of the consumer. We illustrate the generality and effectiveness of this approach with two different examples, obtaining highly accurate explanations with no loss of prediction accuracy.
PUBLICATION RECORD
- Publication year: 2018
- Venue: AAAI/ACM Conference on AI, Ethics, and Society
- Publication date: 2018-11-12
- Fields of study: Computer Science
- Source metadata: Semantic Scholar