TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning
Jayaraman J. Thiagarajan, B. Kailkhura, P. Sattigeri, K. Ramamurthy
Published 2016 in arXiv.org
ABSTRACT
With the advent of highly predictive but opaque deep learning models, it has become more important than ever to understand and explain their predictions. Existing approaches define interpretability as the inverse of complexity and achieve interpretability at the cost of accuracy, which introduces the risk of producing interpretable but misleading explanations. As humans, we are prone to engage in this kind of behavior \cite{mythos}. In this paper, we take a step toward tackling the problem of interpretability without compromising model accuracy. We propose to build a TreeView representation of the complex model via hierarchical partitioning of the feature space, which reveals the iterative rejection of unlikely class labels until the correct association is predicted.
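To make the idea in the abstract concrete, below is a minimal sketch of a TreeView-style surrogate, not the authors' code. It assumes a scikit-learn MLP as the opaque model, groups its hidden units into "factors" with KMeans, summarizes each sample by one mean-activation meta-feature per factor, and fits a shallow decision tree on those meta-features to mimic the network's predictions. The dataset (load_digits), the mean-activation summary, and the DecisionTreeClassifier surrogate are all illustrative stand-ins; the paper's actual construction (per-factor meta-classifiers and its specific tree-building procedure) differs in the details.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the opaque model whose predictions we want to explain.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

def hidden_activations(model, X):
    # Forward pass through the single hidden layer: relu(X W + b).
    return np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])

H_train = hidden_activations(mlp, X_train)

# 2. Partition the learned feature space: cluster hidden units into
#    factors by the similarity of their activation profiles.
n_factors = 8
factor_of_unit = KMeans(n_clusters=n_factors, random_state=0).fit_predict(H_train.T)

def meta_features(H):
    # One meta-feature per factor: mean activation over that factor's units.
    return np.stack(
        [H[:, factor_of_unit == k].mean(axis=1) for k in range(n_factors)], axis=1
    )

# 3. Fit a shallow tree on the meta-features to mimic the DNN; each split
#    narrows the set of plausible labels, echoing the abstract's
#    "iterative rejection of unlikely class labels".
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(meta_features(H_train), mlp.predict(X_train))

H_test = hidden_activations(mlp, X_test)
fidelity = (surrogate.predict(meta_features(H_test)) == mlp.predict(X_test)).mean()
print(f"surrogate fidelity to the DNN: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"factor_{k}" for k in range(n_factors)]))

A sketch like this is judged by its fidelity to the original model's predictions rather than by ground-truth accuracy, which is what lets the surrogate stay interpretable without forcing the deep model itself to be simplified.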
PUBLICATION RECORD
- Publication year
2016
- Venue
arXiv.org
- Publication date
2016-11-03
- Fields of study
Mathematics, Computer Science
- Source metadata
Semantic Scholar
REFERENCES
- 5 references
CITED BY
- 43 citing papers