Not Just a Black Box: Learning Important Features Through Propagating Activation Differences

Avanti Shrikumar, Peyton Greenside, A. Shcherbina, Anshul B. Kundaje

Published 2016 in arXiv.org

ABSTRACT

Note: This paper describes an older version of DeepLIFT. See this https URL for the newer version. Original abstract follows: The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Learning Important FeaTures), an efficient and effective method for computing importance scores in a neural network. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. We apply DeepLIFT to models trained on natural images and genomic data, and show significant advantages over gradient-based methods.
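The abstract's core idea, attributing importance by comparing each neuron's activation to a 'reference activation', can be illustrated for the simplest case of a single linear unit. The sketch below is a hypothetical minimal example of the difference-from-reference principle, not the paper's full backpropagation algorithm; the function name and the all-zeros reference are illustrative assumptions.

```python
import numpy as np

def linear_contributions(w, x, x_ref):
    """Attribute a linear unit's activation difference to its inputs.

    Difference-from-reference idea (illustrative sketch only): compare the
    activation under input x to the activation under a chosen reference
    x_ref, and credit each input i with w_i * (x_i - x_ref_i).
    """
    contribs = w * (x - x_ref)
    # For a linear unit the contributions sum exactly to the
    # activation difference, so no credit is lost or invented.
    assert np.isclose(contribs.sum(), w @ x - w @ x_ref)
    return contribs

# Hypothetical weights, input, and an all-zeros reference.
w = np.array([1.0, -2.0, 0.5])
x = np.array([2.0, 1.0, 4.0])
x_ref = np.zeros(3)
print(linear_contributions(w, x, x_ref))  # [ 2. -2.  2.]
```

Note that a gradient-based score would be w alone, independent of how far x sits from the reference; the difference-from-reference score depends on both, which is one motivation the abstract gives for the method.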

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    arXiv.org

  • Publication date

    2016-05-05

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar


CITED BY

858 citing papers