A unified view of gradient-based attribution methods for Deep Neural Networks
arXiv (Cornell University), 2017
Abstract
Understanding the flow of information in Deep Neural Networks is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, only a few attempts to analyze them from a theoretical perspective have been made in the past. In this work we analyze various state-of-the-art attribution methods and prove previously unexplored connections between them. We also show how some methods can be reformulated and more conveniently implemented. Finally, we perform an empirical evaluation of six attribution methods on a variety of tasks and architectures and discuss their strengths and limitations.
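The abstract refers to gradient-based attribution methods without naming them; one of the simplest members of this family is "gradient * input". As a minimal, self-contained sketch (the linear model and feature values below are illustrative, not from the paper):

```python
import numpy as np

def grad_times_input(w, x):
    # For a linear score f(x) = w . x, the gradient with respect to x
    # is w, so the gradient*input attribution of feature i is w_i * x_i.
    return w * x

w = np.array([0.5, -1.0, 2.0])   # hypothetical model weights
x = np.array([2.0, 1.0, 0.5])    # hypothetical input features

attr = grad_times_input(w, x)
print(attr)        # per-feature contributions: [ 1. -1.  1.]
print(attr.sum())  # 1.0, equal to f(x) for a linear model
```

For a linear model the attributions sum exactly to the output score, a property (completeness) that several of the methods compared in the paper aim to preserve for non-linear networks.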
Related Papers
- "Why Should I Trust You?" (2016), 14,307 citations
- On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation (2015), 4,469 citations
- Towards better understanding of gradient-based attribution methods for Deep Neural Networks (2018), 386 citations
- Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (2013), 4,905 citations
- Axiomatic Attribution for Deep Networks (2017), 2,626 citations