Learning Deep Features for Discriminative Localization
Abstract
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving a classification task.
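The mechanism the abstract describes can be illustrated with a minimal numeric sketch: a classifier built on global average pooling (GAP) scores class k as a weighted sum of pooled feature maps, and re-applying those same weights spatially, before pooling, yields a class activation map. All shapes and values below are hypothetical stand-ins, not the paper's actual network.

```python
import numpy as np

# Hypothetical setup: C final-layer conv feature maps of size H x W,
# and classifier weights (num_classes x C) learned on top of GAP.
rng = np.random.default_rng(0)
C, H, W_sp, num_classes = 8, 7, 7, 5
feature_maps = rng.standard_normal((C, H, W_sp))
fc_weights = rng.standard_normal((num_classes, C))

# GAP collapses each feature map to a scalar; the score for class k
# is a weighted sum of those scalars.
gap = feature_maps.mean(axis=(1, 2))      # shape (C,)
scores = fc_weights @ gap                 # shape (num_classes,)

# The class activation map applies the same weights spatially,
# highlighting the regions that drive the class score.
k = int(scores.argmax())
cam = np.tensordot(fc_weights[k], feature_maps, axes=1)  # shape (H, W_sp)

# By linearity, averaging the map recovers the class score exactly.
assert np.allclose(cam.mean(), scores[k])
```

The assertion makes the key point concrete: because pooling and the weighted sum are both linear, the spatial map and the classification score are two views of the same computation, which is why localization comes for free from classification training.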