Learning Deep Features for Discriminative Localization
Author(s)
Zhou, Bolei; Khosla, Aditya; Lapedriza Garcia, Agata; Oliva, Aude; Torralba, Antonio
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Abstract
In this work, we revisit the global average pooling layer proposed in prior work, and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite being trained only to solve a classification task.
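The abstract's core idea, that a classifier ending in global average pooling (GAP) implicitly localizes, can be sketched numerically: the class score is a weighted average of the conv feature maps, so undoing the spatial averaging yields a class activation map. The sketch below (NumPy, with illustrative shapes and names, not the authors' code) shows this decomposition under the assumption of a single linear layer after GAP.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Sketch of a class activation map (CAM).

    feature_maps: (C, H, W) conv activations for one image.
    fc_weights:   (num_classes, C) weights of the final linear layer.
    Returns an (H, W) map: the per-location weighted sum of feature maps
    for class_idx, rescaled to [0, 1] for visualization/thresholding.
    """
    w = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def gap_scores(feature_maps, fc_weights):
    """Class scores s_c = sum_k w_ck * mean(F_k): GAP then linear layer."""
    pooled = feature_maps.mean(axis=(1, 2))      # (C,)
    return fc_weights @ pooled                   # (num_classes,)
```

Because the linear layer commutes with the spatial average, the spatial mean of the (unnormalized) CAM equals the GAP class score, which is why the map highlights exactly the regions that drove the classification decision.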
Date issued
2016-12
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Zhou, Bolei, et al. "Learning Deep Features for Discriminative Localization." 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27-30 June, 2016, Las Vegas, NV, IEEE, 2016, pp. 2921–29.
Version: Original manuscript
ISBN
978-1-4673-8851-1