dc.contributor.author | Zhou, Bolei | |
dc.contributor.author | Lapedriza Garcia, Agata | |
dc.contributor.author | Xiao, Jianxiong | |
dc.contributor.author | Torralba, Antonio | |
dc.contributor.author | Oliva, Aude | |
dc.date.accessioned | 2015-05-08T16:44:39Z | |
dc.date.available | 2015-05-08T16:44:39Z | |
dc.date.issued | 2014 | |
dc.identifier.issn | 1049-5258 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/96941 | |
dc.description.abstract | Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNNs, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks. | en_US |
dc.description.sponsorship | National Science Foundation (U.S.) (Grant 1016862) | en_US |
dc.description.sponsorship | United States. Office of Naval Research. Multidisciplinary University Research Initiative (N000141010933) | en_US |
dc.description.sponsorship | Google (Firm) | en_US |
dc.description.sponsorship | Xerox Corporation | en_US |
dc.description.sponsorship | Grant TIN2012-38187-C03-02 | en_US |
dc.description.sponsorship | United States. Intelligence Advanced Research Projects Activity (United States. Air Force Research Laboratory Contract FA8650-12-C-7211) | en_US |
dc.language.iso | en_US | |
dc.publisher | Neural Information Processing Systems Foundation | en_US |
dc.relation.isversionof | http://papers.nips.cc/paper/5349-learning-deep-features-for-scene-recognition-using-places-database | en_US |
dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
dc.source | MIT web domain | en_US |
dc.title | Learning Deep Features for Scene Recognition using Places Database | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Zhou, Bolei, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. "Learning Deep Features for Scene Recognition using Places Database." Advances in Neural Information Processing Systems (NIPS) 27, 2014. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.contributor.mitauthor | Zhou, Bolei | en_US |
dc.contributor.mitauthor | Lapedriza Garcia, Agata | en_US |
dc.contributor.mitauthor | Torralba, Antonio | en_US |
dc.contributor.mitauthor | Oliva, Aude | en_US |
dc.relation.journal | Advances in Neural Information Processing Systems (NIPS) 27 | en_US |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dspace.orderedauthors | Zhou, Bolei; Lapedriza, Agata; Xiao, Jianxiong; Torralba, Antonio; Oliva, Aude | en_US |
dc.identifier.orcid | https://orcid.org/0000-0002-3570-4396 | |
dc.identifier.orcid | https://orcid.org/0000-0003-4915-0256 | |
mit.license | PUBLISHER_POLICY | en_US |
mit.metadata.status | Complete | |