Show simple item record

dc.contributor.author	Zhou, Bolei
dc.contributor.author	Lapedriza Garcia, Agata
dc.contributor.author	Xiao, Jianxiong
dc.contributor.author	Torralba, Antonio
dc.contributor.author	Oliva, Aude
dc.date.accessioned	2015-05-08T16:44:39Z
dc.date.available	2015-05-08T16:44:39Z
dc.date.issued	2014
dc.identifier.issn	1049-5258
dc.identifier.uri	http://hdl.handle.net/1721.1/96941
dc.description.abstract	Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks.	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (Grant 1016862)	en_US
dc.description.sponsorship	United States. Office of Naval Research. Multidisciplinary University Research Initiative (N000141010933)	en_US
dc.description.sponsorship	Google (Firm)	en_US
dc.description.sponsorship	Xerox Corporation	en_US
dc.description.sponsorship	Grant TIN2012-38187-C03-02	en_US
dc.description.sponsorship	United States. Intelligence Advanced Research Projects Activity (United States. Air Force Research Laboratory Contract FA8650-12-C-7211)	en_US
dc.language.iso	en_US
dc.publisher	Neural Information Processing Systems Foundation	en_US
dc.relation.isversionof	http://papers.nips.cc/paper/5349-learning-deep-features-for-scene-recognition-using-places-database	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	MIT web domain	en_US
dc.title	Learning Deep Features for Scene Recognition using Places Database	en_US
dc.type	Article	en_US
dc.identifier.citation	Zhou, Bolei, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. "Learning Deep Features for Scene Recognition using Places Database." Advances in Neural Information Processing Systems (NIPS) 27, 2014.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.contributor.mitauthor	Zhou, Bolei	en_US
dc.contributor.mitauthor	Lapedriza Garcia, Agata	en_US
dc.contributor.mitauthor	Torralba, Antonio	en_US
dc.contributor.mitauthor	Oliva, Aude	en_US
dc.relation.journal	Advances in Neural Information Processing Systems (NIPS) 27	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dspace.orderedauthors	Zhou, Bolei; Lapedriza, Agata; Xiao, Jianxiong; Torralba, Antonio; Oliva, Aude	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-3570-4396
dc.identifier.orcid	https://orcid.org/0000-0003-4915-0256
mit.license	PUBLISHER_POLICY	en_US

