Show simple item record

dc.contributor.author	Bau, David
dc.contributor.author	Zhu, Jun-Yan
dc.contributor.author	Strobelt, Hendrik
dc.contributor.author	Lapedriza Garcia, Agata
dc.contributor.author	Zhou, Bolei
dc.contributor.author	Torralba, Antonio
dc.date.accessioned	2021-03-29T21:05:50Z
dc.date.available	2021-03-29T21:05:50Z
dc.date.issued	2020-09
dc.date.submitted	2019-08
dc.identifier.issn	0027-8424
dc.identifier.issn	1091-6490
dc.identifier.uri	https://hdl.handle.net/1721.1/130269
dc.description.abstract	Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.	en_US
dc.description.sponsorship	Defense Advanced Research Projects Agency (Award FA8750-18-C-0004)	en_US
dc.description.sponsorship	NSF (Grants 1524817 and BIGDATA-1447476)	en_US
dc.language.iso	en
dc.publisher	Proceedings of the National Academy of Sciences	en_US
dc.relation.isversionof	http://dx.doi.org/10.1073/pnas.1907375117	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	PNAS	en_US
dc.title	Understanding the role of individual units in a deep neural network	en_US
dc.type	Article	en_US
dc.identifier.citation	Bau, David et al. "Understanding the role of individual units in a deep neural network." Proceedings of the National Academy of Sciences 117, 48 (September 2020): 30071-30078 © 2020 National Academy of Sciences	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Media Laboratory	en_US
dc.contributor.department	MIT-IBM Watson AI Lab	en_US
dc.relation.journal	Proceedings of the National Academy of Sciences	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2021-03-16T15:02:16Z
dspace.orderedauthors	Bau, D; Zhu, J-Y; Strobelt, H; Lapedriza, A; Zhou, B; Torralba, A	en_US
dspace.date.submission	2021-03-16T15:02:23Z
mit.journal.volume	117	en_US
mit.journal.issue	48	en_US
mit.license	PUBLISHER_POLICY
mit.metadata.status	Complete
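The abstract's core intervention, deactivating a small set of hidden units and observing how the network's output changes, can be illustrated with a minimal sketch. The two-layer network, weights, and inputs below are hypothetical stand-ins for illustration only, not the models studied in the paper:

```python
# Toy sketch of unit ablation: force a hidden unit's activation to zero
# ("deactivate" it) and compare the output against the unmodified forward pass.

def relu(x):
    return x if x > 0.0 else 0.0

def forward(x, w_hidden, w_out, ablate=None):
    """Two-layer network; `ablate` is an optional set of hidden-unit
    indices whose activations are zeroed, mimicking unit deactivation."""
    hidden = [relu(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    if ablate:
        hidden = [0.0 if i in ablate else h for i, h in enumerate(hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden))

# Hypothetical weights and input.
w_hidden = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w_out = [0.5, 0.5, 1.0]
x = [2.0, 3.0]

baseline = forward(x, w_hidden, w_out)              # -> 7.5
ablated = forward(x, w_hidden, w_out, ablate={2})   # unit 2 zeroed -> 2.5
effect = baseline - ablated                         # this unit's contribution
```

The paper applies this idea at scale: in a real CNN or GAN the "units" are convolutional channels, and the measured effect is a change in classification accuracy or in the objects present in a generated scene.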

