dc.contributor.author: Zhou, Bolei
dc.contributor.author: Sun, Yiyou
dc.contributor.author: Torralba, Antonio
dc.contributor.author: Bau, David
dc.date.accessioned: 2019-11-01T15:27:17Z
dc.date.available: 2019-11-01T15:27:17Z
dc.date.issued: 2018-10
dc.date.submitted: 2018-09
dc.identifier.isbn: 9783030012366
dc.identifier.isbn: 9783030012373
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: https://hdl.handle.net/1721.1/122673
dc.description.abstract: Explanations of the decisions made by a deep neural network are important for human end-users to be able to understand and diagnose the trustworthiness of the system. Current neural networks used for visual recognition are generally treated as black boxes that do not provide any human-interpretable justification for a prediction. In this work we propose a new framework called Interpretable Basis Decomposition for providing visual explanations for classification networks. By decomposing the neural activations of the input image into semantically interpretable components pre-trained from a large concept corpus, the proposed framework is able to disentangle the evidence encoded in the activation feature vector and quantify the contribution of each piece of evidence to the final prediction. We apply our framework to several popular networks for visual recognition and show that it can explain their predictions in a human-interpretable way. The human interpretability of the visual explanations provided by our framework and other recent explanation methods is evaluated through Amazon Mechanical Turk, showing that our framework generates more faithful and interpretable explanations. (The code and data are available at https://github.com/CSAILVision/IBD.) [en_US]
dc.description.sponsorship: United States. Defense Advanced Research Projects Agency (Contract FA8750-18-C0004) [en_US]
dc.description.sponsorship: National Science Foundation (Grant 1524817) [en_US]
dc.language.iso: en
dc.publisher: Springer International Publishing [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1007/978-3-030-01237-3_8 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: Interpretable Basis Decomposition for Visual Explanation [en_US]
dc.type: Book [en_US]
dc.identifier.citation: Zhou, Bolei et al. "Interpretable Basis Decomposition for Visual Explanation." European Conference on Computer Vision, September 2018, Munich, Germany, Springer Nature, October 2018. © 2018 Springer Nature [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Computer Science [en_US]
dc.relation.journal: European Conference on Computer Vision [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2019-07-11T17:00:17Z
dspace.date.submission: 2019-07-11T17:00:18Z
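
The abstract above describes splitting the evidence behind a prediction into interpretable components and quantifying each component's contribution to the class score. The Python sketch below illustrates that idea under stated assumptions; it is not the authors' released implementation (see https://github.com/CSAILVision/IBD), and the concept basis, the non-negative least-squares fit via scipy, and the names decompose_class_weight, explain_prediction, and concept_basis are illustrative assumptions only.

    # Minimal sketch of the decomposition idea from the abstract, not the IBD release.
    # Assumptions: a (K, D) concept dictionary is available, and NNLS stands in for
    # whatever fitting procedure the paper actually uses.
    import numpy as np
    from scipy.optimize import nnls

    def decompose_class_weight(w_c, concept_basis):
        """Approximate a class weight vector as a non-negative combination of
        concept directions plus a residual: w_c ~= concept_basis.T @ s + r.

        w_c           : (D,) weight vector of one class in the final linear layer
        concept_basis : (K, D) rows are concept directions (e.g. linear probes
                        trained on a labeled concept corpus)
        """
        A = concept_basis.T                      # (D, K): columns are concept vectors
        s, _ = nnls(A, w_c)                      # non-negative coefficient per concept
        residual = w_c - A @ s                   # evidence the concepts cannot express
        return s, residual

    def explain_prediction(a, w_c, concept_basis, concept_names, top=5):
        """Split the class score w_c @ a into per-concept contributions.

        a : (D,) globally pooled activation of the input image at the probed layer.
        By linearity, sum_i s_i * (q_i @ a) + residual @ a == w_c @ a exactly.
        """
        s, residual = decompose_class_weight(w_c, concept_basis)
        contributions = s * (concept_basis @ a)  # contribution of each concept to the score
        for i in np.argsort(contributions)[::-1][:top]:
            print(f"{concept_names[i]:>20s}: {contributions[i]:+.3f}")
        print(f"{'(residual)':>20s}: {residual @ a:+.3f}")
        print(f"{'class score':>20s}: {w_c @ a:+.3f}")

Each printed line attributes part of the class score to one interpretable component, and the residual term captures whatever evidence the concept dictionary cannot express; the per-concept terms and the residual sum exactly to the score because the decomposition is linear.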

