Show simple item record

dc.contributor.author  Bau, David
dc.contributor.author  Zhu, Junyan
dc.contributor.author  Tenenbaum, Joshua B
dc.contributor.author  Freeman, William T
dc.contributor.author  Torralba, Antonio
dc.date.accessioned  2020-08-16T15:50:02Z
dc.date.available  2020-08-16T15:50:02Z
dc.date.issued  2019-05
dc.identifier.uri  https://hdl.handle.net/1721.1/126607
dc.description.abstract  Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, they have not been well visualized or understood. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts using a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. We examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in a scene. We provide open source interpretation tools to help researchers and practitioners better understand their GAN models.  en_US
dc.language.iso  en
dc.publisher  International Society of the Learning Sciences  en_US
dc.relation.isversionof  https://openreview.net/forum?id=Hyg_X2C5FX  en_US
dc.rights  Creative Commons Attribution-Noncommercial-Share Alike  en_US
dc.rights.uri  http://creativecommons.org/licenses/by-nc-sa/4.0/  en_US
dc.source  MIT web domain  en_US
dc.title  GAN dissection: Visualizing and understanding generative adversarial networks  en_US
dc.type  Article  en_US
dc.identifier.citation  Bau, David et al. “GAN dissection: Visualizing and understanding generative adversarial networks.” Paper presented at the ICLR 2019 International Conference on Learning Representations, New Orleans, Louisiana, May 6-9, 2019. International Society of the Learning Sciences © 2019 The Author(s)  en_US
dc.contributor.department  Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  en_US
dc.relation.journal  ICLR 2019 International Conference on Learning Representations  en_US
dc.eprint.version  Author's final manuscript  en_US
dc.type.uri  http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status  http://purl.org/eprint/status/NonPeerReviewed  en_US
dc.date.updated  2019-10-08T15:36:36Z
dspace.date.submission  2019-10-08T15:36:45Z
mit.journal.volume  2019  en_US
mit.metadata.status  Complete
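
The abstract above describes two quantitative steps: matching units to object concepts via segmentation-based dissection, and intervening on units to measure their causal effect on objects in the output. A minimal NumPy sketch of those two measurements might look like the following. This is not the authors' released code; the function names, the fixed threshold, and the random stand-in arrays are assumptions for illustration, whereas the paper derives per-unit thresholds and uses a real segmentation network and generator.

# Illustrative sketch only (not the authors' released tools) of the two
# core measurements described in the abstract.
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """IoU between one unit's thresholded H x W activation map and a
    binary H x W segmentation mask for one object concept."""
    unit_mask = activation > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return float(intersection) / union if union > 0 else 0.0

def ablate_units(features, unit_ids):
    """Zero out the given channels of a C x H x W featuremap. Rerunning
    the rest of the generator on the result and re-segmenting the output
    would measure the ablated units' causal effect on an object class."""
    out = features.copy()
    out[unit_ids] = 0.0
    return out

# Toy example with random stand-ins for real activations and masks.
rng = np.random.default_rng(0)
activation = rng.random((64, 64))           # one unit's spatial activations
concept_mask = rng.random((64, 64)) > 0.7   # e.g. segmented "tree" pixels
print(unit_concept_iou(activation, concept_mask, threshold=0.7))

features = rng.random((512, 8, 8))          # an intermediate GAN featuremap
ablated = ablate_units(features, unit_ids=[3, 17, 42])

In the paper's framing, units with high IoU against a concept's segmentation are the "interpretable units," and the ablation (or forced activation) step is the intervention used to test whether those units actually control the corresponding objects.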

