Diverse Image Generation via Self-Conditioned GANs
Author(s)
Liu, Steven; Wang, Tongzhou; Bau, David; Zhu, Jun-Yan; Torralba, Antonio
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
© 2020 IEEE. We introduce a simple but effective unsupervised method for generating diverse images. We train a class-conditional GAN model without using manually annotated class labels. Instead, our model is conditioned on labels automatically derived from clustering in the discriminator's feature space. Our clustering step automatically discovers diverse modes and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that our method outperforms several competing methods at addressing mode collapse. Our method also performs well on large-scale datasets such as ImageNet and Places365, improving both diversity and standard metrics (e.g., Fréchet Inception Distance) compared to previous methods.
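The core idea in the abstract — deriving conditioning labels by clustering real images in the discriminator's feature space — can be illustrated with a minimal numpy-only sketch. This is an assumption-laden toy, not the paper's implementation: it uses plain k-means on synthetic stand-in features, whereas the actual method clusters learned discriminator features and periodically re-clusters during GAN training.

```python
import numpy as np

def kmeans_labels(features, k, iters=20, seed=0):
    """Plain k-means: returns a cluster id per feature vector.

    Stand-in for the paper's clustering step; the real method clusters
    discriminator features of real images to define pseudo-classes.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each feature to its nearest center (squared L2 distance)
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster goes empty
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = features[mask].mean(axis=0)
    return labels

# toy stand-in for discriminator features of real images: 3 separated blobs
rng = np.random.default_rng(1)
feats = np.concatenate(
    [rng.normal(mean, 0.1, size=(50, 8)) for mean in (-2.0, 0.0, 2.0)]
)
labels = kmeans_labels(feats, k=3)
```

Each generated sample would then be conditioned on one of these cluster ids (e.g., via a learned class embedding fed to the generator alongside the noise vector), so that covering all clusters is built into the training objective.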
Date issued
2020
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Liu, Steven, Wang, Tongzhou, Bau, David, Zhu, Jun-Yan and Torralba, Antonio. 2020. "Diverse Image Generation via Self-Conditioned GANs." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Version: Original manuscript