What Makes Fake Images Detectable? Understanding Properties that Generalize
Author(s)
Chai, Lucy; Bau, David; Isola, Phillip John
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
The quality of image generation and manipulation is reaching impressive levels, making it increasingly difficult for a human to distinguish between what is real and what is fake. However, deep networks can still pick up on the subtle artifacts in these doctored images. We seek to understand what properties of fake images make them detectable and identify what generalizes across different model architectures, datasets, and variations in training. We use a patch-based classifier with limited receptive fields to visualize which regions of fake images are more easily detectable. We further show a technique to exaggerate these detectable properties and demonstrate that, even when the image generator is adversarially finetuned against a fake image classifier, it is still imperfect and leaves detectable artifacts in certain image patches. Code is available at https://github.com/chail/patch-forensics.
Date issued
2020-08
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Lecture Notes in Computer Science
Publisher
Springer International Publishing
Citation
Chai, Lucy et al. “What Makes Fake Images Detectable? Understanding Properties that Generalize.” ECCV 2020: 16th European Conference on Computer Vision, Lecture Notes in Computer Science, vol. 12371. © 2020 The Author(s)
Version: Author's final manuscript
ISSN
0302-9743