Sparse, Smart Contours to Represent and Edit Images
Author(s)
Dekel, Tali; Gan, Chuang; Krishnan, Dilip; Liu, Ce; Freeman, William T.
Abstract
© 2018 IEEE. We study the problem of reconstructing an image from information stored at contour locations. We show that high-quality reconstructions with high fidelity to the source image can be obtained from sparse input, e.g., comprising less than 6% of image pixels. This is a significant improvement over existing contour-based reconstruction methods, which require much denser input to capture subtle texture information and to ensure image quality. Our model, based on generative adversarial networks, synthesizes texture and details in regions where no input information is provided. The semantic knowledge encoded in our model and the sparsity of the input allow contours to serve as an intuitive interface for semantically aware image manipulation: local edits in the contour domain translate to long-range, coherent changes in pixel space. We can perform complex structural changes, such as changing a facial expression, through simple edits of contours. Our experiments demonstrate that humans, as well as a face recognition system, mostly cannot distinguish between our reconstructions and the source images.
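The sparse input described above — image information stored only at contour locations — can be sketched roughly as follows. This is an illustrative stand-in, not the paper's method: here contours are approximated by high-gradient pixels (the paper's contour extractor, sparsity threshold, and stored features may differ), and the `keep_frac` parameter is an assumption chosen to land near the quoted sub-6% budget.

```python
import numpy as np

def sparse_contour_representation(img, keep_frac=0.05):
    """Keep pixel values only at strong-gradient ("contour") locations.

    img: H x W x 3 float array.
    Returns (coords, values): (N, 2) pixel indices and (N, 3) RGB values
    at the retained contour pixels.
    """
    gray = img.mean(axis=2)                  # simple luminance proxy
    gy, gx = np.gradient(gray)               # central-difference gradients
    mag = np.hypot(gx, gy)                   # gradient magnitude
    # Retain roughly the top `keep_frac` fraction of pixels by magnitude.
    thresh = np.quantile(mag, 1.0 - keep_frac)
    mask = mag >= thresh
    coords = np.argwhere(mask)               # where information is stored
    values = img[mask]                       # what is stored there
    return coords, values

# Toy example: a bright square on a dark background; the representation
# keeps only its border pixels, a small fraction of the image.
img = np.zeros((64, 64, 3))
img[16:48, 16:48] = 1.0
coords, values = sparse_contour_representation(img)
sparsity = len(coords) / (64 * 64)
print(f"stored {len(coords)} of {64 * 64} pixels ({sparsity:.1%})")
```

A generative model trained on such pairs would then be asked to hallucinate plausible texture everywhere the mask is zero, which is what makes contour edits propagate into coherent pixel-space changes.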
Date issued
2018-06
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; MIT-IBM Watson AI Lab
Publisher
IEEE
Citation
Dekel, Tali; Gan, Chuang; Krishnan, Dilip; Liu, Ce; Freeman, William T. 2018. "Sparse, Smart Contours to Represent and Edit Images."
Version: Author's final manuscript