Learning Low-Level Priors from Images for Inference and Synthesis
Author(s)
Sharma, Prafull
Advisor
Durand, Fredo
Freeman, William T.
Abstract
With recent advances in computer vision, scene understanding has become critical for both downstream applications and photorealistic synthesis. Tasks such as image classification, semantic segmentation, and text-to-image generation parse a scene in terms of high-level properties of objects and scenes. Along with understanding and creating visual media along these dimensions, it is important to understand low-level information such as geometry, material, lighting configuration, and camera parameters. Such understanding would help with tasks such as material acquisition, fine-grained synthesis, and robotics. In this thesis, we discuss learning priors over low-level properties to facilitate inference of geometry, static-dynamic disentanglement, and material properties. We present a self-supervised method to construct a persistent representation for inferring geometry and appearance from a single image at test time. This representation can be leveraged to infer static-dynamic disentanglement and can be used for 3D-aware scene editing. We employ representations from a pre-trained visual encoder to select similar materials in images. Additionally, we demonstrate fine-grained control over material properties for image editing using pre-trained text-to-image models. This fine-grained control is achieved by preserving the photorealistic synthesis ability of text-to-image models while learning control from synthetic rendered images.
Date issued
2024-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology