Understanding Human Perception Through Mooney Faces
Author(s)
Arora, Riya
Advisors
Siegel, Max
Egger, Bernhard
Tenenbaum, Joshua B.
Abstract
Human vision is remarkably tolerant to image distortions: even when every pixel in an image has been destructively altered, as in classic Mooney displays, humans can still extract information about identity, pose, and more. Most current deep learning computer vision models perform well on standard face images but struggle with stimuli that differ from their training data, such as Mooney faces. What makes human perception so comparatively robust? We consider a version of the analysis-by-synthesis proposal, in which visual input is interpreted by inverting a model of image formation, as a candidate account of human visual perception. Taking Mooney faces as a case study, we evaluate the model against human performance on a test task, head pose estimation, with the objective of replicating human perception. Previous human psychophysical studies have identified an illusion in which the perceived pose of a Mooney face differs from the pose recovered from an uncorrupted image. The analysis-by-synthesis model does not show a similar effect.
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology