
dc.contributor.author: Harari, Daniel
dc.contributor.author: Gao, Tao
dc.contributor.author: Kanwisher, Nancy
dc.contributor.author: Tenenbaum, Joshua
dc.contributor.author: Ullman, Shimon
dc.date.accessioned: 2016-11-30T17:01:05Z
dc.date.available: 2016-11-30T17:01:05Z
dc.date.issued: 2016-11-28
dc.identifier.uri: http://hdl.handle.net/1721.1/105477
dc.description.abstract: Humans are remarkably adept at interpreting the gaze direction of other individuals in their surroundings. This skill is at the core of the ability to engage in joint visual attention, which is essential for establishing social interactions. How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes? These questions pose a challenge from both empirical and computational perspectives, due to the complexity of the visual input in real-life situations. Here we empirically measure human accuracy in perceiving the gaze direction of others in lifelike scenes, and computationally study the sources of information and representations underlying this cognitive capacity. We show that humans perform better in face-to-face conditions than in recorded conditions, and that this advantage is not due to the availability of input dynamics. We further show that humans still perform well when only the eye region is visible, rather than the whole face. We develop a computational model that replicates the pattern of human performance, including the finding that the eye region on its own contains the information required for estimating both head orientation and direction of gaze. Consistent with neurophysiological findings on task-specific face regions in the brain, the model's learned representations reproduce perceptual effects such as the Wollaston illusion when the model is trained to estimate direction of gaze, but not when it is trained to recognize objects or faces. [en_US]
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. [en_US]
dc.language.iso: en_US [en_US]
dc.publisher: Center for Brains, Minds and Machines (CBMM), arXiv [en_US]
dc.relation.ispartofseries: CBMM Memo Series;059
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States [*]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/ [*]
dc.subject: gaze direction [en_US]
dc.subject: social interaction [en_US]
dc.subject: human vision [en_US]
dc.title: Measuring and modeling the perception of natural and unconstrained gaze in humans and machines [en_US]
dc.type: Technical Report [en_US]
dc.type: Working Paper [en_US]
dc.type: Other [en_US]
dc.identifier.citation: arXiv:1611.09819 [en_US]

