Following Gaze in Video
Author(s)
Recasens Continente, Adria; Vondrick, Carl Martin; Khosla, Aditya; Torralba, Antonio
Abstract
Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze in video by predicting where a person (in the video) is looking even when the object being looked at is in a different frame. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one frame with a person in it, our model estimates a density for gaze location in every frame and the probability that the person is looking in that particular frame. A key aspect of our approach is an end-to-end model that jointly estimates saliency, gaze pose, and geometric relationships between views while using only gaze as supervision. Visualizations suggest that the model learns to solve these intermediate tasks internally without additional supervision. Experiments show that our approach follows gaze in video better than existing approaches, enabling a richer understanding of human activities in video.
Keywords
Motion pictures, Head, Three-dimensional displays, Predictive models, Geometry, Semantics, gaze tracking, learning (artificial intelligence), video signal processing
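As a rough illustration of the architecture the abstract describes, the sketch below wires three pathways (saliency over a target frame, a gaze cone from the head crop, and a cross-frame geometric cue) into a per-frame gaze density plus a probability that the person is looking within that frame. All module names, layer sizes, and the fusion scheme are illustrative assumptions in PyTorch, not the authors' released code.

```python
# Hypothetical sketch of the joint model described in the abstract.
# The pathway structure and fusion are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GazeFollowSketch(nn.Module):
    def __init__(self, feat_dim=256, heatmap_size=13):
        super().__init__()
        self.heatmap_size = heatmap_size
        # Saliency pathway: scores where salient objects are in the target frame.
        self.saliency = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(heatmap_size),
            nn.Conv2d(feat_dim, 1, 1),
        )
        # Gaze pathway: predicts a coarse gaze cone from the head crop.
        self.gaze = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, heatmap_size * heatmap_size),
        )
        # Transformation pathway: relates source and target views; reduced
        # here to a single "the gazed object is in this frame" logit.
        self.transform = nn.Sequential(
            nn.Conv2d(6, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, source_frame, target_frame, head_crop):
        b = target_frame.size(0)
        sal = self.saliency(target_frame)                    # (B, 1, H, H)
        cone = self.gaze(head_crop).view(
            b, 1, self.heatmap_size, self.heatmap_size)      # (B, 1, H, H)
        # Gaze density: agreement of saliency and gaze cone, normalized.
        density = F.softmax((sal * cone).flatten(1), dim=1).view(
            b, 1, self.heatmap_size, self.heatmap_size)
        # Probability that the person is looking within this target frame.
        in_frame = torch.sigmoid(
            self.transform(torch.cat([source_frame, target_frame], dim=1))
        ).squeeze(1)
        return density, in_frame


# Usage: rank candidate frames by in-frame probability, then read off the density.
model = GazeFollowSketch()
src = torch.randn(2, 3, 224, 224)
tgt = torch.randn(2, 3, 224, 224)
head = torch.randn(2, 3, 64, 64)
density, p_in_frame = model(src, tgt, head)
print(density.shape, p_in_frame.shape)  # torch.Size([2, 1, 13, 13]) torch.Size([2])
```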
Date issued
2017-12
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2017 IEEE International Conference on Computer Vision (ICCV)
Publisher
Institute of Electrical and Electronics Engineers
Citation
Recasens Continente, Adria, et al. "Following Gaze in Video." 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 2017. Institute of Electrical and Electronics Engineers, December 2017. © IEEE
Version: Author's final manuscript
ISSN
2380-7504