Modelling search for people in 900 scenes: A combined source model of eye guidance
Author(s)
Ehinger, Krista A.; Hidalgo-Sotelo, Barbara Irene; Torralba, Antonio; Oliva, Aude
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
How predictable are human eye movements during search in real-world scenes? We recorded 14 observers’ eye movements as they performed a search task (person detection) in 912 outdoor scenes. Observers were highly consistent in the regions fixated during search, even when the target was absent from the scene. These eye movements were used to evaluate computational models of search guidance from three sources: saliency, target features, and scene context. Each of these models independently outperformed a cross-image control in predicting human fixations. Models that combined sources of guidance ultimately predicted 94% of human agreement, with the scene context component providing the most explanatory power. None of the models, however, could reach the precision and fidelity of an attentional map defined by human fixations. This work puts forth a benchmark for computational models of search in real-world scenes. Further improvements in modelling should capture the mechanisms underlying the selectivity of observers’ fixations during search.
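The combined-sources idea in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: each guidance map (saliency, target features, scene context) is normalized and the three are combined as a weighted pointwise product, yielding a single attention map over the image. The function name, the weighting scheme, and the toy maps below are all hypothetical.

```python
import numpy as np

def combine_guidance_maps(saliency, target_features, context, weights=(1.0, 1.0, 1.0)):
    """Hypothetical sketch: normalize each guidance map to [0, 1], then
    combine them as a weighted pointwise product (exponents act as weights),
    so that all three sources must agree for a region to score highly."""
    def normalize(m):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    s, t, c = (normalize(m) for m in (saliency, target_features, context))
    ws, wt, wc = weights
    combined = (s ** ws) * (t ** wt) * (c ** wc)
    total = combined.sum()
    # Normalize to a probability map over image locations when possible.
    return combined / total if total > 0 else combined

# Toy example: 4x4 maps where scene context rules out the top half.
rng = np.random.default_rng(0)
saliency = rng.random((4, 4))
target = rng.random((4, 4))
context = np.vstack([np.zeros((2, 4)), np.ones((2, 4))])
att = combine_guidance_maps(saliency, target, context)
```

Under this multiplicative scheme, regions assigned zero weight by any single source (here, the zero-context top half) receive no attentional mass at all, which is one simple way to let scene context dominate the combination.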
Date issued
2009-08
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Visual Cognition
Publisher
Taylor & Francis Group
Citation
Ehinger, Krista A. et al. “Modelling Search for People in 900 Scenes: A Combined Source Model of Eye Guidance.” Visual Cognition 17.6-7 (2009): 945–978. Web.
Version: Author's final manuscript
ISSN
1350-6285 (print)
1464-0716 (online)