Learning to predict where humans look
Author(s)
Judd, Tilke M.; Ehinger, Krista A.; Durand, Fredo; Torralba, Antonio
Download: Tilke-2009-Learning to predict where humans look.pdf (3.587 MB)
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
For many applications in graphics, design, and human-computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data from 15 viewers on 1003 images and used this database as training and testing examples to learn a model of saliency based on low-, middle-, and high-level image features. This large database of eye tracking data is publicly available with this paper.
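
The approach the abstract describes, learning a per-pixel saliency classifier from image features with recorded fixations as labels, can be summarized in a short sketch. The code below is not the authors' released implementation: the feature extraction is stubbed to an (H, W, F) array, and the sampling thresholds and the use of scikit-learn's LinearSVC are assumptions made only to illustrate the training-and-prediction loop.

# Minimal sketch, assuming per-pixel feature maps and fixation density maps are
# already available; this is an illustration, not the authors' code.
import numpy as np
from sklearn.svm import LinearSVC

def sample_training_pixels(feature_map, fixation_map, n_per_class=10, rng=None):
    """Sample positive pixels from the most-fixated locations and negatives
    from the least-fixated ones.

    feature_map:  (H, W, F) array of low/mid/high-level features per pixel.
    fixation_map: (H, W) fixation density map (e.g. blurred fixation counts).
    """
    rng = rng or np.random.default_rng(0)
    flat_fix = fixation_map.ravel()
    flat_feat = feature_map.reshape(-1, feature_map.shape[-1])

    # Assumed split: treat the most-fixated 20% of pixels as positives and the
    # least-fixated 70% as negatives, then draw a few samples from each pool.
    order = np.argsort(flat_fix)
    pos_pool = order[-int(0.2 * flat_fix.size):]
    neg_pool = order[: int(0.7 * flat_fix.size)]
    pos = rng.choice(pos_pool, n_per_class, replace=False)
    neg = rng.choice(neg_pool, n_per_class, replace=False)

    X = np.vstack([flat_feat[pos], flat_feat[neg]])
    y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])
    return X, y

def train_saliency_model(feature_maps, fixation_maps):
    """Fit a linear SVM on pixels sampled from every training image."""
    Xs, ys = zip(*(sample_training_pixels(f, m)
                   for f, m in zip(feature_maps, fixation_maps)))
    model = LinearSVC(C=1.0)
    model.fit(np.vstack(Xs), np.concatenate(ys))
    return model

def predict_saliency(model, feature_map):
    """Score every pixel with the learned model to produce a saliency map."""
    h, w, f = feature_map.shape
    scores = model.decision_function(feature_map.reshape(-1, f))
    return scores.reshape(h, w)

At test time the learned weights score every pixel's feature vector, and the resulting map can be normalized before comparison against held-out fixations.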
Date issued
2010-05
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2009 IEEE 12th International Conference on Computer Vision
Publisher
Institute of Electrical and Electronics Engineers
Citation
Judd, T., K. Ehinger, F. Durand, and A. Torralba. “Learning to Predict Where Humans Look.” 2009 IEEE 12th International Conference on Computer Vision, 2009, pp. 2106-2113. © 2010 IEEE.
Version: Final published version
Other identifiers
INSPEC Accession Number: 11367893
ISBN
978-1-4244-4420-5
ISSN
1550-5499