Show simple item record

dc.contributor.author	Ehinger, Krista A.
dc.contributor.author	Hidalgo-Sotelo, Barbara Irene
dc.contributor.author	Torralba, Antonio
dc.contributor.author	Oliva, Aude
dc.date.accessioned	2012-05-25T16:03:43Z
dc.date.available	2012-05-25T16:03:43Z
dc.date.issued	2009-08
dc.identifier.issn	1350-6285
dc.identifier.issn	1464-0716
dc.identifier.uri	http://hdl.handle.net/1721.1/70942
dc.description.abstract	How predictable are human eye movements during search in real world scenes? We recorded 14 observers’ eye movements as they performed a search task (person detection) in 912 outdoor scenes. Observers were highly consistent in the regions fixated during search, even when the target was absent from the scene. These eye movements were used to evaluate computational models of search guidance from three sources: Saliency, target features, and scene context. Each of these models independently outperformed a cross-image control in predicting human fixations. Models that combined sources of guidance ultimately predicted 94% of human agreement, with the scene context component providing the most explanatory power. None of the models, however, could reach the precision and fidelity of an attentional map defined by human fixations. This work puts forth a benchmark for computational models of search in real world scenes. Further improvements in modelling should capture mechanisms underlying the selectivity of observers’ fixations during search.	en_US
dc.description.sponsorship	National Eye Institute (Integrative Training Program in Vision grant T32 EY013935)	en_US
dc.description.sponsorship	Massachusetts Institute of Technology (Singleton Graduate Research Fellowship)	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (Graduate Research Fellowship)	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (CAREER Award (0546262))	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (NSF contract (0705677))	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (CAREER Award (0747120))	en_US
dc.language.iso	en_US
dc.publisher	Taylor & Francis Group	en_US
dc.relation.isversionof	http://dx.doi.org/10.1080/13506280902834720	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike 3.0	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/3.0/	en_US
dc.source	PubMed Central	en_US
dc.title	Modelling search for people in 900 scenes: A combined source model of eye guidance	en_US
dc.type	Article	en_US
dc.identifier.citation	Ehinger, Krista A. et al. “Modelling Search for People in 900 Scenes: A Combined Source Model of Eye Guidance.” Visual Cognition 17.6-7 (2009): 945–978. Web.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.contributor.approver	Oliva, Aude
dc.contributor.mitauthor	Oliva, Aude
dc.contributor.mitauthor	Ehinger, Krista A.
dc.contributor.mitauthor	Hidalgo-Sotelo, Barbara Irene
dc.contributor.mitauthor	Torralba, Antonio
dc.relation.journal	Visual Cognition	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dspace.orderedauthors	Ehinger, Krista A.; Hidalgo-Sotelo, Barbara Irene; Torralba, Antonio; Oliva, Aude	en
dc.identifier.orcid	https://orcid.org/0000-0003-4915-0256
mit.license	OPEN_ACCESS_POLICY	en_US
mit.metadata.status	Complete

