dc.contributor.author: Ehinger, Krista A
dc.contributor.author: Rosenholtz, Ruth Ellen
dc.date.accessioned: 2018-02-05T14:40:45Z
dc.date.available: 2018-02-05T14:40:45Z
dc.date.issued: 2016-11
dc.date.submitted: 2015-06
dc.identifier.issn: 1534-7362
dc.identifier.uri: http://hdl.handle.net/1721.1/113404
dc.description.abstract: People are good at rapidly extracting the "gist" of a scene at a glance, meaning with a single fixation. It is generally presumed that this performance cannot be mediated by the same encoding that underlies tasks such as visual search, for which researchers have suggested that selective attention may be necessary to bind features from multiple preattentively computed feature maps. This has led to the suggestion that scenes might be special, perhaps utilizing an unlimited capacity channel, perhaps due to brain regions dedicated to this processing. Here we test whether a single encoding might instead underlie all of these tasks. In our study, participants performed various navigation-relevant scene perception tasks while fixating photographs of outdoor scenes. Participants answered questions about scene category, spatial layout, geographic location, or the presence of objects. We then asked whether an encoding model previously shown to predict performance in crowded object recognition and visual search might also underlie the performance on those tasks. We show that this model does a reasonably good job of predicting performance on these scene tasks, suggesting that scene tasks may not be so special; they may rely on the same underlying encoding as search and crowded object recognition. We also demonstrate that a number of alternative "models" of the information available in the periphery also do a reasonable job of predicting performance at the scene tasks, suggesting that scene tasks alone may not be ideal for distinguishing between models. Keywords: scene perception; peripheral vision; crowding; parafoveal vision; navigation
dc.description.sponsorship: National Science Foundation (U.S.) (Award IIS-1607486)
dc.publisher: Association for Research in Vision and Ophthalmology (ARVO)
dc.relation.isversionof: http://dx.doi.org/10.1167/16.2.13
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source: Journal of Vision
dc.title: A general account of peripheral encoding also predicts scene perception performance
dc.type: Article
dc.identifier.citation: Ehinger, Krista A., and Rosenholtz, Ruth. “A General Account of Peripheral Encoding Also Predicts Scene Perception Performance.” Journal of Vision 16, 2 (November 2016): 13 © 2016 The Author(s)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.mitauthor: Ehinger, Krista A
dc.contributor.mitauthor: Rosenholtz, Ruth Ellen
dc.relation.journal: Journal of Vision
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2018-02-02T18:18:13Z
dspace.orderedauthors: Ehinger, Krista A.; Rosenholtz, Ruth
dspace.embargo.terms: N
mit.license: PUBLISHER_POLICY