Field | Value | Language |
--- | --- | --- |
dc.contributor.author | Ehinger, Krista A | |
dc.contributor.author | Rosenholtz, Ruth Ellen | |
dc.date.accessioned | 2018-02-05T14:40:45Z | |
dc.date.available | 2018-02-05T14:40:45Z | |
dc.date.issued | 2016-11 | |
dc.date.submitted | 2015-06 | |
dc.identifier.issn | 1534-7362 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/113404 | |
dc.description.abstract | People are good at rapidly extracting the "gist" of a scene at a glance, meaning with a single fixation. It is generally presumed that this performance cannot be mediated by the same encoding that underlies tasks such as visual search, for which researchers have suggested that selective attention may be necessary to bind features from multiple preattentively computed feature maps. This has led to the suggestion that scenes might be special, perhaps utilizing an unlimited capacity channel, perhaps due to brain regions dedicated to this processing. Here we test whether a single encoding might instead underlie all of these tasks. In our study, participants performed various navigation-relevant scene perception tasks while fixating photographs of outdoor scenes. Participants answered questions about scene category, spatial layout, geographic location, or the presence of objects. We then asked whether an encoding model previously shown to predict performance in crowded object recognition and visual search might also underlie the performance on those tasks. We show that this model does a reasonably good job of predicting performance on these scene tasks, suggesting that scene tasks may not be so special; they may rely on the same underlying encoding as search and crowded object recognition. We also demonstrate that a number of alternative "models" of the information available in the periphery also do a reasonable job of predicting performance at the scene tasks, suggesting that scene tasks alone may not be ideal for distinguishing between models. Keywords: scene perception; peripheral vision; crowding; parafoveal vision; navigation | en_US |
dc.description.sponsorship | National Science Foundation (U.S.) (Award IIS-1607486) | en_US |
dc.publisher | Association for Research in Vision and Ophthalmology (ARVO) | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1167/16.2.13 | en_US |
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US |
dc.source | Journal of Vision | en_US |
dc.title | A general account of peripheral encoding also predicts scene perception performance | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Ehinger, Krista A., and Rosenholtz, Ruth. “A General Account of Peripheral Encoding Also Predicts Scene Perception Performance.” Journal of Vision 16, 2 (November 2016): 13 © 2016 The Author(s) | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences | en_US |
dc.contributor.mitauthor | Ehinger, Krista A | |
dc.contributor.mitauthor | Rosenholtz, Ruth Ellen | |
dc.relation.journal | Journal of Vision | en_US |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dc.date.updated | 2018-02-02T18:18:13Z | |
dspace.orderedauthors | Ehinger, Krista A.; Rosenholtz, Ruth | en_US |
dspace.embargo.terms | N | en_US |
mit.license | PUBLISHER_POLICY | en_US |