Show simple item record

dc.contributor.author: Miconi, Thomas
dc.contributor.author: Groomes, Laura
dc.contributor.author: Kreiman, Gabriel
dc.date.accessioned: 2015-12-10T18:27:44Z
dc.date.available: 2015-12-10T18:27:44Z
dc.date.issued: 2014-04-25
dc.identifier.uri: http://hdl.handle.net/1721.1/100172
dc.description.abstract: When searching for an object in a scene, how does the brain decide where to look next? Theories of visual search suggest the existence of a global attentional map, computed by integrating bottom-up visual information with top-down, target-specific signals. Where, when and how this integration is performed remains unclear. Here we describe a simple mechanistic model of visual search that is consistent with neurophysiological and neuroanatomical constraints, can localize target objects in complex scenes, and predicts single-trial human behavior in a search task among complex objects. This model posits that target-specific modulation is applied at every point of a retinotopic area selective for complex visual features and implements local normalization through divisive inhibition. The combination of multiplicative modulation and divisive normalization creates an attentional map in which aggregate activity at any location tracks the correlation between input and target features, with relative and controllable independence from bottom-up saliency. We first show that this model can localize objects in both composite images and natural scenes and demonstrate the importance of normalization for successful search. We next show that this model can predict human fixations on single trials, including error and target-absent trials. We argue that this simple model captures non-trivial properties of the attentional system that guides visual search in humans.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.language.iso: en_US
dc.publisher: Center for Brains, Minds and Machines (CBMM), arXiv
dc.relation.ispartofseries: CBMM Memo Series;008
dc.rights: Attribution-NonCommercial 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc/3.0/us/
dc.subject: Pattern Recognition
dc.subject: Vision
dc.subject: Neuroscience
dc.title: A normalization model of visual search predicts single trial human fixations in an object search task.
dc.type: Technical Report
dc.type: Working Paper
dc.type: Other
dc.identifier.citation: arXiv:1404.6453v1
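The abstract describes the model's core computation: target-specific multiplicative modulation of feature responses, followed by divisive normalization by local pooled activity, yielding a map that tracks input-target correlation rather than raw saliency. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the array shapes, the `sigma` semi-saturation constant, and the pooling scheme are all assumptions made for the example.

```python
import numpy as np

def attentional_map(features, target, sigma=1.0):
    """Illustrative sketch (not the paper's code) of multiplicative
    target modulation followed by divisive normalization.

    features: (H, W, C) array of local feature-channel responses.
    target:   (C,) array of target-specific feature weights.
    sigma:    assumed semi-saturation constant for normalization.
    """
    # Multiplicative modulation: weight each feature channel by the
    # corresponding target signal (broadcast over spatial positions).
    modulated = features * target
    # Divisive normalization: divide the summed modulated response at
    # each location by that location's total (unmodulated) activity.
    pooled = features.sum(axis=-1) + sigma
    return modulated.sum(axis=-1) / pooled
```

With this normalization, a location whose features match the target outscores a location with strong but target-irrelevant activity, illustrating the abstract's claim of relative independence from bottom-up saliency.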


Files in this item


This item appears in the following Collection(s)
