
dc.contributor.advisor: Tomaso Poggio
dc.contributor.author: Poggio, Tomaso
dc.contributor.author: Serre, Thomas
dc.contributor.author: Tan, Cheston
dc.contributor.author: Chikkerur, Sharat
dc.contributor.other: Center for Biological and Computational Learning (CBCL)
dc.date.accessioned: 2009-06-22T17:15:20Z
dc.date.available: 2009-06-22T17:15:20Z
dc.date.issued: 2009-06-20
dc.identifier.uri: http://hdl.handle.net/1721.1/45598
dc.description.abstract: Apart from helping shed light on human perceptual mechanisms, modeling visual attention has important applications in computer vision. It has been shown to be useful in priming object detection, pruning interest points, quantifying visual clutter, and predicting human eye movements. Prior work has relied either on purely bottom-up approaches or on top-down schemes using simple low-level features. In this paper, we outline a top-down visual attention model based on shape-based features. The same shape-based representation is used to represent both the objects and the scenes that contain them. The spatial priors imposed by the scene and the feature priors imposed by the target object are combined in a Bayesian framework to generate a task-dependent saliency map. We show that our approach can predict the locations of objects as well as match human eye movements (92% overlap with human observers). We also show that the proposed approach performs better than existing bottom-up and top-down computational models.
dc.format.extent: 10 p.
dc.relation.ispartofseries: CBCL-278
dc.relation.ispartofseries: MIT-CSAIL-TR-2009-029
dc.subject: attention
dc.subject: bayesian network
dc.title: An integrated model of visual attention using shape-based features
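The abstract describes combining scene-imposed spatial priors with target-imposed feature priors in a Bayesian framework to produce a task-dependent saliency map. A minimal sketch of that combination rule, assuming (since the paper's details are not reproduced in this record) conditionally independent feature and location evidence and hypothetical per-pixel likelihood maps:

```python
import numpy as np

def saliency_map(feature_likelihood, spatial_prior):
    """Combine shape-based feature evidence with a scene-imposed
    spatial prior into a task-dependent saliency map.

    Assumes, naive-Bayes style, that feature (F) and location (L)
    evidence are conditionally independent given the target, so the
    per-pixel posterior factorizes as
        p(target | F, L) proportional to p(F | target) * p(L | target).
    Both inputs are hypothetical per-pixel maps, not the paper's
    actual features.
    """
    s = feature_likelihood * spatial_prior  # elementwise product of the two priors
    return s / s.sum()                      # normalize to a probability map

# Toy 4x4 scene: feature evidence peaks at the bottom-right pixel,
# while the scene prior favors the bottom half of the image.
feat = np.ones((4, 4))
feat[3, 3] = 10.0
prior = np.ones((4, 4))
prior[2:, :] = 3.0
sal = saliency_map(feat, prior)
```

Under these assumptions the most salient location is the one where both feature evidence and the spatial prior are high (here the bottom-right pixel), which mirrors the abstract's claim that object location can be predicted from the combined map.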

