DSpace@MIT

Tiny images

Research and Teaching Output of the MIT Community


dc.contributor.advisor William Freeman
dc.contributor.author Torralba, Antonio
dc.contributor.author Fergus, Rob
dc.contributor.author Freeman, William T.
dc.contributor.other Vision
dc.date.accessioned 2007-04-24T14:01:48Z
dc.date.available 2007-04-24T14:01:48Z
dc.date.issued 2007-04-23
dc.identifier.other MIT-CSAIL-TR-2007-024
dc.identifier.uri http://hdl.handle.net/1721.1/37291
dc.description.abstract The human visual system is remarkably tolerant to degradations in image resolution: in a scene recognition task, human performance is similar whether $32 \times 32$ color images or multi-megapixel images are used. With small images, even object recognition and segmentation are performed robustly by the visual system, despite the objects being unrecognizable in isolation. Motivated by these observations, we explore the space of $32 \times 32$ images using a database of $10^8$ $32 \times 32$ color images gathered from the Internet using image search engines. Each image is loosely labeled with one of the 70,399 non-abstract nouns in English, as listed in the WordNet lexical database. Hence the image database represents a dense sampling of all object categories and scenes. With this dataset, we use nearest neighbor methods to perform object recognition across the $10^8$ images.
dc.format.extent 9 p.
dc.relation.ispartofseries Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
dc.subject Recognition
dc.subject Nearest neighbors methods
dc.subject Image databases
dc.title Tiny images
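
The abstract describes recognition by nearest-neighbor lookup over a large set of $32 \times 32$ color images. A minimal sketch of that idea is below, assuming sum-of-squared-differences (SSD) on raw pixels as the distance and using small random arrays as stand-ins for the $10^8$-image database; the function name and data here are illustrative, not from the report.

```python
import numpy as np

def nearest_neighbor(query, database):
    """Return the index of the database image closest to `query`
    under sum-of-squared-differences (SSD) on raw pixel values."""
    # Flatten each 32x32x3 image to a vector and compute SSD per image.
    diffs = database.reshape(len(database), -1) - query.reshape(-1)
    ssd = np.einsum('ij,ij->i', diffs, diffs)
    return int(np.argmin(ssd))

# Stand-in "tiny image" database: 1000 random 32x32 color images.
rng = np.random.default_rng(0)
database = rng.random((1000, 32, 32, 3)).astype(np.float32)

# A query that is a slightly perturbed copy of image 42 should match it.
query = database[42] + 0.01 * rng.standard_normal((32, 32, 3)).astype(np.float32)
print(nearest_neighbor(query, database))
```

A label for the query could then be read off from the noun attached to the matched image; at the scale of the actual dataset, a brute-force scan like this would need approximate or indexed search to be practical.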


Files in this item

Name Size Format Description
MIT-CSAIL-TR-2007 ... 3.974Mb Postscript
MIT-CSAIL-TR-2007 ... 844.5Kb PDF

This item appears in the following Collection(s)


MIT-Mirage