dc.contributor.advisor: Aude Oliva and Antonio Torralba. [en_US]
dc.contributor.author: Olsson, Catherine Anne White [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2014-03-06T15:43:36Z
dc.date.available: 2014-03-06T15:43:36Z
dc.date.copyright: 2013 [en_US]
dc.date.issued: 2013 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/85460
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. [en_US]
dc.description: Cataloged from PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 115-119). [en_US]
dc.description.abstract: Recent work in human and machine vision has increasingly focused on the problem of scene recognition. Scene types are largely defined by the actions one might typically do there: an office is a place someone would typically "work". I introduce the SUN Action database (short for "Scene UNderstanding - Action"): the first effort to collect and analyze free-response data from human subjects about the typical actions associated with different scene types. Responses were gathered on Mechanical Turk for twenty images per category, each depicting a characteristic view of one of 397 different scene types. The distribution of phrases is shown to be heavy-tailed and Zipf-like, whereas the distribution of semantic roots is not Zipf-like. Categories strongly associated with particular tasks or actions are shown to have lower overall diversity of responses. A hierarchical clustering analysis reveals a heterogeneous clustering structure, with some categories readily grouping together, and other categories remaining apart even at coarse clustering levels. Finally, two simple classifiers are introduced for predicting scene types from associated actions: a nearest centroid classifier, and an empirical maximum likelihood classifier. Both classifiers demonstrate greater than 50% classification performance in a 397-way classification task. [en_US]
dc.description.statementofresponsibility: by Catherine Anne White Olsson. [en_US]
dc.format.extent: 119 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: The SUN Action database : collecting and analyzing typical actions for visual scene types [en_US]
dc.title.alternative: Scene Understanding - Action database [en_US]
dc.title.alternative: Collecting and analyzing typical actions for visual scene types [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 870968994 [en_US]
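The abstract above mentions a nearest centroid classifier for predicting scene types from free-response action phrases. As a rough illustration of that general technique only — the thesis's actual features, similarity measure, and data are not reproduced here, and all category names and phrases below are invented — a minimal bag-of-words nearest-centroid sketch might look like:

```python
# Hypothetical sketch of a nearest centroid classifier over bag-of-words
# action-phrase vectors. NOT the thesis implementation; categories and
# phrases are invented examples, not SUN Action data.
import math
from collections import Counter

def bow(phrases):
    """Bag-of-words count vector for a list of action phrases."""
    return Counter(phrases)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    """Mean of a list of sparse count vectors."""
    total = Counter()
    for v in vectors:
        total.update(v)
    n = len(vectors)
    return Counter({w: c / n for w, c in total.items()})

class NearestCentroidClassifier:
    def fit(self, labeled):
        # labeled: {category: [phrase list, phrase list, ...]}
        self.centroids = {cat: centroid([bow(p) for p in plists])
                          for cat, plists in labeled.items()}
        return self

    def predict(self, phrases):
        # Assign the category whose centroid is most similar.
        v = bow(phrases)
        return max(self.centroids,
                   key=lambda cat: cosine(v, self.centroids[cat]))

clf = NearestCentroidClassifier().fit({
    "office": [["work", "type", "read"], ["work", "meet"]],
    "beach": [["swim", "sunbathe"], ["swim", "play"]],
})
print(clf.predict(["work", "read"]))  # matches the "office" centroid
```

Each category is summarized by the mean of its response vectors, and a query is assigned to the category with the most similar centroid; the thesis evaluates this style of classifier across all 397 scene categories.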