dc.contributor.advisor | Aude Oliva and Antonio Torralba. | en_US |
dc.contributor.author | Olsson, Catherine Anne White | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2014-03-06T15:43:36Z | |
dc.date.available | 2014-03-06T15:43:36Z | |
dc.date.copyright | 2013 | en_US |
dc.date.issued | 2013 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/85460 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 115-119). | en_US |
dc.description.abstract | Recent work in human and machine vision has increasingly focused on the problem of scene recognition. Scene types are largely defined by the actions one might typically do there: an office is a place someone would typically "work". I introduce the SUN Action database (short for "Scene UNderstanding - Action"): the first effort to collect and analyze free-response data from human subjects about the typical actions associated with different scene types. Responses were gathered on Mechanical Turk for twenty images per category, each depicting a characteristic view of one of 397 different scene types. The distribution of phrases is shown to be heavy-tailed and Zipf-like, whereas the distribution of semantic roots is not Zipf-like. Categories strongly associated with particular tasks or actions are shown to have lower overall diversity of responses. A hierarchical clustering analysis reveals a heterogeneous clustering structure, with some categories readily grouping together, and other categories remaining apart even at coarse clustering levels. Finally, two simple classifiers are introduced for predicting scene types from associated actions: a nearest centroid classifier, and an empirical maximum likelihood classifier. Both classifiers demonstrate greater than 50% classification performance in a 397-way classification task. | en_US |
dc.description.statementofresponsibility | by Catherine Anne White Olsson. | en_US |
dc.format.extent | 119 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | The SUN Action database : collecting and analyzing typical actions for visual scene types | en_US |
dc.title.alternative | Scene Understanding - Action database | en_US |
dc.title.alternative | Collecting and analyzing typical actions for visual scene types | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 870968994 | en_US |