
dc.contributor.advisor  Walter Bender.  en_US
dc.contributor.author  Morgan, Bo  en_US
dc.contributor.other  Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences  en_US
dc.date.accessioned  2007-05-16T18:28:14Z
dc.date.available  2007-05-16T18:28:14Z
dc.date.copyright  2006  en_US
dc.date.issued  2006  en_US
dc.identifier.uri  http://hdl.handle.net/1721.1/37383
dc.description  Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.  en_US
dc.description  Includes bibliographical references (p. 105-109) and index.  en_US
dc.description.abstract  Embedded-sensor platforms are advancing toward such sophistication that they can differentiate between subtle actions; for example, when placed in a wristwatch, such a platform can tell whether a person is shaking hands or turning a doorknob. Sensors placed on objects in the environment now report many parameters, including object location, movement, sound, and temperature. A persistent problem, however, is describing these sense data in meaningful human language. This problem appears across domains ranging from organizational security surveillance to individual activity journaling. Previous models of activity recognition pigeonhole descriptions into small, formal categories specified in advance; for example, location is often categorized as "at home" or "at the office." These models have not been able to adapt to the wider range of complex, dynamic, and idiosyncratic human activities. We hypothesize that commonsense, semantically related knowledge bases can be used to bootstrap learning algorithms for classifying and recognizing human activities from sensor data. Our system, LifeNet, is a first-person commonsense inference model: a graph whose nodes are drawn from a large repository of commonsense assertions expressed in human-language phrases. LifeNet is used to construct a mapping between streams of sensor data and partially ordered sequences of events co-located in time and space. Further, by gathering sensor data in vivo, we are able to validate and extend the commonsense knowledge from which LifeNet is derived. LifeNet is evaluated by its performance on a sensor-network platform distributed throughout an office environment. We hypothesize that mapping sensor data into LifeNet will act as a "semantic mirror," meaningfully interpreting sensory data as cohesive patterns in order to understand and predict human action.  en_US
dc.description.statementofresponsibility  by Bo Morgan.  en_US
dc.format.extent  109 p.  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582
dc.subject  Architecture. Program in Media Arts and Sciences  en_US
dc.title  Learning commonsense human-language descriptions from temporal and spatial sensor-network data  en_US
dc.type  Thesis  en_US
dc.description.degree  S.M.  en_US
dc.contributor.department  Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc  122904958  en_US
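
The abstract above describes LifeNet as a graph of commonsense event phrases onto which streams of sensor readings are mapped and then interpreted through their temporal neighbors. The minimal Python sketch below illustrates that general idea only; every event phrase, sensor label, link weight, and function name in it is invented for illustration and is not the thesis's actual model, data, or code.

    # Toy illustration of a LifeNet-style interpretation step.
    # All phrases, labels, and weights below are invented examples.
    from collections import defaultdict

    # Commonsense assertions as weighted links between first-person
    # event phrases (events that tend to co-occur or follow each other).
    LINKS = {
        ("I unlock my office door", "I sit down at my desk"): 0.8,
        ("I sit down at my desk", "I type on my keyboard"): 0.9,
        ("I type on my keyboard", "I attend a meeting"): 0.3,
    }

    # Crude mapping from raw sensor labels to candidate event phrases.
    SENSOR_TO_EVENTS = {
        "door_motion": ["I unlock my office door"],
        "chair_pressure": ["I sit down at my desk"],
        "keyboard_sound": ["I type on my keyboard"],
    }

    def neighbors(node):
        """Yield (neighbor, weight) pairs linked to a node, in either direction."""
        for (a, b), weight in LINKS.items():
            if a == node:
                yield b, weight
            elif b == node:
                yield a, weight

    def interpret(sensor_stream):
        """Score candidate event phrases for a time-ordered sensor stream.

        Each reading activates its candidate events, and activation spreads
        one hop along commonsense links, so events that are consistent with
        their temporal neighbors accumulate higher scores.
        """
        scores = defaultdict(float)
        for label in sensor_stream:
            for event in SENSOR_TO_EVENTS.get(label, []):
                scores[event] += 1.0
                for neighbor, weight in neighbors(event):
                    scores[neighbor] += weight
        return sorted(scores.items(), key=lambda item: -item[1])

    if __name__ == "__main__":
        stream = ["door_motion", "chair_pressure", "keyboard_sound"]
        for event, score in interpret(stream):
            print(f"{score:4.1f}  {event}")

Run as a script, this ranks candidate event phrases by how consistent each is with its temporally neighboring readings, a toy analogue of the "semantic mirror" interpretation the abstract describes.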

