
dc.contributor.advisor: Deb K. Roy
dc.contributor.author: Hsiao, Kai-yuh, 1977-
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences
dc.date.accessioned: 2008-04-23T12:34:23Z
dc.date.available: 2008-04-23T12:34:23Z
dc.date.copyright: 2007
dc.date.issued: 2007
dc.identifier.uri: http://dspace.mit.edu/handle/1721.1/39258
dc.identifier.uri: http://hdl.handle.net/1721.1/39258
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Includes bibliographical references (p. 139-146).
dc.description.abstract: This thesis presents the Object Schema Model (OSM) for grounded language interaction. Dynamic representations of objects are used as the central point of coordination between actions, sensations, planning, and language use. Objects are modeled as object schemas -- sets of multimodal, object-directed behavior processes -- each of which can make predictions, take actions, and collate sensations in the modalities of touch, vision, and motor control. This process-centered view allows the system to respond continuously to real-world activity, while still viewing objects as stabilized representations for planning and speech interaction. The model can be described from four perspectives, each organizing and manipulating behavior processes in a different way. The first perspective views behavior processes like thread objects, running concurrently to carry out their respective functions. The second perspective organizes the behavior processes into object schemas. The third perspective organizes the behavior processes into plan hierarchies to coordinate actions. The fourth perspective creates new behavior processes in response to language input. Results from interactions with objects are used to update the object schemas, which then influence subsequent plans and actions. A continuous planning algorithm examines the current object schemas to choose between candidate processes according to a set of primary motivations, such as responding to collisions, exploring objects, and interacting with the human. An instance of the model has been implemented using a physical robotic manipulator. The implemented system is able to interpret basic speech acts that relate to perception of, and actions upon, objects in the robot's physical environment.
dc.description.statementofresponsibility: by Kai-yuh Hsiao.
dc.format.extent: 146 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/39258
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program in Media Arts and Sciences
dc.title: Embodied object schemas for grounding language use
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 173610680

