Show simple item record

dc.contributor.advisor	Hiroshi Ishii.	en_US
dc.contributor.author	Leithinger, Daniel	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences.	en_US
dc.date.accessioned	2016-03-25T13:40:13Z
dc.date.available	2016-03-25T13:40:13Z
dc.date.copyright	2015	en_US
dc.date.issued	2015	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/101848
dc.description	Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.	en_US
dc.description	Cataloged from PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 165-183).	en_US
dc.description.abstract	The vision of interacting with computers through our whole body, engaging with information through multiple senses rather than only perceiving it visually, has inspired human-computer interaction (HCI) research for decades. Shape displays address this challenge by rendering dynamic physical shapes through computer-controlled, actuated surfaces that users can view from different angles and touch with their hands to experience digital models, express their ideas, and collaborate with each other. Like kinetic sculptures, shape displays do not just occupy the physical space around them; they redefine it. By dynamically transforming their surface geometry, they directly push against hands and objects, yet they also form a perceptual connection with the user's gestures and body movements at a distance. Based on this principle of spatial continuity, this thesis introduces a set of interaction techniques that range from touching the interface surface, to interacting with tangible objects on top of it, to engaging through gestures in relation to it. These techniques are implemented on custom-built shape display systems that integrate physical rendering, synchronized visual display, shape sensing, and spatial tracking. On top of this hardware platform, applications for computer-aided design, urban planning, and volumetric data exploration allow users to manipulate data at different scales and modalities. To support remote collaboration, shared telepresence workspaces capture and remotely render the physical shapes of people and objects. Users can modify shared models and handle remote objects, while augmenting their capabilities through altered remote body representations. The insights gained from building these prototype workspaces and from gathering user feedback point toward a future in which computationally transforming materials will enable new types of bodily, spatial interaction with computers.	en_US
dc.description.statementofresponsibility	by Daniel Leithinger.	en_US
dc.format.extent	183 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Architecture. Program in Media Arts and Sciences.	en_US
dc.title	Grasping information and collaborating through shape displays	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph. D.	en_US
dc.contributor.department	Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc	942904537	en_US

