dc.contributor.advisor | Hiroshi Ishii. | en_US |
dc.contributor.author | Leithinger, Daniel | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences. | en_US |
dc.date.accessioned | 2016-03-25T13:40:13Z | |
dc.date.available | 2016-03-25T13:40:13Z | |
dc.date.copyright | 2015 | en_US |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/101848 | |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 165-183). | en_US |
dc.description.abstract | The vision of interacting with computers through our whole body - not only perceiving information visually, but engaging with it through multiple senses - has inspired human-computer interaction (HCI) research for decades. Shape displays address this challenge by rendering dynamic physical shapes through computer-controlled, actuated surfaces that users can view from different angles and touch with their hands to experience digital models, express their ideas, and collaborate with each other. Like kinetic sculptures, shape displays do not merely occupy the physical space around them; they redefine it. By dynamically transforming their surface geometry, they directly push against hands and objects, yet they also form a perceptual connection with the users' gestures and body movements at a distance. Based on this principle of spatial continuity, this thesis introduces a set of interaction techniques that range from touching the interface surface, to interacting with tangible objects on top of it, to engaging with it through gestures. These techniques are implemented on custom-built shape display systems that integrate physical rendering, synchronized visual display, shape sensing, and spatial tracking. On top of this hardware platform, applications for computer-aided design, urban planning, and volumetric data exploration allow users to manipulate data at different scales and modalities. To support remote collaboration, shared telepresence workspaces capture and remotely render the physical shapes of people and objects. Users can modify shared models and handle remote objects, while augmenting their capabilities through altered remote body representations. The insights gained from building these prototype workspaces and from gathering user feedback point towards a future in which computationally transforming materials will enable new types of bodily, spatial interaction with computers. | en_US |
dc.description.statementofresponsibility | by Daniel Leithinger. | en_US |
dc.format.extent | 183 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Architecture. Program in Media Arts and Sciences. | en_US |
dc.title | Grasping information and collaborating through shape displays | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph. D. | en_US |
dc.contributor.department | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | |
dc.identifier.oclc | 942904537 | en_US |