dc.contributor.advisor | Andy Lippman. | en_US |
dc.contributor.author | Woo, Grace R | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2013-04-12T19:25:58Z | |
dc.date.available | 2013-04-12T19:25:58Z | |
dc.date.copyright | 2012 | en_US |
dc.date.issued | 2012 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/78458 | |
dc.description | Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (p. 97-101). | en_US |
dc.description.abstract | This thesis envisions a public space populated with active visible surfaces that appear different to a camera than to the human eye. Thus, they can act as general digital interfaces that transmit machine-compatible data as well as provide relative orientation without being obtrusive. We introduce a personal transceiver peripheral, and demonstrate that this visual environment enables human participants to hear sound only from the location they are looking at, authenticate with proximal surfaces, and gather otherwise imperceptible data from an object in sight. We present a design methodology that assumes the availability of many independent and controllable light transmitters, where each individual transmitter produces light at a different color wavelength. Today, controllable light transmitters take the form of digital billboards, signage and overhead lighting built for human use; light-capturing receivers take the form of mobile cameras and personal video camcorders. Following the software-defined approach, we leverage screens and cameras as parameterized hardware peripherals, allowing flexibility and development of the proposed framework on general-purpose computers in a manner that is unobtrusive to humans. We develop VRCodes, which display spatio-temporally modulated metamers on active screens, thus conveying digital and positional information to a rolling-shutter camera; and physically modified optical setups, which encode data in a point-spread function, thus exploiting the camera's wide aperture. These techniques exploit the fact that the camera sees something different from the human eye. We quantify the full potential of the system by characterizing basic bounds of the parameterized transceiver hardware along with the medium in which it operates. Evaluating performance highlights the underutilized temporal, spatial and frequency dimensions available to the interaction designer concerned with human perception. Results suggest that one-way, point-to-point transmission is good enough to extend the techniques toward a bidirectional model with realizable hardware devices. The new visual environment contains a second data layer for machines that is synthetic and quantifiable; human interactions serve as the context. | en_US |
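Editor's note: the abstract's rolling-shutter/metamer mechanism can be illustrated with a minimal sketch. The following Python snippet is not code from the thesis; it only models the general idea under stated assumptions: a screen alternating between two metameric frames at a rate above human flicker fusion, and a camera exposing sensor rows sequentially, so the temporal alternation shows up as spatial bands. All names and parameter values (f_screen, t_row, n_rows) are hypothetical.

    # Illustrative sketch (hypothetical parameters), not the thesis implementation.
    import numpy as np

    f_screen = 120.0        # screen alternates between two metamers at 120 Hz
    t_row = 1.0 / 30000.0   # assumed time offset between successive sensor rows (s)
    n_rows = 480            # rows in the captured image

    # Each row is sampled at a slightly different time, so it "sees" whichever
    # metamer the screen is showing at that instant.
    row_times = np.arange(n_rows) * t_row
    row_sees_A = (np.floor(row_times * f_screen) % 2 == 0)

    # The eye integrates over many alternation periods and perceives the average
    # color; the rolling-shutter camera instead records horizontal bands.
    print("fraction of rows seeing metamer A:", row_sees_A.mean())
    print("approximate band height in rows:", int(round(1.0 / (f_screen * t_row) / 2)))

Under these assumed numbers the camera would record bands roughly 125 rows tall while a human observer would perceive only the averaged (metameric) color, which is the sense in which the data layer is imperceptible to people but readable by the device.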
dc.description.statementofresponsibility | by Grace Woo. | en_US |
dc.format.extent | 101 p. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | VRCodes : embedding unobtrusive data for new devices in visible light | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph.D. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 832744067 | en_US |