
DC Field	Value	Language
dc.contributor.advisor	Andy Lippman.	en_US
dc.contributor.author	Woo, Grace R	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2013-04-12T19:25:58Z
dc.date.available	2013-04-12T19:25:58Z
dc.date.copyright	2012	en_US
dc.date.issued	2012	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/78458
dc.description	Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.	en_US
dc.description	Cataloged from PDF version of thesis.	en_US
dc.description	Includes bibliographical references (p. 97-101).	en_US
dc.description.abstract	This thesis envisions a public space populated with active visible surfaces which appear different to a camera than to the human eye. Thus, they can act as general digital interfaces that transmit machine-compatible data as well as provide relative orientation without being obtrusive. We introduce a personal transceiver peripheral, and demonstrate that this visual environment enables human participants to hear sound only from the location they are looking at, authenticate with proximal surfaces, and gather otherwise imperceptible data from an object in sight. We present a design methodology that assumes the availability of many independent and controllable light transmitters, where each individual transmitter produces light at different color wavelengths. Today, controllable light transmitters take the form of digital billboards, signage and overhead lighting built for human use; light-capturing receivers take the form of mobile cameras and personal video camcorders. Following the software-defined approach, we leverage screens and cameras as parameterized hardware peripherals, allowing flexible development of the proposed framework on general-purpose computers in a manner that is unobtrusive to humans. We develop VRCodes, which display spatio-temporally modulated metamers on active screens, conveying digital and positional information to a rolling-shutter camera; and physically modified optical setups which encode data in a point-spread function, exploiting the camera's wide aperture. Both techniques exploit the fact that the camera sees something different from the human eye. We quantify the full potential of the system by characterizing basic bounds of the parameterized transceiver hardware along with the medium in which it operates. Evaluating performance highlights the underutilized temporal, spatial and frequency dimensions available to the interaction designer concerned with human perception. Results suggest that the one-way point-to-point transmission is good enough for extending the techniques toward a two-way bidirectional model with realizable hardware devices. The new visual environment contains a second data layer for machines that is synthetic and quantifiable; human interactions serve as the context.	en_US
dc.description.statementofresponsibility	by Grace Woo.	en_US
dc.format.extent	101 p.	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	VRCodes : embedding unobtrusive data for new devices in visible light	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph.D.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	832744067	en_US
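
The abstract above describes conveying data through spatio-temporally modulated metamers that a rolling-shutter camera resolves but the human eye does not. The sketch below is only an illustrative simulation of that asymmetry, not code from the thesis; the sensor timings, flicker rate, and function names are assumptions chosen for the demo. It shows how a display alternating between two metameric frames faster than human flicker fusion produces spatial bands in a rolling-shutter capture, while the time-averaged (human-eye) view stays constant.

# Illustrative sketch (assumed parameters, not from the thesis):
# a rolling-shutter sensor turns a high-rate temporal flicker into
# spatial bands, while a long time-average (a human-eye proxy) sees
# only the constant metameric color.
import numpy as np

ROWS = 480                 # sensor rows (illustrative)
ROW_READOUT_S = 30e-6      # assumed per-row readout delay
EXPOSURE_S = 100e-6        # assumed per-row exposure (short keeps bands sharp)
FLICKER_HZ = 120           # screen alternates two metameric frames at this rate

def screen(t, phase=0.0):
    """Return +1 or -1: which of the two metameric frames is shown at time t.
    To the eye the pair averages to the same color; only a fast sensor
    resolves the alternation."""
    return 1.0 if ((t * FLICKER_HZ + phase) % 1.0) < 0.5 else -1.0

def rolling_shutter_capture(phase=0.0):
    """Each sensor row integrates light over its own, slightly delayed window."""
    rows = np.empty(ROWS)
    for r in range(ROWS):
        t0 = r * ROW_READOUT_S
        ts = np.linspace(t0, t0 + EXPOSURE_S, 16)
        rows[r] = np.mean([screen(t, phase) for t in ts])
    return rows

if __name__ == "__main__":
    capture = rolling_shutter_capture()
    # Human-eye proxy: averaging over many flicker periods gives ~0,
    # i.e. the two frames are indistinguishable to a person.
    eye_avg = np.mean([screen(t) for t in np.linspace(0, 1, 100_000)])
    print("eye-average ~", round(float(eye_avg), 3))
    # Camera view: row values alternate sign down the frame -- spatial bands
    # whose phase is what a VRCode-style scheme could key with data bits.
    print("band pattern:", np.sign(capture[::60]).astype(int))

In a VRCode-style scheme the data would be keyed into the flicker phase of individual screen regions rather than a single global phase; the point of this sketch is only the metamer/rolling-shutter asymmetry between eye and camera that the abstract relies on.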

