
dc.contributor.advisor: Joseph A. Paradiso
dc.contributor.author: Russell, Spencer (Spencer Franklin)
dc.contributor.other: Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences.
dc.date.accessioned: 2016-03-25T13:38:21Z
dc.date.available: 2016-03-25T13:38:21Z
dc.date.copyright: 2015
dc.date.issued: 2015
dc.identifier.uri: http://hdl.handle.net/1721.1/101826
dc.description: Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages [81]-85).
dc.description.abstract: This thesis presents HearThere, a system for presenting spatial audio that preserves alignment between virtual audio sources and the user's environment. HearThere creates an auditory augmented reality with minimal equipment required of the user. Sound designers can create large-scale experiences that sonify a city with no infrastructure, or install tracking anchors to take advantage of sub-meter location information for more refined experiences. Audio is typically presented to listeners via speakers or headphones. Speakers make it extremely difficult to control what sound reaches each ear, which is necessary for accurately spatializing sounds. Headphones make it trivial to send separate left and right channels, but they discard the relationship between the head and the rest of the world, so when the listener turns their head the whole world rotates with them. Head-tracking headphone systems have been proposed and implemented as a best-of-both-worlds solution, but they typically operate only within a small detection area (e.g. Oculus Rift) or with coarse-grained accuracy (e.g. GPS) that makes up-close interactions impossible. HearThere is a multi-technology solution that bridges this gap, providing large-area and outdoor tracking precise enough to imbue nearby objects with virtual sound that maintains spatial persistence as the user moves through the space. Combining this head tracking with bone-conduction headphones that don't occlude the ears will enable true auditory augmented reality, where real and virtual sounds can be seamlessly mixed.
dc.description.statementofresponsibility: by Spencer Russell.
dc.format.extent: 85 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program in Media Arts and Sciences.
dc.title: HearThere : infrastructure for ubiquitous augmented-reality audio
dc.title.alternative: Hear There : infrastructure for ubiquitous augmented-reality audio
dc.title.alternative: Infrastructure for ubiquitous augmented-reality audio
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 941794268
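
The abstract above describes why head tracking matters for spatial audio: a virtual source should stay anchored in the world while the listener's head turns. The sketch below is not taken from the thesis; it is a minimal 2-D illustration of that idea, assuming hypothetical names and a yaw-only head pose, in which a world-fixed source is re-expressed in head-relative coordinates each time the tracker reports a new listener pose.

```python
# Minimal sketch (illustrative only, not the thesis implementation) of
# head-tracked spatialization: the source is fixed in world coordinates,
# and only its head-relative direction changes as the head turns.
import math

def head_relative_source(source_xy, listener_xy, head_yaw_rad):
    """Return (distance, azimuth) of a world-fixed source relative to the head.

    head_yaw_rad is the facing direction measured counterclockwise from +x.
    azimuth is measured from the facing direction, positive to the left.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    world_bearing = math.atan2(dy, dx)        # direction to source in the world frame
    azimuth = world_bearing - head_yaw_rad    # rotate into the head frame
    # wrap to (-pi, pi] so a spatializer can map it directly to left/right
    azimuth = (azimuth + math.pi) % (2 * math.pi) - math.pi
    return distance, azimuth

# Example: a source 3 m due north of the listener.
# Facing north (yaw = 90 deg) the source is dead ahead (azimuth ~ 0);
# after turning right to face east (yaw = 0) it sits 90 deg to the left.
print(head_relative_source((0.0, 3.0), (0.0, 0.0), math.radians(90)))
print(head_relative_source((0.0, 3.0), (0.0, 0.0), math.radians(0)))
```

In a real system the listener pose would come from the tracking pipeline (GPS or anchor-based positioning plus a head-mounted orientation sensor), and the (distance, azimuth) pair would feed a binaural or bone-conduction renderer; those components are outside the scope of this sketch.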

