
dc.contributor.author: Russell, Spencer
dc.contributor.author: Dublon, Gershon
dc.contributor.author: Paradiso, Joseph A.
dc.date.accessioned: 2021-11-02T13:49:05Z
dc.date.available: 2021-11-02T13:49:05Z
dc.date.issued: 2016-02-25
dc.identifier.uri: https://hdl.handle.net/1721.1/137076
dc.description.abstract: © 2016 ACM. In this paper we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone-conduction headphones to achieve seamless mixing of real and virtual sounds. To scale between indoor, urban, and natural environments, our system supports multi-scale location tracking, using fine-grained (20 cm) ultra-wideband (UWB) radio tracking when in range of our infrastructure anchors and mobile GPS otherwise. In our tests, users were able to navigate through an AAR scene and pinpoint audio source locations down to 1 m. We found that bone conduction is a viable technology for producing realistic spatial sound, and show that users' audio localization ability is considerably better in UWB coverage zones than with GPS alone. HearThere is a major step towards realizing our vision of networked sensory prosthetics, in which sensor networks serve as collective sensory extensions into the world around us. In our vision, AAR would be used to mix spatialized data sonification with distributed, livestreaming microphones. In this concept, HearThere promises a more expansive perceptual world, or umwelt, where sensor data becomes immediately attributable to extrinsic phenomena, externalized in the wearer's perception. We are motivated by two goals: first, to remedy a fractured state of attention caused by existing mobile and wearable technologies; and second, to bring the distant or often invisible processes underpinning a complex natural environment more directly into human consciousness. [en_US]
dc.language.iso: en
dc.publisher: ACM [en_US]
dc.relation.isversionof: 10.1145/2875194.2875247 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: HearThere: Networked Sensory Prosthetics Through Auditory Augmented Reality [en_US]
dc.title.alternative: Networked Sensory Prosthetics Through Auditory Augmented Reality [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Russell, Spencer, Dublon, Gershon and Paradiso, Joseph A. 2016. "HearThere: Networked Sensory Prosthetics Through Auditory Augmented Reality."
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2019-07-24T17:12:28Z
dspace.date.submission: 2019-07-24T17:12:30Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]

