
dc.contributor.author: Gan, Chuang
dc.contributor.author: Zhang, Yiwei
dc.contributor.author: Wu, Jiajun
dc.contributor.author: Gong, Boqing
dc.contributor.author: Tenenbaum, Joshua B
dc.date.accessioned: 2022-02-03T20:01:42Z
dc.date.available: 2021-12-07T19:51:18Z
dc.date.available: 2022-02-03T20:01:42Z
dc.date.issued: 2020-09
dc.date.submitted: 2020-05
dc.identifier.isbn: 978-1-7281-7395-5
dc.identifier.issn: 2577-087X
dc.identifier.uri: https://hdl.handle.net/1721.1/138365.2
dc.description.abstract: © 2020 IEEE. A crucial ability of mobile intelligent agents is to integrate the evidence from multiple sensory inputs in an environment and to make a sequence of actions to reach their goals. In this paper, we approach the problem of Audio-Visual Embodied Navigation: the task of planning the shortest path from a random starting location in a scene to the sound source in an indoor environment, given only raw egocentric visual and audio sensory data. To accomplish this task, the agent is required to learn from various modalities, i.e., relating the audio signal to the visual environment. Here we describe an approach to audio-visual embodied navigation that takes advantage of both visual and audio pieces of evidence. Our solution is based on three key ideas: a visual perception mapper module that constructs its spatial memory of the environment, a sound perception module that infers the relative location of the sound source from the agent, and a dynamic path planner that plans a sequence of actions based on the audio-visual observations and the spatial memory of the environment to navigate toward the goal. Experimental results on a newly collected Visual-Audio-Room dataset using the simulated multi-modal environment demonstrate the effectiveness of our approach over several competitive baselines.
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: http://dx.doi.org/10.1109/ICRA40945.2020.9197008
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Look, Listen, and Act: Towards Audio-Visual Embodied Navigation
dc.type: Article
dc.identifier.citation: Gan, Chuang, Zhang, Yiwei, Wu, Jiajun, Gong, Boqing and Tenenbaum, Joshua B. 2020. "Look, Listen, and Act: Towards Audio-Visual Embodied Navigation." Proceedings - IEEE International Conference on Robotics and Automation.
dc.contributor.department: MIT-IBM Watson AI Lab
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.relation.journal: 2020 IEEE International Conference on Robotics and Automation (ICRA)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-12-07T19:43:06Z
dspace.orderedauthors: Gan, C; Zhang, Y; Wu, J; Gong, B; Tenenbaum, JB
dspace.date.submission: 2021-12-07T19:43:07Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work Needed

