
dc.contributor.author    Rosinol, Antoni
dc.contributor.author    Violette, Andrew
dc.contributor.author    Abate, Marcus
dc.contributor.author    Hughes, Nathan
dc.contributor.author    Chang, Yun
dc.contributor.author    Shi, Jingnan
dc.contributor.author    Gupta, Arjun
dc.contributor.author    Carlone, Luca
dc.date.accessioned    2022-09-07T17:48:42Z
dc.date.available    2022-09-07T17:48:42Z
dc.date.issued    2021
dc.identifier.uri    https://hdl.handle.net/1721.1/145298
dc.description.abstract    Humans are able to form a complex mental model of the environment they move in. This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (e.g., objects, rooms, buildings), and includes static and dynamic entities and their relations (e.g., a person is in a room at a given time). In contrast, current robots' internal representations still provide a partial and fragmented understanding of the environment, either in the form of a sparse or dense set of geometric primitives (e.g., points, lines, planes, and voxels) or as a collection of objects. This article attempts to reduce the gap between robot and human perception by introducing a novel representation, a 3D dynamic scene graph (DSG), that seamlessly captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction and edges represent spatiotemporal relations among nodes. Our second contribution is Kimera, the first fully automatic method to build a DSG from visual–inertial data. Kimera includes accurate algorithms for visual–inertial simultaneous localization and mapping (SLAM), metric–semantic 3D reconstruction, object localization, human pose and shape estimation, and scene parsing. Our third contribution is a comprehensive evaluation of Kimera on real-life datasets and photo-realistic simulations, including a newly released dataset, uHumans2, which simulates a collection of crowded indoor and outdoor scenes. Our evaluation shows that Kimera achieves competitive performance in visual–inertial SLAM, estimates an accurate 3D metric–semantic mesh model in real time, and builds a DSG of a complex indoor environment with tens of objects and humans in minutes. Our final contribution is to showcase how to use a DSG for real-time hierarchical semantic path-planning. The core modules in Kimera have been released open source.    en_US
dc.language.iso    en
dc.publisher    SAGE Publications    en_US
dc.relation.isversionof    10.1177/02783649211056674    en_US
dc.rights    Creative Commons Attribution-Noncommercial-Share Alike    en_US
dc.rights.uri    http://creativecommons.org/licenses/by-nc-sa/4.0/    en_US
dc.source    arXiv    en_US
dc.title    Kimera: From SLAM to spatial perception with 3D dynamic scene graphs    en_US
dc.type    Article    en_US
dc.identifier.citation    Rosinol, Antoni, Violette, Andrew, Abate, Marcus, Hughes, Nathan, Chang, Yun et al. 2021. "Kimera: From SLAM to spatial perception with 3D dynamic scene graphs." International Journal of Robotics Research, 40 (12-14).
dc.contributor.department    Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.relation.journal    International Journal of Robotics Research    en_US
dc.eprint.version    Author's final manuscript    en_US
dc.type.uri    http://purl.org/eprint/type/JournalArticle    en_US
eprint.status    http://purl.org/eprint/status/PeerReviewed    en_US
dc.date.updated    2022-09-07T17:45:00Z
dspace.orderedauthors    Rosinol, A; Violette, A; Abate, M; Hughes, N; Chang, Y; Shi, J; Gupta, A; Carlone, L    en_US
dspace.date.submission    2022-09-07T17:45:15Z
mit.journal.volume    40    en_US
mit.journal.issue    12-14    en_US
mit.license    OPEN_ACCESS_POLICY
mit.metadata.status    Authority Work and Publication Information Needed    en_US
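
The abstract above describes a DSG as a layered graph whose nodes are spatial concepts at different levels of abstraction (e.g., objects, rooms, buildings) and whose edges are spatiotemporal relations (e.g., a person is in a room at a given time). The following is a minimal sketch of that idea in Python; all class names, layer names, and fields here are illustrative assumptions and do not reflect Kimera's actual open-source C++ API.

```python
# Minimal illustrative sketch of a layered 3D dynamic scene graph (DSG):
# nodes are spatial concepts at different abstraction levels, edges are
# spatiotemporal relations. Names are hypothetical, not Kimera's API.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Node:
    node_id: str
    layer: str                                # e.g. "object", "room", "building"
    position: Tuple[float, float, float]      # metric position in the world frame
    attributes: Dict[str, str] = field(default_factory=dict)


@dataclass
class Edge:
    source: str
    target: str
    relation: str                             # e.g. "contains", "is_in_at_t"


class DynamicSceneGraph:
    # Hypothetical layer hierarchy, from low-level geometry to high-level concepts.
    LAYERS = ["mesh", "object", "agent", "place", "room", "building"]

    def __init__(self) -> None:
        self.nodes: Dict[str, Node] = {}
        self.edges: List[Edge] = []

    def add_node(self, node: Node) -> None:
        if node.layer not in self.LAYERS:
            raise ValueError(f"unknown layer: {node.layer}")
        self.nodes[node.node_id] = node

    def add_edge(self, source: str, target: str, relation: str) -> None:
        self.edges.append(Edge(source, target, relation))


# Usage: a room containing an object, and a person (dynamic entity) in the room.
dsg = DynamicSceneGraph()
dsg.add_node(Node("room_1", "room", (0.0, 0.0, 0.0)))
dsg.add_node(Node("chair_1", "object", (1.2, 0.5, 0.0)))
dsg.add_node(Node("person_1", "agent", (2.0, 1.0, 0.0), {"t": "12.3s"}))
dsg.add_edge("room_1", "chair_1", "contains")
dsg.add_edge("person_1", "room_1", "is_in_at_t")
```

The layered structure is what enables the hierarchical semantic path-planning mentioned in the abstract: a planner can reason coarsely over rooms and buildings before refining a path over lower layers.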

