Show simple item record

dc.contributor.author: Bavle, Hriday
dc.contributor.author: De La Puente, Paloma
dc.contributor.author: How, Jonathan P
dc.contributor.author: Campoy, Pascual
dc.date.accessioned: 2021-10-27T20:30:02Z
dc.date.available: 2021-10-27T20:30:02Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/135936
dc.description.abstract: Indoor environments have an abundance of high-level semantic information, which can give robots a better understanding of their surroundings and reduce the uncertainty in their pose estimates. Although semantic information has proved to be useful, the research community faces several challenges in accurately perceiving, extracting, and utilizing such semantic information from the environment. In order to address these challenges, in this paper we present a lightweight, real-time visual semantic SLAM framework running on board aerial robotic platforms. This novel method combines low-level visual/visual-inertial odometry (VO/VIO) with geometric information corresponding to planar surfaces extracted from detected semantic objects. Extracting the planar surfaces from selected semantic objects provides enhanced robustness and makes it possible to rapidly and precisely improve the metric estimates, while generalizing to several object instances irrespective of their shape and size. Our graph-based approach can integrate several state-of-the-art VO/VIO algorithms along with state-of-the-art object detectors in order to estimate the complete 6DoF pose of the robot while simultaneously creating a sparse semantic map of the environment. No prior knowledge of the objects is required, which is a significant advantage over other works. We test our approach on a standard RGB-D dataset, comparing its performance with state-of-the-art SLAM algorithms. We also perform several challenging indoor experiments validating our approach in the presence of distinct environmental conditions, and furthermore test it on board an aerial robot. Video: https://vimeo.com/368217703 Released Code: https://bitbucket.org/hridaybavle/semantic_slam.git
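The graph-based fusion the abstract describes — correcting VO/VIO drift with geometric constraints from planar landmarks — can be illustrated with a deliberately simplified sketch. This is not the authors' released code: it is a 1-D toy pose graph (the `optimize` helper is hypothetical, NumPy only) in which drifting odometry factors and signed-distance observations of a single plane are fused by linear least squares.

```python
# Toy 1-D pose-graph sketch (NOT the paper's implementation): fuse drifting
# odometry with observations of one planar landmark via least squares.
import numpy as np

def optimize(odom, ranges, prior=0.0):
    """Jointly estimate poses x_0..x_N and a plane offset p, where
    odom[i]   ~ x_{i+1} - x_i   (relative odometry, may be biased)
    ranges[i] ~ p - x_i         (signed distance from pose i to the plane)
    """
    N = len(odom) + 1                       # number of poses
    rows = 1 + len(odom) + len(ranges)      # prior + odom + plane residuals
    A = np.zeros((rows, N + 1))             # unknowns: x_0..x_{N-1}, p
    b = np.zeros(rows)
    A[0, 0] = 1.0                           # prior factor anchoring x_0
    b[0] = prior
    row = 1
    for i, u in enumerate(odom):            # odometry factors
        A[row, i] = -1.0
        A[row, i + 1] = 1.0
        b[row] = u
        row += 1
    for i, r in enumerate(ranges):          # plane (semantic landmark) factors
        A[row, i] = -1.0
        A[row, N] = 1.0
        b[row] = r
        row += 1
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:N], sol[N]                  # optimized poses, plane offset
```

With biased odometry (e.g. 1.2 m steps when the true step is 1 m) and consistent plane observations, the optimized final pose lands much closer to the truth than dead reckoning would, which is the essence of using planar semantic landmarks to bound metric drift.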
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: 10.1109/ACCESS.2020.2983121
dc.rights: Creative Commons Attribution 4.0 International license
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: IEEE
dc.title: VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems
dc.type: Article
dc.contributor.department: Massachusetts Institute of Technology. Aerospace Controls Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.relation.journal: IEEE Access
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2021-04-30T15:16:00Z
dspace.orderedauthors: Bavle, H; De La Puente, P; How, JP; Campoy, P
dspace.date.submission: 2021-04-30T15:16:02Z
mit.journal.volume: 8
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

