Show simple item record

dc.contributor.advisor: Leonard, John J.
dc.contributor.author: Huang, Qiangqiang
dc.date.accessioned: 2023-11-02T20:12:16Z
dc.date.available: 2023-11-02T20:12:16Z
dc.date.issued: 2023-09
dc.date.submitted: 2023-09-28T15:51:20.374Z
dc.identifier.uri: https://hdl.handle.net/1721.1/152738
dc.description.abstract: Robot perception is crucial both for fully autonomous systems, such as self-driving cars, and for human-centric devices such as mixed-reality glasses. While advances have been made in perception problems such as simultaneous localization and mapping (SLAM) and visual localization, the quest for self-diagnosable, robust systems capable of operating in large, complex environments continues. This thesis aims to improve self-diagnosis and robustness in robot perception by promoting continuous uncertainty reasoning in localization and mapping, particularly under limited and ambiguous observations of the world. We investigate scalable and expressive approximations of posterior distributions in SLAM, overcoming the limited expressivity of Gaussian approximations for representing the non-Gaussian posteriors that commonly arise. We harness the sparsity of factor graphs for scalability and employ diverse density approximations to enhance expressivity. In advancing SLAM algorithms, we make three contributions that provide unprecedented accuracy in describing posterior distributions, especially in highly non-Gaussian situations: 1) real-time inference of marginal posteriors by blending Gaussian approximations and particle filters, 2) incremental inference of the joint posterior by learning normalizing flows on the Bayes tree, and 3) reference solutions for full posterior inference via nested sampling. Additionally, we develop a streaming platform that connects mobile devices and servers through web applications to run live demonstrations of object-based SLAM, featuring the sharing of mapping results among online peers and continuous visualization of localization and mapping uncertainty. We also introduce a novel application of full posterior inference to uncertainty-aware robot perception, evaluating camera-pose localizability to pinpoint visual localization challenges in 3D scenes. Using this framework, we optimize fiducial-marker placements in 3D environments, boosting localization rates by 20%.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Scalable Full Posterior Inference for Uncertainty-Aware Robot Perception
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.identifier.orcid: https://orcid.org/0000-0001-9079-0824
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy
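The abstract's central claim — that Gaussian approximations cannot represent the non-Gaussian posteriors common in SLAM — can be illustrated with a toy example. The sketch below is not from the thesis; it is a minimal illustration under assumed values (a 1D robot, a single range-only measurement) of how an ambiguous observation yields a bimodal posterior that a single Gaussian would collapse, while a particle set preserves both modes.

```python
import math
import random

# Toy 1D analogue of an ambiguous SLAM observation: a range-only
# measurement |x| ~ 2.0 cannot distinguish x = +2 from x = -2,
# so the posterior over the robot position x is bimodal.
# All numbers here are illustrative assumptions, not thesis results.

def log_posterior(x, measured_range=2.0, sigma=0.3):
    # Broad zero-mean Gaussian prior over position.
    log_prior = -0.5 * (x / 5.0) ** 2
    # Gaussian likelihood on the absolute distance (the ambiguity source).
    log_lik = -0.5 * ((abs(x) - measured_range) / sigma) ** 2
    return log_prior + log_lik

# Self-normalized importance sampling from a broad uniform proposal,
# a crude stand-in for the particle-based marginals in the abstract.
random.seed(0)
particles = [random.uniform(-6.0, 6.0) for _ in range(20000)]
weights = [math.exp(log_posterior(x)) for x in particles]
total = sum(weights)
weights = [w / total for w in weights]

# A single Gaussian fit would center near the posterior mean (~0),
# a region of near-zero probability; the particles keep both modes.
mean = sum(w * x for w, x in zip(weights, particles))
mass_pos = sum(w for w, x in zip(weights, particles) if x > 0)
print(f"posterior mean ~ {mean:.2f}; probability mass at x > 0 ~ {mass_pos:.2f}")
```

The mean lands near zero while roughly half the mass sits in each mode, which is exactly the failure case that motivates the particle, normalizing-flow, and nested-sampling approximations listed in the abstract.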

