
dc.contributor.advisor	How, Jonathan P.
dc.contributor.advisor	Karaman, Sertac
dc.contributor.advisor	Gilitschenski, Igor
dc.contributor.author	Tagliabue, Andrea
dc.date.accessioned	2024-06-27T19:46:23Z
dc.date.available	2024-06-27T19:46:23Z
dc.date.issued	2024-05
dc.date.submitted	2024-05-28T19:36:31.450Z
dc.identifier.uri	https://hdl.handle.net/1721.1/155345
dc.description.abstract	Existing robust model predictive control (MPC) and vision-based state estimation algorithms for agile flight, while achieving impressive performance, still demand significant onboard computation, preventing deployment on robots with tight Cost, Size, Weight, and Power (CSWaP) constraints. Existing imitation learning strategies that can train computationally efficient deep neural network policies from those algorithms have limited robustness and/or are impractical (requiring many demonstrations and long training times), preventing rapid policy learning once new mission specifications or flight data become available. This thesis details efficient imitation learning strategies that make policy learning from MPC more practical while preserving robustness to uncertainties. First, this thesis contributes a method for efficiently learning trajectory tracking policies from robust MPC, enabling a policy that achieves real-world robustness to be learned from a single real-world or simulated mission. Second, it presents a strategy for learning from MPCs with time-varying operating points, exploiting nonlinear models and enabling acrobatic flight. The obtained policy has an onboard inference time of only 15 𝜇s and can perform a flip on a UAV subject to uncertainties. Third, it extends the previous approaches to vision-based policies, enabling onboard sensing-to-action with millisecond-level latency and reducing the computational cost of vision-based state estimation, while using data from a single real-world mission. Fourth, it presents a method to reduce control errors under uncertainties, demonstrating rapid adaptation to unexpected failures and uncertainties while avoiding the challenging reward design and tuning of existing methods. Finally, this thesis evaluates the proposed contributions in simulation and on hardware, including flights on an insect-scale (sub-gram), soft-actuated, flapping-wing UAV. The methods developed in this thesis achieve the world’s first deployment of policies learned from MPC on sub-gram, soft-actuated aerial robots.
dc.publisher	Massachusetts Institute of Technology
dc.rights	In Copyright - Educational Use Permitted
dc.rights	Copyright retained by author(s)
dc.rights.uri	https://rightsstatements.org/page/InC-EDU/1.0/
dc.title	Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
dc.type	Thesis
dc.description.degree	Ph.D.
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
mit.thesis.degree	Doctoral
thesis.degree.name	Doctor of Philosophy
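
For readers unfamiliar with the core technique referenced in the abstract, below is a minimal, self-contained sketch of DAgger-style imitation learning from an MPC expert, written in Python with NumPy. The toy double-integrator dynamics, the fixed state-feedback "expert" standing in for a robust MPC solver, and the linear policy are all illustrative assumptions; this is not the thesis's actual algorithm, models, or code.

import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy discrete-time double integrator (assumption)
B = np.array([[0.0], [0.1]])

def expert_action(x):
    # Stand-in for a robust MPC solve: a fixed state-feedback law (assumption).
    K = np.array([[3.0, 2.5]])
    return -K @ x

def rollout(policy, x0, steps=50):
    xs, x = [], x0.copy()
    for _ in range(steps):
        xs.append(x.copy())
        u = policy(x)
        x = A @ x + (B @ u).ravel() + 0.01 * rng.standard_normal(2)
    return xs

states, actions = [], []                  # dataset aggregated across iterations
W = np.zeros((1, 2))                      # linear policy u = W x
policy = lambda x, W=W: W @ x

for it in range(5):
    # Roll out the current learner policy and label visited states with the expert.
    for _ in range(10):
        x0 = rng.uniform(-1.0, 1.0, size=2)
        for x in rollout(policy, x0):
            states.append(x)
            actions.append(expert_action(x).ravel())
    X, U = np.asarray(states), np.asarray(actions)
    # Supervised regression (least squares) of expert actions on visited states.
    W = np.linalg.lstsq(X, U, rcond=None)[0].T
    policy = lambda x, W=W: W @ x
    print(f"iter {it}: mean |u_expert - u_policy| = {np.mean(np.abs(U - X @ W.T)):.4f}")

The point mirrored here is dataset aggregation: the learner's own rollouts are relabeled by the expert, so the policy is trained on the state distribution it actually visits rather than only on expert trajectories.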

