Robust Flight Navigation with Liquid Neural Networks
Author(s)
Kao, Patrick
Advisor
Rus, Daniela L.
Abstract
Autonomous robots can learn to perform visual navigation tasks from offline human demonstrations, and generalize well to online and unseen scenarios within the same environment in which they were trained. It is fundamentally challenging for these intelligent agents to take a step further and robustly generalize to new environments with drastic scenery changes they have never encountered before. Here, we present a method to create robust flight navigation agents that successfully perform vision-based fly-to-target tasks beyond their training environment, under drastic distribution shifts. To this end, we design an imitation learning framework utilizing liquid neural networks, a brain-inspired class of continuous-time neural models that are causal and adapt to changing conditions. We observe that liquid agents learn to distill the task they are given from visual inputs and drop irrelevant features. This way, they transfer their learned navigation skills to new environments. When compared to other advanced deep agents, we confirm that this level of robustness in decision-making is exclusive to liquid networks, in both their differential-equation and closed-form representations.
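For orientation, the sketch below shows the general shape of a liquid time-constant (LTC) cell update, the continuous-time model class underlying liquid neural networks. It is a simplified illustration, not the thesis implementation: the sigmoid nonlinearity, the single explicit Euler step used as the solver, and all parameter names and shapes are assumptions chosen for brevity.

```python
# Minimal sketch of a liquid time-constant (LTC) cell update. Illustrative only:
# the nonlinearity f, the solver (one explicit Euler step), and the parameter
# shapes are simplifying assumptions, not the thesis code.
import numpy as np

class LTCCell:
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_inputs))   # input weights
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)                            # bias of f
        self.tau = np.ones(n_hidden)                           # base time constants
        self.A = rng.normal(0, 0.1, n_hidden)                  # state bias vector

    def f(self, x, u):
        # Bounded nonlinearity that gates the liquid time constant.
        return 1.0 / (1.0 + np.exp(-(self.W_rec @ x + self.W_in @ u + self.b)))

    def step(self, x, u, dt=0.05):
        # One Euler step of dx/dt = -(1/tau + f(x,u)) * x + f(x,u) * A.
        # The effective time constant depends on the current input, which is
        # what makes the dynamics "liquid" and able to adapt to changing conditions.
        fx = self.f(x, u)
        dxdt = -(1.0 / self.tau + fx) * x + fx * self.A
        return x + dt * dxdt

# Usage: roll the cell over a short sequence of (hypothetical) feature vectors.
cell = LTCCell(n_inputs=8, n_hidden=16)
x = np.zeros(16)
for u in np.random.default_rng(1).normal(size=(10, 8)):
    x = cell.step(x, u)
print(x.shape)  # (16,)
```

In practice such a cell sits on top of a convolutional perception backbone in an imitation-learning pipeline; the closed-form (CfC) representation mentioned in the abstract replaces the ODE solve with an analytic approximation of the same dynamics.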
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology