Learning to Plan and Planning to Learn in Long-Horizon Robotics Tasks
Author(s)
Kumar, Nishanth Jay
Advisor
Kaelbling, Leslie Pack
Lozano-Pérez, Tomás
Abstract
A longstanding goal of robotics research has been to produce a single agent capable of solving a variety of useful long-horizon tasks, such as making a cup of tea or tidying up a living room, in multiple different environments (i.e., in any household). In recent years, two dominant paradigms have emerged for constructing such a system: end-to-end model-free learning and model-based planning. Approaches from both paradigms have produced impressive isolated results, but each paradigm has significant limitations. Learning-based approaches often require impractical amounts of data and struggle to generalize beyond the data and tasks they were trained on. Planning-based approaches depend on models, which often require significant manual engineering to define, especially as the number and complexity of tasks of interest grow. This thesis proposes a set of approaches that attempt to overcome these limitations by combining aspects of both paradigms. Specifically, we leverage learning to automate the process of designing planning models, and leverage planning to efficiently and autonomously collect the data needed for learning. Experiments on a variety of simulated and real-robot domains illustrate that this combination of learning to plan and planning to learn could be a promising approach to enabling robots to solve complex, long-horizon tasks at scale.
Date issued
2024-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology