Learning to guide task and motion planning
Author(s)
Kim, Beomjoon.
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Leslie Pack Kaelbling and Tomás Lozano-Pérez.
Abstract
How can we enable robots to efficiently reason both at the discrete task level and the continuous motion level to achieve high-level goals such as tidying up a room or constructing a building? This is a challenging problem that requires integrated reasoning about the combinatorial aspects of the problem, such as deciding which object to manipulate, and the continuous aspects, such as finding collision-free manipulation motions.

The classical robotics approach is to design a planner that, given an initial state, a goal, and a transition model, computes a plan. The advantage of this approach is its immense generalization capability: for any given state and goal, a planner will find a solution if one exists. The inherent drawback, however, is that a planner typically does not make use of planning experience, and computes a plan from scratch every time it encounters a new problem. For complex problems, this renders planners extremely inefficient.

Alternatively, we can take a pure learning approach, in which the system learns, from either reinforcement signals or demonstrations, a policy that maps states to actions. The advantage of this approach is that computing the next action to execute becomes much cheaper than pure planning, because it amounts to making a prediction with a function approximator. The drawback, however, is that it is brittle: if the policy encounters a state very different from those in the training set, it is likely to make mistakes and may get into a situation from which it does not know how to proceed.

Our approach is to take the middle ground between these two extremes. More concretely, this thesis introduces several algorithms that learn to guide a planner from planning experience. We propose state representations, neural network architectures, and data-efficient algorithms for learning to perform both task- and motion-level reasoning using neural networks. We then use these neural networks to guide a planner, and show that it performs more efficiently than pure planning and pure learning algorithms.
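To make the "learning to guide a planner" idea concrete, here is a minimal sketch of one common way a learned model can guide search: a best-first planner whose node-expansion order is ranked by a scoring function. The function `learned_score` below is a hypothetical stand-in for a neural network trained on past planning experience; the thesis's actual state representations and architectures are far more involved, and this toy domain (integer states, step actions) is purely illustrative.

```python
import heapq

def guided_search(start, goal, successors, learned_score):
    """Best-first search that expands states the guidance scores as most promising.

    `learned_score(state, goal)` plays the role of a trained network:
    higher scores mean the state is judged closer to a solution.
    """
    # heapq pops the smallest item, so negate the score for max-first ordering.
    frontier = [(-learned_score(start, goal), start, [start])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-learned_score(nxt, goal), nxt, path + [nxt]))
    return None  # guidance exhausted the space without finding the goal

if __name__ == "__main__":
    # Toy domain: states are integers, actions add 1 or 2.
    # A hand-written score that prefers states near the goal stands in
    # for the learned model.
    plan = guided_search(0, 7,
                         lambda s: [s + 1, s + 2],
                         lambda s, g: -abs(g - s))
    print(plan)  # a valid path from 0 to 7
```

The key property illustrated here is that the planner's completeness is preserved (search continues even if the guidance is poor), while a well-trained scoring function steers expansion toward promising states, cutting the number of nodes examined.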
Description
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2020. Cataloged from the student-submitted PDF of the thesis. Includes bibliographical references (pages [113]-124).
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.