We aim to enable robots to act intelligently in complex environments not explicitly designed around them. To do so, robots can simplify decision making by forming hierarchical abstractions of their world and planning within those representations. In reality, however, the abstractions robots are able to build are often poorly aligned with the planning problems they must solve, which limits how useful those abstractions can be for efficient decision making. For example, autonomous agents struggle in many real-world scenarios, particularly when their environments are large, cluttered with obstructions, or beset by uncertainty. Under these conditions, decisions made at higher levels of abstraction may not be easily refined into low-level plans, leading to backtracking during either search or execution. In this thesis, we present contributions that improve the efficiency and quality of long-horizon hierarchical planning in robotics. Specifically, we propose approaches that explicitly reason about the imperfections of the abstractions available to robots during planning, and we show how those methods improve performance across a variety of tasks and environments.
This thesis makes contributions in three primary settings. First, we consider the problem of solving tasks in partially revealed environments, wherein abstract plans cannot be known to be feasible until execution is attempted because the world is not fully known at planning time. To solve this problem, we develop a high-level planning representation which recognizes that actions entering unknown space can either succeed or fail with some probability. The first contribution of this work is to learn to predict the feasibility and cost of actions within that abstraction from visual input. We also describe a planning method that uses these predictions, and we show experimentally that the resulting plans complete tasks in unknown environments significantly faster than those of heuristic-driven baselines.

Next, we discuss work in Task and Motion Planning (TAMP), where the world is fully known but the problems require such complex interaction with the environment that search must be guided intelligently to find plans efficiently. We build upon our work in the first setting by once again learning to predict the outcome and cost of different sub-tasks within a TAMP abstraction. We further contribute a novel method that guides search in this setting toward plans which minimize cost given our learned predictions, and we demonstrate that it finds faster plans than established TAMP approaches, both in simulation and on real-world robots.

In our final problem setting, we consider solving TAMP problems in large-scale, real-world environments. To do this, we define an approach for constructing tractable planning abstractions from real perception using hierarchical scene graphs, ensuring that when abstract plans are refined within these representations, the resulting low-level trajectories still satisfy the given task’s constraints. A major contribution of this work is an approach for planning efficiently in these domains by pruning provably superfluous information from the world model.

The unifying aim of the work in this thesis is to develop approaches that enable robots to solve complex tasks in large-scale, real-world environments without human intervention. To that end, across all contributions, we demonstrate experimentally on real robots the importance of accounting for imperfections in hierarchical abstraction during planning.