Enabling Semantically Grounded, Long Horizon Planning and Execution for Autonomous Agents
Author(s)
Covarrubias, Lucian
Advisor
Williams, Brian C.
Abstract
Robots are playing an ever-increasing role in complex environments, often in coordination with teams of systems or humans. Autonomous systems of the future will need to be tightly grounded in the real world, drawing information directly from their environment to develop an understanding of the world. They will need to maintain a semantic understanding of their environment, including the kinds of objects they observe and the relationships between them. At the same time, they must be able to reason over diverse constraints related to their tasks, such as time limits and resource usage. While there are existing approaches that enable robots to execute tasks with semantic goals, such as finding a certain type of object in a room, they often fail to consider the multitude of task-specific constraints that are vital to robust performance. On the other hand, planners that consider task-specific constraints require a human to provide all information about the environment manually. These systems are too cumbersome for modeling complex tasks, requiring hours of manual, error-prone effort. This thesis presents an architecture for semantically grounded planning that leverages the strengths of constraint-based planners while automating the environmental modeling step with an advanced semantic perception engine. By automating environmental modeling, we create a system that executes complex, semantically grounded tasks, such as navigating to certain objects within a certain room, without the major user input typically required of such systems.
Date issued
2025-02
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology