Compositional RL Agents That Follow Language Commands in Temporal Logic
Author(s)
Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei
Download: frobt-08-689550.pdf (1.572 MB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
We demonstrate how a reinforcement learning agent can use compositional recurrent neural networks to learn to carry out commands specified in linear temporal logic (LTL). Our approach takes as input an LTL formula, structures a deep network according to the parse of the formula, and determines satisfying actions. This compositional structure of the network enables zero-shot generalization to significantly more complex unseen formulas. We demonstrate this ability in multiple problem domains with both discrete and continuous state-action spaces. In a symbolic domain, the agent finds a sequence of letters that satisfies a specification. In a Minecraft-like environment, the agent finds a sequence of actions that conforms to a formula. In the Fetch environment, the robot finds a sequence of arm configurations that moves blocks on a table to fulfill the commands. While most prior work can learn to execute one formula reliably, we develop a novel form of multi-task learning for RL agents that allows them to learn from a diverse set of tasks and generalize to a new set of diverse tasks without any additional training. The compositional structures presented here are not specific to LTL, thus opening the path to RL agents that perform zero-shot generalization in other compositional domains.
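To illustrate the idea of structuring a network according to the parse of an LTL formula, the following is a minimal sketch, not the authors' released code. All names here (PredicateModule, OperatorModule, build_network, the toy formula and dimensions) are hypothetical, and for brevity only predicate nodes carry recurrent state; the model described in the paper differs in detail.

    import torch
    import torch.nn as nn

    class PredicateModule(nn.Module):
        """Recurrent encoder for one atomic proposition (e.g. 'at_goal')."""
        def __init__(self, state_dim, hidden_dim):
            super().__init__()
            self.cell = nn.GRUCell(state_dim, hidden_dim)

        def forward(self, state, h=None):
            # GRUCell initializes the hidden state to zeros when h is None.
            return self.cell(state, h)

    class OperatorModule(nn.Module):
        """Combines child embeddings for one LTL operator (e.g. 'and', 'until')."""
        def __init__(self, n_children, hidden_dim):
            super().__init__()
            self.combine = nn.Linear(n_children * hidden_dim, hidden_dim)

        def forward(self, child_embeddings):
            return torch.relu(self.combine(torch.cat(child_embeddings, dim=-1)))

    def build_network(parse, state_dim, hidden_dim, modules):
        """Mirror an LTL parse tree with one module per node.

        `parse` is a nested tuple, e.g. ('until', ('pred', 'safe'), ('pred', 'at_goal')).
        Returns a function mapping (state, hidden-state dict) -> formula embedding.
        """
        if parse[0] == 'pred':
            module = PredicateModule(state_dim, hidden_dim)
            modules.append(module)
            def run(state, hiddens, key=parse):
                hiddens[key] = module(state, hiddens.get(key))
                return hiddens[key]
            return run
        children = [build_network(c, state_dim, hidden_dim, modules) for c in parse[1:]]
        module = OperatorModule(len(children), hidden_dim)
        modules.append(module)
        def run(state, hiddens):
            return module([child(state, hiddens) for child in children])
        return run

    # Usage: the root embedding would feed a downstream policy head.
    formula = ('until', ('pred', 'safe'), ('pred', 'at_goal'))
    modules = nn.ModuleList()
    encode = build_network(formula, state_dim=16, hidden_dim=32, modules=modules)
    state = torch.zeros(1, 16)      # one observation step, batch size 1
    embedding = encode(state, {})   # shape (1, 32)

Because the network is assembled per formula from shared predicate and operator modules, an unseen formula reuses already-trained modules in a new arrangement, which is the mechanism behind the zero-shot generalization claimed in the abstract.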
Date issued
2021-07
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Frontiers in Robotics and AI
Publisher
Frontiers Media SA
Citation
Kuo, Yen-Ling et al. "Compositional RL Agents That Follow Language Commands in Temporal Logic." Frontiers in Robotics and AI 8 (July 2021): 689550. © 2021 Kuo et al.
Version: Final published version
ISSN
2296-9144