Reinforcement Learning for Mapping Instructions to Actions
Author(s): Branavan, Satchuthanan R.; Chen, Harr; Zettlemoyer, Luke S.; Barzilay, Regina
Abstract: In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains: Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples.
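The core training loop described above (sample an action sequence from a log-linear policy, execute it, observe a reward, and take a policy gradient step) can be illustrated with a minimal sketch. This is not the paper's implementation: the toy "environment" below, where a hypothetical target action sequence stands in for correct instruction execution, the one-hot features, and all hyperparameters are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STEPS, N_ACTIONS = 4, 3
TARGET = [0, 2, 1, 2]          # hypothetical "correct" action sequence
N_FEATS = N_STEPS * N_ACTIONS  # one indicator feature per (step, action) pair

def feats(step, action):
    """Toy feature vector phi(s, a): one-hot on the (step, action) pair."""
    f = np.zeros(N_FEATS)
    f[step * N_ACTIONS + action] = 1.0
    return f

def action_probs(theta, step):
    """Log-linear policy: pi(a | s) proportional to exp(theta . phi(s, a))."""
    scores = np.array([theta @ feats(step, a) for a in range(N_ACTIONS)])
    scores -= scores.max()          # for numerical stability
    e = np.exp(scores)
    return e / e.sum()

def run_episode(theta):
    """Sample an action sequence; reward = fraction of steps matching TARGET."""
    actions, grad = [], np.zeros(N_FEATS)
    for t in range(N_STEPS):
        p = action_probs(theta, t)
        a = int(rng.choice(N_ACTIONS, p=p))
        actions.append(a)
        # Gradient of log pi(a | s): phi(s, a) - E_pi[phi(s, .)]
        expected = sum(p[b] * feats(t, b) for b in range(N_ACTIONS))
        grad += feats(t, a) - expected
    reward = float(np.mean([a == g for a, g in zip(actions, TARGET)]))
    return reward, grad

theta, lr = np.zeros(N_FEATS), 0.5
for _ in range(500):
    reward, grad = run_episode(theta)
    theta += lr * reward * grad     # REINFORCE-style policy gradient update

# The greedy policy should now recover the target sequence.
learned = [int(np.argmax(action_probs(theta, t))) for t in range(N_STEPS)]
print(learned)
```

Because the reward is higher for episodes that match more of the target sequence, the expected update increases the score of each target action at its step, so the greedy policy converges to the target sequence without any annotated state-action pairs, only the scalar reward signal.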
Department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP
Association for Computational Linguistics
Branavan, S.R.K., Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay (2009). "Reinforcement learning for mapping instructions to actions." Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (Morristown, N.J.: Association for Computational Linguistics): 82-90. © Association for Computational Linguistics.
Author's final manuscript
Keywords: algorithms, design, experimentation, languages, measurement, performance