
Reinforcement learning with natural language signals

Author(s)
Sidor, Szymon
Download
Full printable version (8.739 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Brian C. Williams.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
In this thesis we introduce a technique that allows one to use Natural Language as part of the state in Reinforcement Learning. We show that it is capable of solving Natural Language problems, similar to Sequence-to-Sequence models, but using multi-stage reasoning. We use Long Short-Term Memory networks to parse the Natural Language input, whose final hidden state is used to compute action scores for the Deep Q-learning algorithm. The first part of the thesis introduces the necessary theoretical background, including the Deep Learning approach to Natural Language Processing, Recurrent Neural Networks, and Sequence-to-Sequence modeling. We consider two case studies: translation and dialogue. In addition, we provide an overview of existing techniques for Reinforcement Learning problems, with a focus on the Deep Q-learning algorithm. In the second part of the thesis we present the multi-stage reasoning approach and demonstrate it by solving the sentence unshuffling problem. It achieves accuracy 5% higher than a Sequence-to-Sequence model, while requiring three times fewer examples to converge. Furthermore, we show that our approach is flexible and can be used with multi-modal inputs: Natural Language and the agent's sensory data. We propose a system capable of understanding and executing Natural Language commands. It can be used for many different tasks with minimal engineering effort; the only required components are the reward function and example commands. We demonstrate its performance in an experiment in which an agent is required to learn to complete four types of manipulation tasks. The approach achieves nearly perfect performance on two of them and good performance on the other two.
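As a rough illustration of the architecture the abstract describes (an LSTM encoder whose final hidden state is mapped to action scores for Deep Q-learning), the sketch below shows one way such a model could be wired up in PyTorch. It is a hypothetical sketch, not code from the thesis; all names, layer sizes, and the action count are assumptions.

# Hypothetical sketch: an LSTM encodes the natural-language input and its
# final hidden state is projected to Q-values over a discrete action set.
# Sizes and names are assumptions, not taken from the thesis.
import torch
import torch.nn as nn

class LanguageQNetwork(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_actions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded natural-language state
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)      # h_n: (1, batch, hidden_dim)
        return self.q_head(h_n.squeeze(0))     # Q-values: (batch, num_actions)

# Usage (hypothetical): greedy action selection as in Deep Q-learning.
net = LanguageQNetwork(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 12))       # one 12-token command
action = net(tokens).argmax(dim=-1)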
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 79-83).
 
Date issued
2016
URI
http://hdl.handle.net/1721.1/103745
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
