Simple item record

dc.contributor.advisor: Brian C. Williams. (en_US)
dc.contributor.author: Sidor, Szymon (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. (en_US)
dc.date.accessioned: 2016-07-18T20:05:52Z
dc.date.available: 2016-07-18T20:05:52Z
dc.date.copyright: 2016 (en_US)
dc.date.issued: 2016 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/103745
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 79-83). (en_US)
dc.description.abstract: In this thesis we introduce a technique that allows one to use Natural Language as part of the state in Reinforcement Learning. We show that it is capable of solving Natural Language problems, similarly to Sequence-to-Sequence models, but using multi-stage reasoning. We use a Long Short-Term Memory network to parse the Natural Language input; its final hidden state is used to compute action scores for the Deep Q-learning algorithm. The first part of the thesis introduces the necessary theoretical background, including the Deep Learning approach to Natural Language Processing, Recurrent Neural Networks, and Sequence-to-Sequence modeling. We consider two case studies: translation and dialogue. In addition, we provide an overview of existing techniques for Reinforcement Learning problems, with a focus on the Deep Q-learning algorithm. In the second part of the thesis we present the multi-stage reasoning approach and demonstrate it on the sentence unshuffling problem, where it achieves 5% higher accuracy than a Sequence-to-Sequence model while requiring three times fewer examples to converge. Furthermore, we show that our approach is flexible and can be used with multi-modal inputs: Natural Language together with the agent's sensory data. We propose a system capable of understanding and executing Natural Language commands. It can be applied to many different tasks with minimal engineering effort; the only required components are a reward function and example commands. We demonstrate its performance in an experiment in which an agent must learn to complete four types of manipulation tasks. The approach achieves nearly perfect performance on two of them and good performance on the other two. (en_US)
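For illustration only, below is a minimal PyTorch sketch of the architecture the abstract describes: an LSTM reads a tokenized Natural Language input, and its final hidden state is mapped by a linear layer to one Q-value per action. This sketch is not taken from the thesis; the class name, layer sizes, vocabulary size, and number of actions are all hypothetical placeholders.

import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    """Illustrative sketch: encode a tokenized command with an LSTM and map
    the final hidden state to per-action Q-values. Names and sizes are
    assumptions, not the thesis's actual hyperparameters."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_actions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)   # h_n: (1, batch, hidden_dim)
        return self.q_head(h_n.squeeze(0))  # (batch, num_actions) Q-values

# Example: score actions for a batch of two 5-token commands.
net = LSTMQNetwork(vocab_size=1000)
q_values = net(torch.randint(0, 1000, (2, 5)))
greedy_actions = q_values.argmax(dim=1)  # epsilon-greedy would add exploration

In a Deep Q-learning loop, these Q-values would be trained against the usual temporal-difference target; the sketch covers only the forward pass that turns language into action scores.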
dc.description.statementofresponsibility: by Szymon Sidor. (en_US)
dc.format.extent: 83 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Reinforcement learning with natural language signals (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 953582881 (en_US)

