
dc.contributor.author: Kulkarni, Tejas D.
dc.contributor.author: Narasimhan, Karthik Rajagopal
dc.contributor.author: Saeedi, Ardavan
dc.contributor.author: Tenenbaum, Joshua B
dc.date.accessioned: 2017-12-14T15:46:13Z
dc.date.available: 2017-12-14T15:46:13Z
dc.date.issued: 2016-12
dc.identifier.uri: http://hdl.handle.net/1721.1/112755
dc.description.abstract: Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve external goals. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep reinforcement learning. A top-level Q-value function learns a policy over intrinsic goals, while a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse and delayed feedback: (1) a complex discrete decision process with stochastic transitions, and (2) the classic ATARI game 'Montezuma's Revenge'.
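
The two-level scheme the abstract describes can be made concrete with a small sketch. The following is a minimal tabular Q-learning illustration, not the paper's implementation (the paper trains deep Q-networks with experience replay); the chain environment, its reward constants, and the helper names env_step and eps_greedy are all illustrative assumptions. A meta-controller value table q_meta learns which intrinsic goal to pursue from extrinsic reward, while a controller table q_ctrl learns to reach the chosen goal from intrinsic reward.

import random
from collections import defaultdict

random.seed(0)

# Illustrative chain environment (an assumption, not the paper's exact
# setup): states 0..5, start at 1, terminal at 0. Moving right succeeds
# only with probability 0.5 (stochastic transitions); reaching the
# terminal pays 1.0 if state 5 was visited first, and 0.01 otherwise.
N_STATES, START, TERMINAL, BONUS = 6, 1, 0, 5
ACTIONS = (-1, +1)

def env_step(s, a):
    if a == +1 and random.random() < 0.5:
        a = -1                                # "right" may slip left
    return min(max(s + a, 0), N_STATES - 1)

EPS, ALPHA, GAMMA = 0.1, 0.1, 0.95
q_meta = defaultdict(float)  # Q2(state, goal): extrinsic value of picking a goal
q_ctrl = defaultdict(float)  # Q1(state, goal, action): value toward reaching the goal

def eps_greedy(value_fn, choices):
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=value_fn)

for _ in range(5000):
    s, visited_bonus = START, False
    while s != TERMINAL:
        # Top level: the meta-controller picks an intrinsic goal
        # (here simply a state to reach).
        g = eps_greedy(lambda g2: q_meta[(s, g2)], list(range(N_STATES)))
        s0, ext_sum, steps = s, 0.0, 0
        # Low level: the controller takes atomic actions and is
        # rewarded intrinsically for reaching g.
        while s != TERMINAL and s != g and steps < 50:
            a = eps_greedy(lambda a2: q_ctrl[(s, g, a2)], list(ACTIONS))
            s2 = env_step(s, a)
            visited_bonus = visited_bonus or s2 == BONUS
            if s2 == TERMINAL:
                ext_sum += 1.0 if visited_bonus else 0.01
            intrinsic = 1.0 if s2 == g else 0.0
            target = intrinsic + GAMMA * max(q_ctrl[(s2, g, b)] for b in ACTIONS)
            q_ctrl[(s, g, a)] += ALPHA * (target - q_ctrl[(s, g, a)])
            s, steps = s2, steps + 1
        # The meta-controller learns from the extrinsic reward that
        # accumulated while the controller pursued g.
        target = ext_sum + GAMMA * max(q_meta[(s, g2)] for g2 in range(N_STATES))
        q_meta[(s0, g)] += ALPHA * (target - q_meta[(s0, g)])

best = max(range(N_STATES), key=lambda g2: q_meta[(START, g2)])
print("preferred first goal from the start state:", best)

Under these assumptions the meta-controller typically learns to target the bonus state before heading to the terminal, a toy analogue of how intrinsic goals can provide an efficient exploration space when the extrinsic reward is sparse and delayed.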
dc.publisher: Neural Information Processing Systems Foundation
dc.relation.isversionof: https://papers.nips.cc/paper/6233-hierarchical-deep-reinforcement-learning-integrating-temporal-abstraction-and-intrinsic-motivation
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Neural Information Processing Systems (NIPS)
dc.title: Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation
dc.type: Article
dc.identifier.citation: Kulkarni, Tejas D. et al. "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation." Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain, December 5-10, 2016. © 2016 Neural Information Processing Systems Foundation
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.mitauthor: Narasimhan, Karthik Rajagopal
dc.contributor.mitauthor: Saeedi, Ardavan
dc.contributor.mitauthor: Tenenbaum, Joshua B
dc.relation.journal: Advances in Neural Information Processing Systems 29 (NIPS 2016)
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2017-12-08T14:38:12Z
dspace.orderedauthors: Kulkarni, Tejas D.; Narasimhan, Karthik; Saeedi, Ardavan; Tenenbaum, Josh
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0001-9894-9983
dc.identifier.orcid: https://orcid.org/0000-0002-4616-8250
dc.identifier.orcid: https://orcid.org/0000-0002-1925-2035
mit.license: PUBLISHER_POLICY

