Show simple item record

dc.contributor.advisor	Joshua B. Tenenbaum.	en_US
dc.contributor.author	Kulkarni, Tejas Dattatraya	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.	en_US
dc.date.accessioned	2017-03-20T19:39:55Z
dc.date.available	2017-03-20T19:39:55Z
dc.date.copyright	2016	en_US
dc.date.issued	2016	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/107557
dc.description	Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2016.	en_US
dc.description	Cataloged from PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 117-129).	en_US
dc.description.abstract	I argue that the intersection of deep learning, hierarchical reinforcement learning, and generative models provides a promising avenue towards building agents that learn to produce goal-directed behavior given sensations. I present models and algorithms that learn from raw observations, with an emphasis on minimizing their sample complexity and the number of training steps required for convergence. To this end, I introduce hierarchical variants of deep reinforcement learning algorithms, which produce and utilize temporally extended abstractions over actions. I also present a hybrid model-free and model-based deep reinforcement learning model, which could additionally be used to automatically extract subgoals for bootstrapping temporal abstractions. I will then present a model-based approach for perception, which unifies deep learning and probabilistic models, to learn powerful representations of images without labeled data or external rewards.

Learning goal-directed behavior with sparse and delayed rewards is a fundamental challenge for reinforcement learning algorithms. The primary difficulty arises from insufficient exploration, which leaves the agent unable to learn robust value functions. I present the Deep Hierarchical Reinforcement Learning (h-DQN) approach, which integrates hierarchical value functions operating at different time scales with goal-driven, intrinsically motivated behavior for efficient exploration. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems; such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. h-DQN allows for flexible goal specifications, such as functions over entities and relations, which provides an efficient space for exploration in complicated environments. I will demonstrate h-DQN's ability to learn optimal behavior given raw pixels in environments with very sparse and delayed feedback.

I will then introduce the Deep Successor Reinforcement (DSR) learning approach, a hybrid model-free and model-based RL algorithm. It learns the value function of a state by taking the inner product between the state's expected future feature occupancy and the corresponding immediate rewards. This factorization of the value function has several appealing properties: increased sensitivity to changes in the reward structure and, potentially, the ability to automatically extract subgoals for learning temporal abstractions.

Finally, I argue for the need for better representations of images, both in reinforcement learning tasks and in general. Existing deep learning approaches learn useful representations given large amounts of labeled data or rewards. Moreover, they lack the inductive biases needed to disentangle causal structure in images, such as objects, shape, pose, and other intrinsic scene properties. I present generative models of vision, often referred to as analysis-by-synthesis approaches, that combine deep generative methods with probabilistic modeling. This approach aims to learn structured representations of images given raw observations. I argue that such intermediate representations will be crucial to scaling up deep reinforcement learning algorithms and to bridging the gap between machine and human learning.	en_US
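The hierarchical control scheme summarized in the abstract can be read as a two-level loop: a meta-controller that selects goals and is trained on extrinsic reward over extended time scales, and a controller that selects primitive actions and is trained on an intrinsic reward for reaching the current goal. The following is a minimal tabular sketch of such a loop; the toy chain environment, the goal set, and every name and hyperparameter in it are illustrative assumptions, not the thesis implementation.

# Minimal, illustrative two-level (meta-controller / controller) loop;
# not the thesis code. Toy chain MDP and all hyperparameters are hypothetical.
import random
from collections import defaultdict

GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1
N_STATES, GOALS, ACTIONS = 10, [4, 9], [-1, +1]   # chain MDP; a goal means "reach state g"

q_meta = defaultdict(float)   # Q(state, goal): long time scale, extrinsic reward
q_ctrl = defaultdict(float)   # Q((state, goal), action): short time scale, intrinsic reward

def eps_greedy(q, keys):
    """Epsilon-greedy choice over candidate keys, breaking ties at random."""
    if random.random() < EPS:
        return random.choice(keys)
    best = max(q[k] for k in keys)
    return random.choice([k for k in keys if q[k] == best])

def env_step(s, a):
    """Toy chain: move left/right; sparse extrinsic reward only at the last state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, float(s2 == N_STATES - 1)

for episode in range(500):
    s, done = 0, False
    while not done:
        g = eps_greedy(q_meta, [(s, gg) for gg in GOALS])[1]       # meta-controller picks a goal
        s0, ext_return, t = s, 0.0, 0
        while True:                                                # controller pursues the goal
            a = eps_greedy(q_ctrl, [((s, g), aa) for aa in ACTIONS])[1]
            s2, r_ext = env_step(s, a)
            r_int = float(s2 == g)                                 # intrinsic reward: goal reached
            boot = max(q_ctrl[((s2, g), aa)] for aa in ACTIONS)
            q_ctrl[((s, g), a)] += ALPHA * (r_int + GAMMA * boot - q_ctrl[((s, g), a)])
            ext_return += (GAMMA ** t) * r_ext
            s, t = s2, t + 1
            done = (s == N_STATES - 1)
            if r_int == 1.0 or done:
                break
        boot_meta = 0.0 if done else max(q_meta[(s, gg)] for gg in GOALS)
        q_meta[(s0, g)] += ALPHA * (ext_return + (GAMMA ** t) * boot_meta - q_meta[(s0, g)])

The DSR value factorization mentioned in the abstract can be written compactly in standard successor-representation notation; the symbols below (phi, psi, w) are assumed for illustration and are not defined in this record.

% Sketch of the value-function factorization described in the abstract,
% in standard successor-representation notation (illustrative symbols):
\begin{align*}
  R(s) &\approx \phi(s) \cdot w
      && \text{immediate reward as a linear function of state features} \\
  \psi^{\pi}(s) &= \mathbb{E}_{\pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t} \phi(s_t) \,\Big|\, s_0 = s \Big]
      && \text{expected discounted future feature occupancy} \\
  V^{\pi}(s) &= \psi^{\pi}(s) \cdot w
      && \text{value as an inner product of the two factors}
\end{align*}

Under such a factorization, a change in the reward structure can be absorbed by re-estimating w while reusing the learned occupancies psi, which is one way to read the abstract's claim of increased sensitivity to changes in the reward structure.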
dc.description.statementofresponsibility	by Tejas Dattatraya Kulkarni.	en_US
dc.format.extent	129 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Brain and Cognitive Sciences.	en_US
dc.title	Learning structured representations for perception and control	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph. D.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.identifier.oclc	974640245	en_US

