
dc.contributor.advisor    Joshua B. Tenenbaum.    en_US
dc.contributor.author    Tsividis, Pedro A.    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.    en_US
dc.date.accessioned    2019-07-18T20:31:54Z
dc.date.available    2019-07-18T20:31:54Z
dc.date.copyright    2019    en_US
dc.date.issued    2019    en_US
dc.identifier.uri    https://hdl.handle.net/1721.1/121813
dc.description    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019    en_US
dc.description    Cataloged from PDF version of thesis.    en_US
dc.description    Includes bibliographical references (pages 123-130).    en_US
dc.description.abstract    Humans are remarkable in their ability to rapidly learn complex tasks from little experience. Recent successes in AI have produced algorithms that can perform complex tasks well in environments whose simple dynamics are known in advance, as well as models that can learn to perform expertly in unknown environments after a great amount of experience. Despite this, no current AI models are able to learn sufficiently rich and general representations to support rapid, human-level learning on new, complex tasks. This thesis examines some of the epistemic practices, representations, and algorithms that we believe underlie humans' ability to quickly learn about their world and to deploy that understanding to achieve their aims. In particular, the thesis examines humans' ability to effectively query their environment for information that helps distinguish between competing hypotheses (Chapter 2); children's ability to use higher-level amodal features of data to match causes and effects (Chapter 3); and adult humans' rapid-learning abilities in artificial video-game environments (Chapter 4). The thesis culminates by presenting and testing a model, inspired by human inductive biases and epistemic practices, that learns to perform complex video-game tasks at human levels with human-level amounts of experience (Chapter 5). The model is an instantiation of a more general approach, Theory-Based Reinforcement Learning, which we believe can underlie the development of human-level agents that may eventually learn and act adaptively in the real world.    en_US
dc.description.statementofresponsibility    by Pedro A. Tsividis.    en_US
dc.format.extent    130 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Brain and Cognitive Sciences.    en_US
dc.title    Theory-based learning in humans and machines    en_US
dc.type    Thesis    en_US
dc.description.degree    Ph. D.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences    en_US
dc.identifier.oclc    1103712391    en_US
dc.description.collection    Ph.D. Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences    en_US
dspace.imported    2019-07-18T20:31:52Z    en_US
mit.thesis.degree    Doctoral    en_US
mit.thesis.department    Brain    en_US

