
dc.contributor.advisor    Tomaso Poggio.    en_US
dc.contributor.author    Lewis, Owen, Ph. D. Massachusetts Institute of Technology.    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.    en_US
dc.date.accessioned    2019-07-18T20:31:33Z
dc.date.available    2019-07-18T20:31:33Z
dc.date.copyright    2019    en_US
dc.date.issued    2019    en_US
dc.identifier.uri    https://hdl.handle.net/1721.1/121810
dc.description    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019    en_US
dc.description    Cataloged from PDF version of thesis.    en_US
dc.description    Includes bibliographical references (pages 91-100).    en_US
dc.description.abstract    Neural networks and probabilistic models have different and in many ways complementary strengths and weaknesses: neural networks are flexible and support efficient inference, but rely on large quantities of labeled training data; probabilistic models can learn from fewer examples, but in many cases remain limited by time-consuming inference algorithms. Both classes of models therefore have drawbacks that limit their engineering applications and prevent them from being fully satisfying as process models of human learning. This thesis aims to address this state of affairs from both directions, exploring case studies in which we make neural networks learn from less data and in which we design more efficient inference procedures for generative models. First, we explore recurrent neural networks that learn list-processing procedures (sort, reverse, etc.) and show how ideas from type theory and programming language theory can be used to design a data augmentation scheme that enables effective learning from small datasets. Next, we show how error-driven proposal mechanisms can speed up stochastic search for generative model inversion, first developing a symbolic model for inferring Boolean functions and Horn clause theories, and then a general-purpose neural network model for performing inference in continuous domains such as inverse graphics.    en_US
dc.description.statementofresponsibility    by Owen Lewis.    en_US
dc.format.extent    100 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Brain and Cognitive Sciences.    en_US
dc.title    Structured learning and inference with neural networks and generative models    en_US
dc.type    Thesis    en_US
dc.description.degree    Ph. D.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences    en_US
dc.identifier.oclc    1103712241    en_US
dc.description.collection    Ph.D. Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences    en_US
dspace.imported    2019-07-18T20:31:30Z    en_US
mit.thesis.degree    Doctoral    en_US
mit.thesis.department    Brain    en_US

