Structured learning and inference with neural networks and generative models

Author(s)
Lewis, Owen, Ph.D., Massachusetts Institute of Technology
Download: 1103712241-MIT.pdf (10.29 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.
Advisor
Tomaso Poggio.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Neural networks and probabilistic models have different and in many ways complementary strengths and weaknesses: neural networks are flexible and support efficient inference, but rely on large quantities of labeled training data. Probabilistic models can learn from fewer examples, but in many cases remain limited by time-consuming inference algorithms. Thus, each class of models has drawbacks that limit its engineering applications and prevent it from being fully satisfying as a process model of human learning. This thesis aims to address this state of affairs from both directions, exploring case studies in which we make neural networks that learn from less data, and in which we design more efficient inference procedures for generative models. First, we explore recurrent neural networks that learn list-processing procedures (sort, reverse, etc.), and show how ideas from type theory and programming language theory can be used to design a data augmentation scheme that enables effective learning from small datasets. Next, we show how error-driven proposal mechanisms can speed up stochastic search for generative model inversion, first developing a symbolic model for inferring Boolean functions and Horn clause theories, and then a general-purpose neural network model for doing inference in continuous domains such as inverse graphics.
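The type-theoretic intuition behind the augmentation scheme can be illustrated with a small sketch (the function names and details here are illustrative assumptions, not the thesis's actual code): a polymorphic list procedure such as reverse treats its elements as opaque values, so relabeling the elements of any known (input, output) pair yields another valid training pair for free.

```python
import random

def augment_polymorphic_examples(pairs, n_new, symbols):
    """Illustrative augmentation for a polymorphic list task (e.g. reverse).

    Because the target procedure never inspects element identity, any
    injective relabeling of a known (input, output) pair produces a new,
    equally valid example. (Hypothetical helper, not from the thesis.)
    """
    augmented = []
    for _ in range(n_new):
        inp, out = random.choice(pairs)
        # Choose fresh symbols and build an injective relabeling map
        # over the distinct elements of this example.
        elems = list(dict.fromkeys(inp))
        fresh = random.sample(symbols, len(elems))
        relabel = dict(zip(elems, fresh))
        augmented.append(([relabel[x] for x in inp],
                          [relabel[x] for x in out]))
    return augmented

# One seed example of the reverse task expands into many.
seed = [([3, 1, 2], [2, 1, 3])]
new_pairs = augment_polymorphic_examples(seed, 5, list(range(100)))
for inp, out in new_pairs:
    assert inp[::-1] == out  # relabeling preserves the reverse relation
```

Sort would need the extra constraint that the relabeling be order-preserving; the general point is that the type of the procedure dictates which transformations of the data are guaranteed to be label-preserving.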
Description
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 91-100).
 
Date issued
2019
URI
https://hdl.handle.net/1721.1/121810
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Publisher
Massachusetts Institute of Technology
Keywords
Brain and Cognitive Sciences.

Collections
  • Doctoral Theses
