Structured learning and inference with neural networks and generative models
Author(s): Lewis, Owen, Ph.D., Massachusetts Institute of Technology
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.
Neural networks and probabilistic models have different and in many ways complementary strengths and weaknesses: neural networks are flexible and support efficient inference, but rely on large quantities of labeled training data, while probabilistic models can learn from fewer examples but in many cases remain limited by time-consuming inference algorithms. Thus, each class of model has drawbacks that limit its engineering applications and prevent it from being fully satisfying as a process model of human learning. This thesis addresses this state of affairs from both directions, exploring case studies in which we build neural networks that learn from less data, and in which we design more efficient inference procedures for generative models. First, we explore recurrent neural networks that learn list-processing procedures (sort, reverse, etc.), and show how ideas from type theory and programming language theory can be used to design a data augmentation scheme that enables effective learning from small datasets. Next, we show how error-driven proposal mechanisms can speed up stochastic search for generative model inversion, first developing a symbolic model for inferring Boolean functions and Horn clause theories, and then a general-purpose neural network model for doing inference in continuous domains such as inverse graphics.
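The type-theoretic augmentation idea can be illustrated with a minimal sketch (hypothetical helper names, not the thesis code). Parametricity-style reasoning licenses different augmentations for different procedures: `reverse` is fully parametric in its elements, so it commutes with any relabeling of tokens, while `sort` commutes only with order-preserving maps, such as an additive shift over integers. Each property turns one labeled example into many.

```python
import random

def augment_reverse(pairs, n_new, vocab):
    # reverse is parametric in element type: reverse(map(g, xs)) == map(g, reverse(xs))
    # for ANY relabeling g, so a random token substitution yields a new valid pair.
    out = []
    for _ in range(n_new):
        xs, ys = random.choice(pairs)
        g = {v: random.choice(vocab) for v in set(xs)}
        out.append(([g[x] for x in xs], [g[y] for y in ys]))
    return out

def augment_sort(pairs, n_new, shift_range=10):
    # sort commutes only with monotone maps: sort(map(g, xs)) == map(g, sort(xs))
    # when g is order-preserving; here g is a random additive shift.
    out = []
    for _ in range(n_new):
        xs, ys = random.choice(pairs)
        k = random.randint(-shift_range, shift_range)
        out.append(([x + k for x in xs], [y + k for y in ys]))
    return out
```

Every augmented pair is guaranteed consistent with the target procedure by construction, which is what lets a small seed dataset stand in for a much larger one.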
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 91-100).
Department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences