
The acquisition of inductive constraints

Research and Teaching Output of the MIT Community


dc.contributor.advisor Joshua Tenenbaum. en_US
dc.contributor.author Kemp, Charles, Ph. D. Massachusetts Institute of Technology en_US
dc.contributor.other Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences. en_US
dc.date.accessioned 2008-09-02T17:59:07Z
dc.date.available 2008-09-02T17:59:07Z
dc.date.copyright 2008 en_US
dc.date.issued 2008 en_US
dc.description Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008. en_US
dc.description This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. en_US
dc.description Includes bibliographical references (p. 197-216). en_US
dc.description.abstract Human learners routinely make inductive inferences, or inferences that go beyond the data they have observed. Inferences like these must be supported by constraints, some of which are innate, although others are almost certainly learned. This thesis presents a hierarchical Bayesian framework that helps to explain the nature, use, and acquisition of inductive constraints. Hierarchical Bayesian models include multiple levels of abstraction, and the representations at the upper levels place constraints on the representations at the lower levels. The probabilistic nature of these models allows them to make statistical inferences at multiple levels of abstraction. In particular, they show how knowledge can be acquired at levels quite remote from the data of experience--levels where the representations learned are naturally described as inductive constraints. Hierarchical Bayesian models can address inductive problems from many domains, but this thesis focuses on models that address three aspects of high-level cognition. The first model is sensitive to patterns of feature variability, and acquires constraints similar to the shape bias in word learning. The second model acquires causal schemata--systems of abstract causal knowledge that allow learners to discover causal relationships given very sparse data. The final model discovers the structural form of a domain--for instance, it discovers whether the relationships between a set of entities are best described by a tree, a chain, a ring, or some other kind of representation. The hierarchical Bayesian approach captures several principles that go beyond traditional formulations of learning theory. en_US
dc.description.abstract (cont.) It supports learning at multiple levels of abstraction, it handles structured representations, and it helps to explain how learning can succeed given sparse and noisy data. Principles like these are needed to explain how humans acquire rich systems of knowledge, and hierarchical Bayesian models point the way towards a modern learning theory that is better able to capture the sophistication of human learning. en_US
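The abstract's first model learns an inductive constraint from patterns of feature variability. A minimal sketch of that general mechanism, not the thesis's actual models, is a two-level Beta-Binomial "bags of marbles" setup: seeing that every observed bag is uniform in colour shifts posterior mass, at the upper level, toward "bags are homogeneous", which then licenses strong generalisation from a single marble in a new bag. All parameter names, grids, and data here are illustrative assumptions.

```python
import math
from itertools import product

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_betabinom(k, n, a, b):
    """Log P(k successes in n draws) when the rate theta ~ Beta(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + log_beta(k + a, n - k + b) - log_beta(a, b))

# Observed bags of marbles: (black count, bag size). Every bag is uniform
# in colour -- upper-level evidence that bags in general are homogeneous.
bags = [(10, 10), (0, 10), (10, 10), (0, 10)]

# Upper level: each bag's black-marble rate theta_i ~ Beta(s*m, s*(1-m)).
# Small strength s means bags tend to be all one colour; m is the overall
# proportion of black across bags. Coarse grid with a uniform prior.
s_grid = [0.1, 1.0, 10.0, 100.0]
m_grid = [i / 10 for i in range(1, 10)]

log_post = {}
for s, m in product(s_grid, m_grid):
    a, b = s * m, s * (1 - m)
    log_post[(s, m)] = sum(log_betabinom(k, n, a, b) for k, n in bags)

# Normalise to a posterior over the hyperparameters (s, m).
top = max(log_post.values())
post = {sm: math.exp(lp - top) for sm, lp in log_post.items()}
norm = sum(post.values())
post = {sm: p / norm for sm, p in post.items()}

# The learned constraint: most posterior mass sits on low-variability s.
p_low_var = sum(p for (s, _), p in post.items() if s <= 1.0)

# Generalisation from a NEW bag after a single black marble: the new
# bag's posterior rate is Beta(s*m + 1, s*(1-m)), whose mean gives
# P(next marble black), averaged over the hyperparameter posterior.
pred = sum(p * (s * m + 1) / (s + 1) for (s, m), p in post.items())

print(f"P(low variability) = {p_low_var:.2f}")
print(f"P(next black | one black seen) = {pred:.2f}")
```

With four uniform bags, nearly all posterior mass falls on small s, so a single black marble in a new bag yields a confident prediction that the rest of the bag is black -- the constraint, not just the rate, has been learned from data.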
dc.description.statementofresponsibility by Charles Kemp. en_US
dc.format.extent 216 p. en_US
dc.language.iso eng en_US
dc.publisher Massachusetts Institute of Technology en_US
dc.rights M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. en_US
dc.rights.uri en_US
dc.subject Brain and Cognitive Sciences. en_US
dc.title The acquisition of inductive constraints en_US
dc.type Thesis en_US
dc.description.degree Ph.D. en_US
dc.contributor.department Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences. en_US
dc.identifier.oclc 238611597 en_US

Files in this item

Name Size Format Description
238611597-MIT.pdf 1.720Mb PDF Full printable version
