Show simple item record

dc.contributor.advisor: Joshua Tenenbaum (en_US)
dc.contributor.author: Schmidt, Lauren A. (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences. (en_US)
dc.date.accessioned: 2010-04-28T17:11:12Z
dc.date.available: 2010-04-28T17:11:12Z
dc.date.issued: 2009 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/54624
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. (en_US)
dc.description: "September 2009." Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (p. 191-201). (en_US)
dc.description.abstract: What do words and phrases mean? How do we infer their meanings in a given context? How do we know which combinations of words have sensible meanings and which are nonsense? As language learners and speakers, we solve these problems from a young age, but as scientists, our understanding of these processes is limited. This thesis addresses these questions using a computational approach. Bayesian modeling provides a method of combining categories and logical constraints with probabilistic inference, yielding word and phrase meanings that involve graded category memberships and are governed by probabilistically inferred structures. The Bayesian approach also makes it possible to separately identify the prior beliefs a language user brings to a particular situation involving meaning-based inference (e.g., learning a word meaning or identifying which objects an adjective applies to in a given context) and what the language user can infer from the context itself. It therefore also provides a foundation for investigating how different prior beliefs affect what a language user infers in a given situation, and how prior beliefs develop over time. Using a computational approach, I address the following questions: (1) How do people generalize about a word's meaning from limited evidence? (2) How do people understand and use phrases, particularly when some of the words in those phrases depend on context for interpretation? (3) How do people know and learn which combinations of predicates and noun phrases are sensible and which are nonsensical? (en_US)
dc.description.abstract: (cont.) I show how each of these topics involves the probabilistic induction of categories, and I examine the constraints on inference in each domain. I also explore which of these constraints may themselves be learned. (en_US)
dc.description.statementofresponsibility: by Lauren A. Schmidt. (en_US)
dc.format.extent: 201 p. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Brain and Cognitive Sciences. (en_US)
dc.title: Meaning and compositionality as statistical induction of categories and constraints (en_US)
dc.type: Thesis (en_US)
dc.description.degree: Ph.D. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.identifier.oclc: 601820947 (en_US)

