DSpace@MIT

Meaning and compositionality as statistical induction of categories and constraints

Author(s)
Schmidt, Lauren A
Download: Full printable version (19.51 MB)
Other Contributors
Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences.
Advisor
Joshua Tenenbaum.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
What do words and phrases mean? How do we infer their meaning in a given context? How do we know which sets of words have sensible meanings when combined, as opposed to being nonsense? As language learners and speakers, we can solve these problems from a young age, but as scientists, our understanding of these processes is limited. This thesis addresses these questions using a computational approach. Bayesian modeling provides a way to combine categories and logical constraints with probabilistic inference, yielding word and phrase meanings that involve graded category memberships and are governed by probabilistically inferred structures. The Bayesian approach also makes it possible to separately identify the prior beliefs a language user brings to a particular situation involving meaning-based inference (e.g., learning a word meaning or identifying which objects an adjective applies to in a given context) and what the language user can infer from the context itself. It therefore also provides a foundation for investigating how different prior beliefs affect what a language user infers in a given situation, and how prior beliefs can develop over time. Within this framework, I address the following questions: (1) How do people generalize about a word's meaning from limited evidence? (2) How do people understand and use phrases, particularly when some of the words in those phrases depend on context for interpretation? (3) How do people know, and learn, which combinations of predicates and noun phrases are sensible and which are nonsensical?
 
I show how each of these topics involves the probabilistic induction of categories, and I examine the constraints on inference in each domain. I also explore which of these constraints may themselves be learned.
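
To make question (1) concrete, the sketch below shows one simple, hypothetical form that Bayesian word-meaning generalization can take: a learner scores a small set of candidate category extensions for a novel word against a few labeled examples and returns graded judgments about new objects. This is an illustrative toy in the spirit of size-principle models associated with this line of work, not the model developed in the thesis; the hypothesis space, prior values, and object names are invented for the example.

"""
Illustrative sketch (not the thesis's actual model): Bayesian generalization
of a novel word's meaning from a few labeled examples. The hypothesis space,
prior values, and object names below are made up for this example.
"""

# Candidate extensions (hypotheses) for a novel word, from specific to general.
HYPOTHESES = {
    "dalmatians": {"dalmatian1", "dalmatian2", "dalmatian3"},
    "dogs":       {"dalmatian1", "dalmatian2", "dalmatian3", "poodle1", "terrier1"},
    "animals":    {"dalmatian1", "dalmatian2", "dalmatian3", "poodle1", "terrier1",
                   "cat1", "horse1", "parrot1"},
}

# Hypothetical prior: mild preference for the basic-level category.
PRIOR = {"dalmatians": 0.25, "dogs": 0.5, "animals": 0.25}


def posterior(examples):
    """Posterior over hypotheses given objects labeled with the novel word.

    Likelihood uses the size principle: examples are assumed to be sampled
    uniformly from the word's true extension, so P(example | h) = 1/|h| if
    the example is in h, else 0. Smaller consistent hypotheses therefore
    gain weight as more examples arrive.
    """
    scores = {}
    for name, ext in HYPOTHESES.items():
        if all(x in ext for x in examples):
            scores[name] = PRIOR[name] * (1.0 / len(ext)) ** len(examples)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}


def p_word_applies(obj, examples):
    """Graded judgment that the word applies to a new object:
    the total posterior mass on hypotheses whose extension contains it."""
    post = posterior(examples)
    return sum(p for name, p in post.items() if obj in HYPOTHESES[name])


if __name__ == "__main__":
    print(posterior(["dalmatian1"]))
    print(posterior(["dalmatian1", "dalmatian2", "dalmatian3"]))
    print(p_word_applies("poodle1", ["dalmatian1", "dalmatian2", "dalmatian3"]))

With one example the broad "animals" hypothesis keeps noticeable posterior mass, while three dalmatian examples concentrate belief on the narrowest consistent category; this evidence-dependent, graded behavior is the kind of inference the abstract describes.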
 
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
 
"September 2009." Cataloged from PDF version of thesis.
 
Includes bibliographical references (p. 191-201).
 
Date issued
2009
URI
http://hdl.handle.net/1721.1/54624
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Publisher
Massachusetts Institute of Technology
Keywords
Brain and Cognitive Sciences.

Collections
  • Doctoral Theses
