DSpace@MIT

Multi-level models of language comprehension in the mind and brain

Author(s)
Gauthier, Jon
Download: Thesis PDF (3.670 MB)
Advisor
Levy, Roger P.
Terms of use
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0). Copyright retained by author(s). https://creativecommons.org/licenses/by-sa/4.0/
Abstract
What are the mental and neural representations that drive language understanding and acquisition? This thesis presents a two-part suite of methods for addressing these questions, rooted in the idea that representational claims must assert not only a content (what a mental representation is about) but also a computational role (why it is there, and what concrete function it serves in bringing about language behavior). The first part of the thesis explores the computational role of syntactic and semantic representations in language acquisition and use. I instantiate a theory of syntactic bootstrapping, demonstrating through computational simulations how correspondences between the syntactic behaviors of words and their meanings can be exploited to efficiently construct a lexicon. This modeling work recapitulates classical dynamics of language learning exhibited by children acquiring their first language and, more broadly, presents an expanded view of the computational role of these representational systems.

The second part of the thesis addresses the neural side of these questions. I take a critical view of the present model-based cognitive neuroscience of language, arguing that some popular evaluation paradigms are limited in the types of claims about representational content they can safely support. I then present two case studies of a path forward, both exploiting measures drawn from modern large language models (LLMs). The first designs controlled interventions on LLMs' internal representational contents and tests the consequences of these interventions in a brain mapping evaluation. We apply this method in an fMRI brain decoding study, which yields findings about the time-course of human syntactic representations. The second study integrates an LLM into a structured model of auditory word recognition, designed from the start for interpretability. I apply this model to explain EEG data recorded as subjects listened to naturalistic English speech. The model enables us to discover distinct neural traces of how humans recognize and integrate the meanings of words in real time. I conclude by discussing the implications of these findings for the mental computations that drive online language comprehension.
Date issued
2023-06
URI
https://hdl.handle.net/1721.1/152560
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
