Show simple item record

dc.contributor.advisor: Levy, Roger P.
dc.contributor.author: Hu, Jennifer
dc.date.accessioned: 2023-10-30T20:04:19Z
dc.date.available: 2023-10-30T20:04:19Z
dc.date.issued: 2023-06
dc.date.submitted: 2023-10-17T14:43:30.272Z
dc.identifier.uri: https://hdl.handle.net/1721.1/152578
dc.description.abstract: Language is one of the hallmarks of intelligence, demanding explanation in a theory of human cognition. However, language presents unique practical challenges for quantitative empirical research, making many linguistic theories difficult to test at naturalistic scales. Artificial neural network language models (LMs) provide a new tool for studying language with mathematical precision and control, as they exhibit remarkably sophisticated linguistic behaviors while being fully intervenable. While LMs differ from humans in many ways, the learning outcomes of these models can reveal the behaviors that may emerge through expressive statistical learning algorithms applied to linguistic input. In this thesis, I demonstrate this approach through three case studies using LMs to investigate open questions in language acquisition and comprehension. First, I use LMs to perform controlled manipulations of language learning, and find that syntactic generalizations depend more on a learner's inductive bias than on training data size. Second, I use LMs to explain systematic variation in scalar inferences by approximating human listeners' expectations over unspoken alternative sentences (e.g., "The bill was supported overwhelmingly" implies that the bill was not supported unanimously). Finally, I show that LMs and humans exhibit similar behaviors on a set of non-literal comprehension tasks which are hypothesized to require social reasoning (e.g., inferring a speaker's intended meaning from ironic statements). These findings suggest that certain aspects of linguistic knowledge could emerge through domain-general prediction mechanisms, while other aspects may require specific inductive biases and conceptual structures.
dc.publisher: Massachusetts Institute of Technology
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Neural language models and human linguistic knowledge
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy


