Using Computational Models to Test Syntactic Learnability
Author(s)
Wilcox, Ethan Gotlieb; Futrell, Richard; Levy, Roger
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We study the learnability of English filler–gap dependencies and the “island” constraints on them by assessing the generalizations made by autoregressive (incremental) language models that use deep learning to predict the next word given preceding context. Using factorial tests inspired by experimental psycholinguistics, we find that models acquire not only the basic contingency between fillers and gaps, but also the unboundedness and hierarchical constraints implicated in the dependency. We evaluate a model’s acquisition of island constraints by demonstrating that its expectation for a filler–gap contingency is attenuated within an island environment. Our results provide empirical evidence against the Argument from the Poverty of the Stimulus for this particular structure.
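The factorial tests mentioned in the abstract cross the presence of a filler with the presence of a gap and compare the model's surprisal (negative log probability) at the critical region across the four conditions. As a rough illustration of that difference-in-differences logic — with invented surprisal values and a formulation that is a plausible sketch, not necessarily the paper's exact definition — the interaction can be computed as:

```python
def licensing_interaction(s_filler_gap, s_filler_nogap,
                          s_nofiller_gap, s_nofiller_nogap):
    """2x2 interaction of filler presence and gap presence on surprisal.

    Each argument is the surprisal (in bits or nats) a language model
    assigns at the critical region in one condition; the values passed
    in below are hypothetical, not taken from the paper.
    """
    # How much does a filler reduce surprisal when a gap is present?
    gap_effect = s_nofiller_gap - s_filler_gap
    # How much does a filler reduce surprisal when no gap is present?
    # (If the model has learned the contingency, a filler with no gap
    # should *raise* surprisal, making this term negative.)
    nogap_effect = s_nofiller_nogap - s_filler_nogap
    # A large positive interaction means the model expects a gap
    # specifically when a filler is present.
    return gap_effect - nogap_effect

# Hypothetical surprisals: filler+gap is easy (2.0), no-filler+gap is
# hard (8.0), filler+no-gap is penalized (6.0), baseline is 3.0.
interaction = licensing_interaction(2.0, 6.0, 8.0, 3.0)
print(interaction)  # → 9.0
```

On this logic, an island constraint would show up as an attenuated interaction when the gap site sits inside an island environment, relative to a non-island control.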
Date issued
2022
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Journal
Linguistic Inquiry
Publisher
MIT Press
Citation
Wilcox, Ethan Gotlieb, Futrell, Richard and Levy, Roger. 2022. "Using Computational Models to Test Syntactic Learnability." Linguistic Inquiry.
Version: Final published version