dc.contributor.author	Sprouse, Jon
dc.contributor.author	Yankama, Beracah
dc.contributor.author	Indurkhya, Sagar
dc.contributor.author	Fong, Sandiway
dc.contributor.author	Berwick, Robert C
dc.date.accessioned	2020-06-22T19:59:32Z
dc.date.available	2020-06-22T19:59:32Z
dc.date.issued	2018-09
dc.identifier.issn	0167-6318
dc.identifier.issn	1613-3676
dc.identifier.uri	https://hdl.handle.net/1721.1/125923
dc.description.abstract	In their recent paper, Lau, Clark, and Lappin explore the idea that the probability of the occurrence of word strings can form the basis of an adequate theory of grammar (Lau, Jey H., Alexander Clark & Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science 41(5): 1201–1241). To make their case, they present the results of correlating the output of several probabilistic models trained solely on naturally occurring sentences with the gradient acceptability judgments that humans report for ungrammatical sentences derived from roundtrip machine translation errors. In this paper, we first explore the logic of the Lau et al. argument, both in terms of the choice of evaluation metric (gradient acceptability) and the choice of test data set (machine translation errors on random sentences from a corpus). We then present our own series of studies intended to allow for a better comparison between LCL's models and existing grammatical theories. We evaluate two of LCL's probabilistic models (trigrams and a recurrent neural network) against three data sets (taken from journal articles, a textbook, and Chomsky's famous colorless-green-ideas sentence), using three evaluation metrics (LCL's gradience metric, a categorical version of that metric, and the experimental-logic metric used in the syntax literature). Our results suggest there are very real, measurable cost-benefit tradeoffs inherent in LCL's models across the three evaluation metrics. The gain in explanation of gradience (between 13% and 31% of gradience) is offset by losses in the other two metrics: a 43%–49% loss in coverage based on a categorical metric of explaining acceptability, and a loss of 12%–35% in explaining experimentally-defined phenomena. This suggests that anyone wishing to pursue LCL's models as competitors with existing syntactic theories must either be satisfied with this tradeoff or modify the models to capture the phenomena that are not currently captured.	en_US
dc.language.iso	en
dc.publisher	Walter de Gruyter GmbH	en_US
dc.relation.isversionof	http://dx.doi.org/10.1515/tlr-2018-0005	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	De Gruyter	en_US
dc.title	Colorless green ideas do sleep furiously: gradient acceptability and the nature of the grammar	en_US
dc.type	Article	en_US
dc.identifier.citation	Sprouse, Jon et al. "Colorless green ideas do sleep furiously: gradient acceptability and the nature of the grammar." Linguistic Review 35, 3 (September 2018): 575–599 © 2018 Walter de Gruyter	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.relation.journal	Linguistic Review	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2019-05-09T13:52:12Z
dspace.date.submission	2019-05-09T13:52:13Z
mit.journal.volume	35	en_US
mit.journal.issue	3	en_US
mit.metadata.status	Complete

