dc.contributor.advisor: Evelina Fedorenko and Noga Zaslavsky. [en_US]
dc.contributor.author: Rakocevic, Lara I. [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2021-05-24T19:52:42Z
dc.date.available: 2021-05-24T19:52:42Z
dc.date.copyright: 2021 [en_US]
dc.date.issued: 2021 [en_US]
dc.identifier.uri: https://hdl.handle.net/1721.1/130713
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2021 [en_US]
dc.description: Cataloged from the official PDF of thesis. [en_US]
dc.description: Includes bibliographical references (pages 55-58). [en_US]
dc.description.abstract: Recent research has seen the rise of powerful neural-network language models that are sufficiently computationally precise and neurally plausible to serve as a jumping-off point for understanding language processing in the brain. Because these models have been developed to optimize a similar objective (word prediction), their brain predictions are often correlated, even though the models differ along several architectural and conceptual dimensions. This poses a major challenge for testing which model features are most relevant for predicting language processing in the brain. Here, we address this challenge by synthesizing new sentence stimuli that maximally expose the disagreement between the predictions of a set of language models ('controversial stimuli') and that would not naturally occur in large language corpora. To do so, we develop a platform that systematizes this sentence-synthesis process, providing a way to test different model-based hypotheses easily and efficiently. An initial exploration with this platform has begun to give us intuition for how the choice of candidate-word pools affects the kinds of sentences produced, and for which kinds of changes tend to produce controversial sentences. For example, we show that the disagreement score, i.e., the maximum amount of disagreement between models for a sentence, converges. This approach will eventually allow us to determine which models perform in the most human-like way and are most successful in predicting language processing in the brain, hopefully leading to insights into the mechanisms of human language understanding. [en_US]
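The core idea the abstract describes, i.e. scoring candidate sentences by how much a set of language models disagree about them and keeping the most controversial one, can be sketched in miniature as follows. This is an illustrative assumption-laden sketch, not the thesis's actual method: the toy unigram models, their word probabilities, and the log-probability-gap score are all stand-ins for the neural language models and disagreement metric used in the work.

```python
# A minimal sketch of "controversial stimulus" selection, assuming a
# toy setup: each model is a unigram table mapping words to
# probabilities, and disagreement is the absolute gap between the two
# models' sentence log-probabilities. Real work would use neural LMs.
import math

def sentence_logprob(model, sentence):
    # Sum per-word log-probabilities under a toy unigram model;
    # unknown words get a small floor probability.
    return sum(math.log(model.get(w, 1e-6)) for w in sentence.split())

def disagreement(model_a, model_b, sentence):
    # Disagreement score: absolute difference of the two models'
    # log-probabilities for the same sentence.
    return abs(sentence_logprob(model_a, sentence)
               - sentence_logprob(model_b, sentence))

# Two hypothetical models that agree about most words but disagree
# sharply about "colorless".
model_a = {"colorless": 0.4, "green": 0.3, "ideas": 0.3}
model_b = {"colorless": 0.001, "green": 0.5, "ideas": 0.499}

# A controversial-stimulus search keeps the candidate sentence with
# the highest disagreement score.
candidates = ["green ideas", "colorless green ideas"]
best = max(candidates, key=lambda s: disagreement(model_a, model_b, s))
print(best)
```

In the thesis's framing, a synthesis procedure would iteratively edit candidate sentences to drive this score up, which is why the observed convergence of the maximum disagreement score is informative about the search process.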
dc.description.statementofresponsibility: by Lara I. Rakocevic. [en_US]
dc.format.extent: 58 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Synthesizing controversial sentences for testing the brain-predictivity of language models [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.oclc: 1251801747 [en_US]
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science [en_US]
dspace.imported: 2021-05-24T19:52:41Z [en_US]
mit.thesis.degree: Master [en_US]
mit.thesis.department: EECS [en_US]

