Show simple item record

dc.contributor.advisor	Joshua B. Tenenbaum.	en_US
dc.contributor.author	Hartman, William R., M. Eng. Massachusetts Institute of Technology.	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2019-12-05T18:07:33Z
dc.date.available	2019-12-05T18:07:33Z
dc.date.copyright	2019	en_US
dc.date.issued	2019	en_US
dc.identifier.uri	https://hdl.handle.net/1721.1/123176
dc.description	This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.	en_US
dc.description	Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019	en_US
dc.description	Cataloged from student-submitted PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 41-42).	en_US
dc.description.abstract	You are using your brain to understand this sentence, but can you explain precisely how you do it? Although we constantly experience language processing first-hand, we're still not entirely sure how it's done. Linguists and computer scientists have been working separately to discover the mechanisms necessary for high-performing language processing, but neither has yet found the holy grail. In this work, we take first steps toward bridging the gap between these two approaches by developing a method that discovers, with statistical significance, brain-like neural network sub-architectures in their simplest form. Instead of merely evaluating established NLP models for brain-likeness, our objective is to find new architectures and computations that are especially brain-like. The method randomly generates a large and varied collection of neural network architectures in pursuit of architectures that mimic fMRI data on tasks such as language modeling, translation, and summarization. Because all hyper-parameters are fixed across models, differences in brain-likeness can be attributed to the architectures themselves, and the method can identify the sub-architectures associated with the most brain-like models and return them in their simplest form. These sub-architectures enable two important analyses. First, because they are pruned to the most brain-like components, the computations they perform are easier to interpret than those of the full architecture, and interpretability is crucial for understanding the mechanisms intrinsic to language processing. Second, the sub-architectures may help improve future architecture samples; for instance, their brain-like computations may be defined as unit operations in order to bias more models to include them.	en_US
dc.description.statementofresponsibility	by William R. Hartman.	en_US
dc.format.extent	42 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Uncovering brain-like computations for natural language processing	en_US
dc.type	Thesis	en_US
dc.description.degree	M. Eng.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.identifier.oclc	1129456672	en_US
dc.description.collection	M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science	en_US
dspace.imported	2019-12-05T18:07:32Z	en_US
mit.thesis.degree	Master	en_US
mit.thesis.department	EECS	en_US
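
The abstract above describes a pipeline in which many architectures are generated at random under fixed hyper-parameters, each is scored for brain-likeness against fMRI data, and the components shared by the most brain-like models are then identified. The following is a minimal, self-contained Python sketch of that general idea only; the unit-operation set, the synthetic stand-in for fMRI responses, the RSA-style brain-likeness score, and all sizes are hypothetical illustration choices and are not taken from the thesis.

# Illustrative sketch only: random "architectures" are compositions of unit
# operations, scored for brain-likeness against synthetic stand-in fMRI data
# via a representational-similarity (RSA-style) score. All names and numbers
# are hypothetical and do not reproduce the thesis's actual implementation.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Fixed "hyper-parameters" shared by every sampled model.
DIM, N_STIMULI, N_VOXELS, N_MODELS, DEPTH = 64, 50, 200, 300, 4

# Unit operations that random architectures are composed from (hypothetical set).
UNIT_OPS = {
    "linear": lambda x, W: x @ W,
    "relu":   lambda x, W: np.maximum(x @ W, 0.0),
    "tanh":   lambda x, W: np.tanh(x @ W),
    "gate":   lambda x, W: x * (1.0 / (1.0 + np.exp(-(x @ W)))),  # sigmoid gating
}

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between stimuli."""
    return 1.0 - np.corrcoef(responses)

def brain_likeness(model_responses, fmri_responses):
    """Correlate the upper triangles of the model and fMRI RDMs."""
    iu = np.triu_indices(N_STIMULI, k=1)
    return np.corrcoef(rdm(model_responses)[iu], rdm(fmri_responses)[iu])[0, 1]

# Stand-in data: random "sentence embeddings" and synthetic "fMRI" responses.
stimuli = rng.standard_normal((N_STIMULI, DIM))
fmri = np.tanh(stimuli @ rng.standard_normal((DIM, N_VOXELS)))  # synthetic target

def sample_architecture():
    """Randomly compose DEPTH unit operations; weights drawn fresh per layer."""
    ops = list(rng.choice(list(UNIT_OPS), size=DEPTH))
    weights = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM) for _ in ops]
    return ops, weights

def run(ops, weights, x):
    for op, W in zip(ops, weights):
        x = UNIT_OPS[op](x, W)
    return x

# Sample many architectures and score each one for brain-likeness.
scored = []
for _ in range(N_MODELS):
    ops, weights = sample_architecture()
    scored.append((brain_likeness(run(ops, weights, stimuli), fmri), ops))

# Inspect which unit operations are over-represented in the most brain-like models.
scored.sort(key=lambda s: s[0], reverse=True)
top = scored[: N_MODELS // 10]
print("op counts in top 10% of models:", Counter(op for _, ops in top for op in ops))
print("op counts overall:", Counter(op for _, ops in scored for op in ops))

Running the sketch prints how often each unit operation appears among the top-scoring sample versus the full sample; in the actual work, the comparison would be made against real fMRI recordings, over full NLP architectures, and with the statistical-significance testing the abstract mentions.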

