
Uncovering brain-like computations for natural language processing

Author(s)
Hartman, William R., M.Eng., Massachusetts Institute of Technology
Download: 1129456672-MIT.pdf (1008 KB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Joshua B. Tenenbaum.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
You are using your brain to understand this sentence, but can you explain precisely how you do it? Although we constantly experience language processing first-hand, we still do not fully understand how it works. Linguists and computer scientists have worked separately on discovering the mechanisms necessary for high-performing language processing, but neither has found the holy grail. In this work, we take first steps toward bridging the gap between these two approaches by developing a method that discovers, with statistical significance, brain-like neural network sub-architectures in their simplest form. Rather than merely evaluating established NLP models for brain-likeness, our objective is to find new architectures and computations that are especially brain-like. The method randomly generates a large and varied collection of neural network architectures, searching for those that mimic fMRI data on tasks such as language modeling, translation, and summarization. All hyper-parameters are fixed across models, so the method can identify the sub-architectures associated with the most brain-like models and return them in their simplest form. The returned sub-architectures enable two important analyses. First, because they are pruned to the most brain-like components, the computations these smaller sub-architectures perform are easier to interpret than those of a full architecture, and interpretability is crucial for understanding the mechanisms intrinsic to language processing. Second, the sub-architectures may help improve future architecture samples: their brain-like computations can be defined as unit operations in order to bias more models to include them.
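
The abstract describes an algorithmic loop: sample many random architectures with hyper-parameters held fixed, score each for brain-likeness against fMRI data, and extract the sub-architectures shared by the top scorers. The following minimal Python sketch illustrates that loop under stated assumptions; it is not the author's code. Every name in it (sample_architecture, brain_likeness, common_subarchitectures, the op vocabulary OPS) is hypothetical, and the brain-likeness score is a random placeholder standing in for the thesis's actual training-and-fMRI-correlation step.

import random
from collections import Counter

# Assumed vocabulary of unit operations from which architectures are composed.
OPS = ["linear", "recurrent", "attention", "conv1d"]

def sample_architecture(rng, max_depth=6):
    """Randomly compose a sequence of unit operations.
    Hyper-parameters other than the op sequence are held fixed, as in the thesis."""
    depth = rng.randint(2, max_depth)
    return tuple(rng.choice(OPS) for _ in range(depth))

def brain_likeness(arch, rng):
    """Placeholder score. The real method would train `arch` on a language
    task and correlate its internal activations with fMRI recordings."""
    return rng.random()

def common_subarchitectures(archs, length=2, top_k=5):
    """Count short op sequences shared across the most brain-like models;
    frequent ones are candidate brain-like sub-architectures."""
    counts = Counter()
    for arch in archs:
        for i in range(len(arch) - length + 1):
            counts[arch[i:i + length]] += 1
    return counts.most_common(top_k)

rng = random.Random(0)
population = [sample_architecture(rng) for _ in range(1000)]
scored = sorted(population, key=lambda a: brain_likeness(a, rng), reverse=True)
print(common_subarchitectures(scored[:50]))

In this toy version the "most common sub-architecture" output is meaningless because the scores are random; the point is only the shape of the search: generate, score, rank, then mine the top-ranked models for recurring components that could later be defined as unit operations.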
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
 
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
 
Cataloged from student-submitted PDF version of thesis.
 
Includes bibliographical references (pages 41-42).
 
Date issued
2019
URI
https://hdl.handle.net/1721.1/123176
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
