DSpace@MIT

Deep learning benchmarks on L1000 gene expression data

Author(s)
McDermott, Matthew B. A. (Matthew Brian Andrew)
Download: 1102050364-MIT.pdf (4.84 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Peter Szolovits.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Gene expression data holds the potential to offer deep, physiological insights about the dynamic state of a cell beyond the static coding of the genome alone. I believe that realizing this potential requires specialized machine learning methods capable of using underlying biological structure, but the development of such models is hampered by the lack of an empirical methodological foundation, including published benchmarks and well-characterized baselines. In this work, we lay that foundation by profiling a battery of classifiers against newly defined, biologically motivated classification tasks on multiple L1000 gene expression datasets. In addition, on our smallest dataset, a privately produced L1000 corpus, we profile per-subject generalizability to provide a novel assessment of performance that is lost in many typical analyses. We compare traditional classifiers, including feed-forward artificial neural networks (FF-ANNs), linear methods, random forests, decision trees, and k-nearest-neighbor classifiers, as well as graph convolutional neural networks (GCNNs), which augment learning via prior biological domain knowledge. We find that GCNNs offer performance improvements given sufficient data, excelling at all tasks on our largest dataset. On smaller datasets, FF-ANNs offer the greatest performance. Linear models significantly underperform on all dataset scales, but offer the best per-subject generalizability. Ultimately, these results suggest that structured models such as GCNNs can represent a new direction of focus for the field as our scale of data continues to increase.
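The classifier battery described in the abstract can be illustrated with a minimal sketch. The L1000 data itself is not reproduced here, so a random matrix stands in for expression profiles over the 978 landmark genes, and the task label is a placeholder; the model families mirror those named above (linear, decision tree, random forest, k-nearest neighbors, FF-ANN), but all hyperparameters are illustrative, not the thesis's actual settings, and the GCNN is omitted because it requires a gene-interaction graph.

```python
# Illustrative sketch only: cross-validated comparison of baseline
# classifier families of the kinds benchmarked in this thesis,
# run on synthetic stand-in data (NOT the L1000 corpus).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 978))    # 978 L1000 landmark genes per sample
y = rng.integers(0, 2, size=120)   # placeholder binary task label

models = {
    "linear": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "FF-ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=50,
                            random_state=0),
}

for name, model in models.items():
    # 3-fold cross-validated accuracy; on random labels this will
    # hover near chance, which is the point of a null sanity check.
    scores = cross_val_score(model, X, y, cv=3, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")
```

In a real benchmark the synthetic `X` and `y` would be replaced by expression profiles and task labels, and splits would be grouped by subject to measure the per-subject generalizability the abstract describes.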
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 57-62).
 
Date issued
2019
URI
https://hdl.handle.net/1721.1/121738
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.