DSpace@MIT

Classifying and Displaying Brain-waves through Self-supervised Learning

Author(s)
Mohsenvand, Mostafa
Download: Thesis PDF (46.90 MB)
Advisor
Pattie Maes
Terms of use
In Copyright - Educational Use Permitted. Copyright MIT. http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Interpreting the human electroencephalogram (EEG) is challenging and requires years of medical training. Hence, constructing labeled datasets for supervised learning from EEG signals is expensive and time-consuming. Moreover, existing datasets use incompatible EEG setups (e.g., different numbers of channels, sampling rates, and types of sensors), which makes them hard to fuse into larger datasets. To alleviate such issues, self-supervised pretraining has been developed and applied in other branches of machine learning. In this thesis, we introduce multiple self-supervised algorithms, along with data augmentation and mixup techniques, to improve the accuracy and sample efficiency of downstream EEG classification. Our framework combines multiple EEG datasets for self-supervised learning and uses the resulting large-scale dataset to train our proposed algorithms, SeqCLR (Sequential Contrastive Learning of Representations) and SeqDACL (Sequential Domain-Agnostic Contrastive Learning). We apply our pretrained algorithms to four downstream classification tasks and show that they compete with and outperform other supervised and self-supervised methods. In particular, our methods achieve state-of-the-art accuracy and sample efficiency in emotion recognition (SEED dataset), sleep-stage scoring (Sleep-EDF dataset), and user identification (TUH dataset).

We also explore self-supervised representation learning for visualizing EEG data for diagnostic and research purposes. We present a sequential autoencoder architecture and a novel visualization method called the chromograph, which visualizes multichannel EEG data through its latent representation in an economical and informative fashion that enables rapid and reliable recognition of abnormal EEG signals. Our user study shows that neurologists detect abnormal EEG more accurately and more quickly using the chromograph.

Finally, we design and implement a real-time sonification device called the Physiophone for interactive sonification of electrophysiological signals. Our user study shows that novice users with four minutes of audio training could outperform medically trained users who used the conventional visualization of ECG signals in distinguishing normal from abnormal heart rhythms. We also observe a new superadditive bimodal effect in a conformity/priming test.
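The contrastive pretraining behind SeqCLR pairs two augmented views of the same unlabeled EEG segment and trains an encoder so that the views agree in embedding space. The following is a minimal PyTorch sketch of that general idea only; the Encoder, augment, and nt_xent_loss names here are illustrative stand-ins, and the thesis's actual architecture, channel-wise augmentation set, and hyperparameters differ and are described in the document itself.

# A minimal sketch (not the thesis code) of contrastive pretraining on
# single-channel EEG segments, in the spirit of SeqCLR as described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy 1D-conv encoder mapping one EEG channel to an embedding."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):  # x: (batch, 1, time)
        return self.net(x)

def augment(x):
    """Two cheap, hypothetical channel-wise augmentations:
    random amplitude scaling plus additive Gaussian noise."""
    scale = torch.empty(x.size(0), 1, 1, device=x.device).uniform_(0.5, 1.5)
    return x * scale + 0.01 * torch.randn_like(x)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (NT-Xent): each
    embedding's positive is its counterpart view; all other embeddings
    in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / temperature                        # (2B, 2B)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One pretraining step on a batch of unlabeled segments.
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(16, 1, 1024)  # stand-in for real EEG segments
loss = nt_xent_loss(encoder(augment(x)), encoder(augment(x)))
opt.zero_grad(); loss.backward(); opt.step()

After pretraining on pooled unlabeled EEG, the encoder would be reused (frozen or fine-tuned) for downstream classifiers such as the emotion-recognition, sleep-staging, and user-identification tasks listed in the abstract.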
Date issued
2022-02
URI
https://hdl.handle.net/1721.1/143210
Department
Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
