Self-Supervised ECG Learning for Multimodal Clinical Tasks

Author(s)
Chen, Peilin
Thesis PDF (2.138 MB)
Advisor
Liang, Paul
Terms of use
In Copyright - Educational Use Permitted. Copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
We present a multimodal clinical AI framework that integrates time series, images, and text to support robust diagnostic reasoning across diverse input combinations. We first introduce ECG-JEPA, a self-supervised encoder pretrained on multiple ECG datasets to learn generalizable time series representations. This unimodal pretraining improves ECG classification, achieving a 23-point AUC gain on the underrepresented Ga dataset. We then align and fuse these ECG embeddings with chest X-rays and EHR text using a vision–language model backbone, enabling end-to-end multimodal inference. Our results show that incorporating ECG signals meaningfully improves diagnostic performance, highlighting the value of multitask time series pretraining and modular fusion for clinical AI.
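To make the JEPA-style pretraining described above concrete, the PyTorch sketch below masks ECG patches and trains a context encoder plus predictor to match the latent embeddings produced by an EMA target encoder, with no signal reconstruction. All names (EcgEncoder, jepa_step), architecture choices, masking scheme, and hyperparameters are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of JEPA-style self-supervised ECG pretraining.
# Everything here (module names, sizes, masking) is a hypothetical
# illustration of the general technique, not the thesis's code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class EcgEncoder(nn.Module):
    """1-D conv patcher + Transformer: (batch, leads, samples) -> per-patch embeddings."""
    def __init__(self, leads=12, dim=256, patch=50):
        super().__init__()
        self.proj = nn.Conv1d(leads, dim, kernel_size=patch, stride=patch)
        self.mixer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=4,
        )

    def forward(self, x):                 # x: (B, leads, T)
        z = self.proj(x).transpose(1, 2)  # (B, patches, dim)
        return self.mixer(z)

def jepa_step(context_enc, target_enc, predictor, x, mask_ratio=0.5):
    """One JEPA update: predict target-encoder embeddings of masked patches
    entirely in latent space. Simplified variant: the context encoder sees the
    full signal; true JEPA encodes only the visible (unmasked) context."""
    with torch.no_grad():                 # targets come from the frozen EMA encoder
        targets = target_enc(x)           # (B, P, D)
    B, P, _ = targets.shape
    mask = torch.rand(B, P, device=x.device) < mask_ratio
    pred = predictor(context_enc(x))      # predicted embedding at every patch
    return F.smooth_l1_loss(pred[mask], targets[mask])

@torch.no_grad()
def ema_update(target_enc, context_enc, tau=0.996):
    """Slowly track the context encoder so prediction targets stay stable."""
    for pt, pc in zip(target_enc.parameters(), context_enc.parameters()):
        pt.mul_(tau).add_(pc, alpha=1.0 - tau)

context_enc = EcgEncoder()
target_enc = copy.deepcopy(context_enc)
predictor = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))
opt = torch.optim.AdamW(
    list(context_enc.parameters()) + list(predictor.parameters()), lr=1e-4
)

batch = torch.randn(8, 12, 5000)          # e.g. 12-lead ECG, 10 s at 500 Hz
loss = jepa_step(context_enc, target_enc, predictor, batch)
opt.zero_grad(); loss.backward(); opt.step()
ema_update(target_enc, context_enc)
```

After pretraining under a scheme like this, the context encoder's pooled patch embeddings would serve as the ECG representation that is aligned and fused with chest X-ray and EHR text features in the downstream multimodal model.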
Date issued
2025-05
URI
https://hdl.handle.net/1721.1/162719
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses