DSpace@MIT

Understanding and Improving Representational Robustness of Machine Learning Models

Author(s)
Ko, Ching-Yun
Download: Thesis PDF (6.039 MB)
Advisor
Daniel, Luca
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
The fragility of modern machine learning models has drawn considerable attention from both academia and the public. In this thesis, we present a systematic study of the understanding and improvement of several machine learning models, including smoothed models and generic representation networks. Specifically, we focus on representational robustness, which we define as the "robustness" (or, more generally, trustworthiness properties) of the induced hidden space of a given network. For a generic representation network, this corresponds to the representation space itself, while for a smoothed model, we treat the logits of the network as the target space. Representational robustness is fundamental to many trustworthy-AI concerns, such as fairness and robustness. In this thesis, we discover that the certifiable robustness of randomized smoothing comes at the cost of class unfairness. We further analyze ways to improve the training process of the base models, along with their limitations. For generic non-smoothed representation models, we establish a link between self-supervised contrastive learning and supervised neighborhood component analysis, which naturally allows us to propose a general framework that achieves better accuracy and robustness. Furthermore, we observe that the current evaluation practice for foundational representation models involves extensive experiments across various real-world tasks, which are computationally expensive and prone to test-set leakage. As a solution, we propose a more lightweight, privacy-preserving, and sound evaluation framework for both vision and language models that utilizes synthetic data.
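To make the abstract's mention of randomized smoothing concrete, the sketch below is a minimal Monte Carlo prediction routine for a smoothed classifier in the style of Cohen et al. (2019). It is illustrative only, not code from the thesis; the function name `smoothed_predict` and the choices of `sigma` and `n_samples` are assumptions made for this example.

```python
# Illustrative sketch of randomized-smoothing prediction (not thesis code).
# The smoothed classifier is g(x) = argmax_c P[f(x + eps) = c], eps ~ N(0, sigma^2 I);
# the argmax is estimated by majority vote over Gaussian-noised copies of x.
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100):
    """Majority-vote prediction of the smoothed model g for one unbatched input x."""
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *x.shape)    # eps ~ N(0, sigma^2 I)
        logits = base_classifier(x.unsqueeze(0) + noise)    # f on n_samples noisy copies
        votes = torch.bincount(logits.argmax(dim=-1),
                               minlength=logits.shape[-1])  # per-class vote counts
        return votes.argmax().item()                        # most-voted class wins
```

The class unfairness the abstract refers to would surface in a routine like this as systematically smaller vote margins, and hence weaker robustness certificates, for some classes than for others.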
Date issued
2024-05
URI
https://hdl.handle.net/1721.1/156297
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
