Trustworthy Learning and Uncertainty Quantification under Constraints
Author(s)
Shen, Maohao
Advisor
Wornell, Gregory W.
Abstract
Machine learning techniques have become increasingly important in a wide range of fields, including medicine, finance, and autonomous driving. While state-of-the-art machine learning models can achieve promising prediction performance, there is an increasing need for reliable and trustworthy machine learning techniques, which requires models to possess additional capabilities, such as privacy preservation, computational efficiency, interpretability, robustness, and uncertainty quantification. This thesis focuses on proposing novel techniques for critical requirements of trustworthy machine learning models, including uncertainty quantification, computational efficiency, and privacy preservation. Within this realm, we focus on aspects of uncertainty quantification problems for different settings and tasks, as well as applications with privacy or computational constraints in the form of limited access to training data and model internals. In particular, this thesis investigates and develops methods to address three important sub-problems of trustworthy machine learning, namely post-hoc uncertainty learning; reliable gradient-free and likelihood-free prompt tuning; and trustworthy unsupervised multi-source-free domain adaptation.
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology