Last Layer Retraining of Selectively Sampled Wild Data Improves Performance
Author(s)
Yang, Hao Bang
Advisor
Solomon, Justin
Yurochkin, Mikhail
Abstract
While AI models perform well in the lab, where training and testing data come from similar domains, they suffer significant drops in performance in the wild, where data can lie in domains outside the training distribution. Out-of-distribution (OOD) generalization is difficult because these domains are underrepresented or absent in the training data. The pursuit of ways to bridge the performance gap between in-distribution and out-of-distribution data has led to various generalization algorithms that aim to find invariant, "good" features. Recent results suggest that a poorly generalized classification layer may be the main contributor to this performance gap, while the featurizer already produces sufficiently good features.
This thesis verifies this possibility across a combination of datasets, generalization algorithms, and classifier training methods. We show that simply retraining a new classification layer on a small number of labeled examples significantly improves OOD performance over the original models when evaluated on natural OOD domains. We further study methods for efficiently selecting which OOD examples to label for training the classifier, by applying clustering techniques to featurized unlabeled OOD data.
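As a rough illustration of this selection-and-retraining recipe (a sketch, not the thesis's actual code), the featurized unlabeled OOD data can be clustered, one example per cluster labeled, and only a linear classification head retrained on those features. The names feats_ood, label_fn, and budget below are hypothetical placeholders for the frozen-featurizer outputs, a labeling oracle, and the labeling budget.

    # Illustrative sketch: cluster-based selection of OOD examples to label,
    # followed by retraining only the last (linear) classification layer.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    def retrain_last_layer(feats_ood, label_fn, budget=50, seed=0):
        # Cluster featurized OOD data, one cluster per labeling-budget slot.
        km = KMeans(n_clusters=budget, random_state=seed).fit(feats_ood)
        # Label the example closest to each cluster center.
        dists = km.transform(feats_ood)            # (N, budget) distances to centers
        chosen = np.unique(dists.argmin(axis=0))   # one representative index per cluster
        X, y = feats_ood[chosen], label_fn(chosen) # small labeled OOD set (label_fn is assumed)
        # Retrain only the classification layer as a linear probe on frozen features.
        return LogisticRegression(max_iter=1000).fit(X, y)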
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology