Show simple item record

dc.contributor.advisor: Barzilay, Regina
dc.contributor.author: Bao, Yujia
dc.date.accessioned: 2022-08-29T16:09:35Z
dc.date.available: 2022-08-29T16:09:35Z
dc.date.issued: 2022-05
dc.date.submitted: 2022-06-21T19:15:23.619Z
dc.identifier.uri: https://hdl.handle.net/1721.1/144757
dc.description.abstract: Machine learning models are biased when trained on biased datasets. Many recent approaches have been proposed to mitigate biases when they are identified a priori. However, in real-world applications, annotating biases is not only time-consuming but also challenging. This thesis considers three different scenarios and presents novel algorithms for learning robust models. These algorithms are efficient because they do not require explicit annotations of the biases, enabling practical machine learning. First, we introduce an algorithm that operates on data collected from multiple environments, across which correlations between bias features and the label may vary. We show that when a classifier trained on one environment makes predictions on examples from a different environment, its mistakes are informative of the hidden biases. We then leverage these mistakes to create groups of examples whose interpolation yields a distribution with only stable correlations. Our algorithm achieves the new state of the art on four text and image classification tasks. We then consider the situation where we lack access to multiple environments, a common scenario for new or resource-limited tasks. We show that in real-world applications, related tasks often share similar biases. Based on this observation, we propose an algorithm that infers bias features from a resource-rich source task and transfers this knowledge to the target task. Compared to 15 baselines across five datasets, our method consistently delivers significant performance gains. Finally, we study automatic bias detection, where we are given only a set of input-label pairs. Our algorithm learns to split the dataset so that classifiers trained on the training split cannot generalize to the testing split. The performance gap provides a proxy for measuring the degree of bias in the learned features and can therefore be used to identify unknown biases. Experiments on six NLP and vision tasks demonstrate that our method is able to generate spurious splits that correlate with human-identified biases.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Efficient and Robust Algorithms for Practical Machine Learning
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy
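
The third contribution described in the abstract uses the generalization gap across a data split as a proxy for bias. The sketch below illustrates that idea only; it is not the thesis implementation. It scores a fixed candidate split with a scikit-learn classifier, whereas the thesis learns the split itself, and the function name and toy data are illustrative assumptions.

# Minimal sketch (assumed names and toy data) of the generalization-gap proxy:
# train on one side of a candidate split, test on the other, and treat the
# accuracy gap as a measure of how much the classifier relied on features
# that do not transfer across the split.
import numpy as np
from sklearn.linear_model import LogisticRegression

def generalization_gap(X, y, split_mask):
    """Train where split_mask is True, test on the rest; return the accuracy gap."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[split_mask], y[split_mask])
    train_acc = clf.score(X[split_mask], y[split_mask])
    test_acc = clf.score(X[~split_mask], y[~split_mask])
    return train_acc - test_acc

# Toy example: feature 0 is predictive only on the training side (a "bias"),
# feature 1 is predictive everywhere (a stable feature).
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
split_mask = np.arange(n) < n // 2                      # candidate split
bias = np.where(split_mask, y, rng.integers(0, 2, n))   # spurious on train side only
stable = y + 0.5 * rng.normal(size=n)
X = np.column_stack([bias + 0.1 * rng.normal(size=n), stable])

print(f"gap for this split: {generalization_gap(X, y, split_mask):.2f}")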

