DSpace@MIT

Towards Understanding Human-aligned Neural Representation in the Presence of Confounding Variables

Author(s)
Simonovikj, Sanja
Download: Thesis PDF (2.272 MB)
Advisor
Agrawal, Pulkit
Terms of use
In Copyright - Educational Use Permitted. Copyright MIT. http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Deep Neural Networks (DNNs) find one of many possible solutions to a given task such as classification. This solution is more likely to pick up on spurious features and low-level statistical patterns in the training data than on semantic features and high-level abstractions, resulting in poor Out-of-Distribution (OOD) performance. In this project, we aim to broaden the current understanding of spurious correlations as they relate to DNNs. We do this by measuring their effect on generalization under various settings, determining whether subnetworks exist within a DNN that capture the core features, and examining potential mitigation strategies. Finally, we discuss alternative approaches that are reserved for future work.
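The failure mode the abstract describes can be illustrated with a minimal sketch: a synthetic binary-classification task in which a spurious feature is highly predictive at training time but uninformative at test time. The setup below (the `make_data` helper, feature definitions, and correlation values are all illustrative assumptions, not the thesis's actual datasets or models) uses a least-squares linear classifier as a stand-in for the "easy" solution a DNN tends to find: it latches onto the near-perfect spurious feature, so accuracy collapses under the distribution shift.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    # Labels y in {0, 1}; a "core" feature carries a weak but reliable signal,
    # while a "spurious" feature agrees with the label with probability
    # spurious_corr. (Hypothetical toy setup for illustration only.)
    y = rng.integers(0, 2, n)
    core = (2 * y - 1) + rng.normal(0.0, 2.0, n)            # noisy causal signal
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, 2 * y - 1, 1 - 2 * y) + rng.normal(0.0, 0.1, n)
    return np.stack([core, spurious], axis=1), y

# The spurious feature is almost perfectly predictive at train time...
Xtr, ytr = make_data(5000, spurious_corr=0.95)
# ...but uninformative at test time: this is the distribution shift.
Xte, yte = make_data(5000, spurious_corr=0.5)

# A least-squares linear fit stands in for the shortcut solution a DNN finds:
# it weights the clean spurious feature far more than the noisy core feature.
w, *_ = np.linalg.lstsq(Xtr, 2 * ytr - 1, rcond=None)

def acc(X, y):
    return float(np.mean((X @ w > 0) == (y == 1)))

print(f"train acc: {acc(Xtr, ytr):.2f}  OOD test acc: {acc(Xte, yte):.2f}")
```

Train accuracy tracks the spurious correlation strength, while OOD test accuracy falls toward chance, which is the generalization gap the project sets out to measure.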
Date issued
2021-06
URI
https://hdl.handle.net/1721.1/139079
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.