Show simple item record

dc.contributor.advisor: Agrawal, Pulkit
dc.contributor.author: Simonovikj, Sanja
dc.date.accessioned: 2022-01-14T14:48:42Z
dc.date.available: 2022-01-14T14:48:42Z
dc.date.issued: 2021-06
dc.date.submitted: 2021-06-17T20:14:23.018Z
dc.identifier.uri: https://hdl.handle.net/1721.1/139079
dc.description.abstract: Deep Neural Networks (DNNs) find one of many possible solutions to a given task such as classification. This solution is more likely to pick up on spurious features and low-level statistical patterns in the training data rather than semantic features and high-level abstractions, resulting in poor Out-of-Distribution (OOD) performance. In this project we aim to broaden the current knowledge surrounding spurious correlations as they relate to DNNs. We do this by measuring their effect on generalization under various settings, determining whether subnetworks exist in a DNN that capture the core features, and examining potential mitigation strategies. Finally, we discuss alternative approaches that are reserved for future work.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Towards Understanding Human-aligned Neural Representation in the Presence of Confounding Variables
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science

