
dc.contributor.author: Karantzas, Nikos
dc.contributor.author: Besier, Emma
dc.contributor.author: Ortega Caro, Josue
dc.contributor.author: Pitkow, Xaq
dc.contributor.author: Tolias, Andreas S.
dc.contributor.author: Patel, Ankit B.
dc.contributor.author: Anselmi, Fabio
dc.date.accessioned: 2026-03-03T15:23:42Z
dc.date.available: 2026-03-03T15:23:42Z
dc.date.issued: 2022-07-12
dc.identifier.issn: 2624-8212
dc.identifier.uri: https://hdl.handle.net/1721.1/164991
dc.description.abstract: Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias but we find this bias is also dependent on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
dc.publisher: Frontiers
dc.relation.isversionof: https://doi.org/10.3389/frai.2022.890016
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Frontiers
dc.title: Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks
dc.type: Article
dc.identifier.citation: Karantzas N, Besier E, Ortega Caro J, Pitkow X, Tolias AS, Patel AB and Anselmi F (2022) Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks. Front. Artif. Intell. 5:890016.
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.contributor.department: Lincoln Laboratory
dc.contributor.department: Center for Brains, Minds, and Machines
dc.relation.journal: Frontiers in Artificial Intelligence
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.identifier.doi: https://doi.org/10.3389/frai.2022.890016
dspace.date.submission: 2026-03-03T15:15:15Z
mit.journal.volume: 5
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

