Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks

Author(s)
Karantzas, Nikos; Besier, Emma; Ortega Caro, Josue; Pitkow, Xaq; Tolias, Andreas S.; Patel, Ankit B.; Anselmi, Fabio
Download
frai-05-890016.pdf (3.242 MB)

Terms of use
Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/
Abstract
Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias, but we find that this bias also depends on direction in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
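
To make the mask-learning idea in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' code): a real-valued mask over spatial frequencies is optimized so that a frozen, trained classifier makes the same predictions on IFFT(mask * FFT(x)) as on x, while an L1 penalty pushes the mask to keep only the essential frequencies. The function name, per-frequency mask parameterization, loss weighting, and optimizer settings are illustrative assumptions.

import torch
import torch.nn.functional as F

def learn_fourier_mask(model, loader, image_shape=(3, 32, 32),
                       epochs=10, lr=1e-2, l1_weight=1e-3, device="cpu"):
    """Hypothetical sketch: learn a mask M over spatial frequencies such that
    the frozen network's predictions on IFFT(M * FFT(x)) match those on x,
    with an L1 penalty keeping M sparse."""
    model.eval()                                   # network weights stay frozen
    for p in model.parameters():
        p.requires_grad_(False)

    # One real-valued mask entry per spatial frequency, shared across channels
    # (an assumption; a per-channel or complex mask is equally plausible).
    mask = torch.ones(*image_shape[1:], device=device, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)

    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            X = torch.fft.fft2(x)                      # image -> frequency domain
            x_masked = torch.fft.ifft2(X * mask).real  # modulate, return to pixels

            with torch.no_grad():
                target = model(x).argmax(dim=1)        # predictions on clean input
            logits = model(x_masked)

            # Invariance term: the masked input should preserve the prediction.
            # L1 term: retain only the frequencies the network actually needs.
            loss = F.cross_entropy(logits, target) + l1_weight * mask.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return mask.detach()

In this sketch, the learned mask can afterwards be thresholded and visualized to read off which frequency bands and directions the network relies on, in the spirit of the directional low-frequency bias the abstract reports for adversarially robust networks.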
Date issued
2022-07-12
URI
https://hdl.handle.net/1721.1/164991
Department
McGovern Institute for Brain Research at MIT; Lincoln Laboratory; Center for Brains, Minds, and Machines
Journal
Frontiers in Artificial Intelligence
Publisher
Frontiers
Citation
Karantzas N, Besier E, Ortega Caro J, Pitkow X, Tolias AS, Patel AB and Anselmi F (2022) Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks. Front. Artif. Intell. 5:890016.
Version: Final published version
ISSN
2624-8212

Collections
  • MIT Open Access Articles
