Adversarial examples are not bugs, they are features
Author(s)
Ilyas, A; Santurkar, S; Tsipras, D; Engstrom, L; Tran, B; Madry, A
Publisher Policy
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
© 2019 Neural Information Processing Systems Foundation. All rights reserved. Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features (derived from patterns in the data distribution) that are highly predictive, yet brittle and (thus) incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
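For context on the term "adversarial example," the sketch below shows the classic one-step fast gradient sign method (FGSM), a standard way to construct such perturbations; it is an illustrative assumption, not the construction studied in this paper, and the PyTorch classifier model, inputs x, and labels y are hypothetical placeholders.

import torch.nn.functional as F

def fgsm_adversarial_example(model, x, y, epsilon=0.03):
    # Perturb x along the sign of the loss gradient so the classifier's loss
    # increases while the change to the input stays visually negligible.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # one signed gradient step of size epsilon
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in the valid [0, 1] range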
Date issued
2019
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Advances in Neural Information Processing Systems
Citation
Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., et al. 2019. "Adversarial examples are not bugs, they are features." Advances in Neural Information Processing Systems, 32.
Version: Final published version