Prior convictions: Black-box adversarial attacks with bandits and priors
Author(s)
Ilyas, Andrew; Engstrom, Logan G.; Madry, Aleksander
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and we demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less often than the current state of the art.
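As a rough illustration of the bandit-style attack with a gradient prior described in the abstract (a minimal sketch, not the authors' reference implementation), the code below maintains a latent prior vector that is refined with two antithetic loss-oracle queries per step and then used to take signed perturbation steps; the loss_oracle callable, the plain gradient update on the prior, and all step-size values are assumptions made for illustration.

```python
import numpy as np

def bandit_linf_attack(x, loss_oracle, eps=0.05, image_lr=0.01,
                       prior_lr=0.1, fd_eta=0.1, delta=0.01, steps=1000):
    """Illustrative bandit-style black-box attack with a gradient prior.

    x           -- input image as a flat numpy array with values in [0, 1]
    loss_oracle -- callable returning the model's loss on a perturbed input
    """
    x_orig = x.copy()
    prior = np.zeros_like(x)  # running estimate of the gradient direction

    for _ in range(steps):
        # Antithetic exploration: perturb the prior along a random direction u
        # and query the loss oracle twice to see how the loss changes when the
        # perturbation follows prior + delta*u versus prior - delta*u.
        u = np.random.randn(*x.shape) / np.sqrt(x.size)
        q_plus = loss_oracle(x + fd_eta * (prior + delta * u))
        q_minus = loss_oracle(x + fd_eta * (prior - delta * u))

        # Finite-difference estimate of the gradient of the exploration loss
        # with respect to the prior; update the prior with a gradient step
        # (the paper uses different update rules depending on the norm).
        grad_est = (q_plus - q_minus) / (delta * fd_eta) * u
        prior += prior_lr * grad_est

        # Take a signed step on the image using the current prior and project
        # back into the l_inf ball of radius eps around the original image.
        x = x + image_lr * np.sign(prior)
        x = np.clip(x, x_orig - eps, x_orig + eps)
        x = np.clip(x, 0.0, 1.0)

    return x
```

In this sketch the two queries per iteration are what make the prior useful: information gathered in earlier steps persists in the prior vector instead of being discarded between gradient estimates, which is how the query savings described in the abstract arise.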
Date issued
2019-03
Department
MIT-IBM Watson AI Lab
Journal
7th International Conference on Learning Representations
Publisher
arXiv
Citation
Ilyas, Andrew et al. "Prior convictions: Black-box adversarial attacks with bandits and priors." 7th International Conference on Learning Representations (March 2019); © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved.
Version: Author's final manuscript