Tight certificates of adversarial robustness for randomly smoothed classifiers
Author(s)
Lee, Guang-He; Yuan, Yang; Jaakkola, Tommi S
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Terms of use
Metadata
Abstract
Strong theoretical guarantees of robustness can be given for ensembles of classifiers generated by input randomization. Specifically, an ℓ2-bounded adversary cannot alter the ensemble prediction generated under additive isotropic Gaussian noise, where the radius available to the adversary depends on both the variance of the distribution and the ensemble margin at the point of interest. We build on and considerably expand this work across broad classes of distributions. In particular, we offer adversarial robustness guarantees and associated algorithms for the discrete case, where the adversary is ℓ0-bounded. Moreover, we exemplify how the guarantees can be tightened with specific assumptions about the function class of the classifier, such as a decision tree. We empirically illustrate these results with and without functional restrictions across image and molecule datasets.
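The ℓ2 Gaussian guarantee the abstract builds on certifies a radius proportional to the noise scale and the ensemble margin. A minimal sketch of that certificate, assuming lower/upper bounds `p_top` and `p_runner_up` on the two largest class probabilities under the smoothing noise (the function name and inputs here are illustrative, not the paper's exact algorithm):

```python
from statistics import NormalDist

def certified_l2_radius(p_top: float, p_runner_up: float, sigma: float) -> float:
    """Certified l2 radius for a Gaussian-smoothed classifier.

    Under isotropic noise N(0, sigma^2 I), the smoothed prediction cannot
    change within radius (sigma / 2) * (Phi^-1(p_top) - Phi^-1(p_runner_up)),
    where Phi^-1 is the standard normal inverse CDF. The radius grows with
    both the noise scale sigma and the margin between the two probabilities.
    """
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_top) - phi_inv(p_runner_up))

# A wider margin or a larger noise scale yields a larger certified radius.
r = certified_l2_radius(p_top=0.9, p_runner_up=0.1, sigma=0.5)
```

This illustrates only the continuous ℓ2 case; the paper's contribution is extending such certificates to broad distribution classes, including the discrete ℓ0-bounded setting.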
Date issued
2020-02
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
Citation
Lee, Guang-He et al. “Tight certificates of adversarial robustness for randomly smoothed classifiers.” 32nd Conference on Neural Information Processing Systems, December 2018, Montreal, Canada, Neural Information Processing Systems, 2018. © 2018 The Author(s)
Version: Final published version
ISSN
1049-5258