POPQORN: Quantifying robustness of recurrent neural networks
Author(s)
Weng, Tsui-Wei; Daniel, Luca
Download: Accepted version (2.452 MB)
Open Access Policy
Terms of use
Creative Commons Attribution-NonCommercial-ShareAlike
Abstract
Vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness quantification for neural networks, namely, certified lower bounds on the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g., multilayer perceptrons or convolutional networks. It remains an open problem to quantify robustness for recurrent networks, especially LSTMs and GRUs. Such networks pose additional challenges for robustness quantification, such as handling inputs over multiple time steps and the interactions between gates and states. In this work, we propose POPQORN (Propagated-output Quantified Robustness for RNNs), a general algorithm to quantify the robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness on different network architectures and show that robustness quantification on individual steps can lead to new insights.
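To make the notion of a "certified lower bound on the minimum adversarial perturbation" concrete, the sketch below computes such a bound for a toy vanilla RNN using a crude global Lipschitz argument: the classification margin divided by an upper bound on the network's Lipschitz constant certifies a perturbation radius within which the prediction cannot flip. This is an illustrative assumption of the general concept only, not the POPQORN algorithm, which obtains much tighter certificates by propagating linear bounds through the recurrent gates; all names and dimensions here are hypothetical.

```python
import numpy as np

# Toy tanh RNN (hypothetical dimensions; not POPQORN's bound).
rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 4, 8, 3, 5  # input size, state size, classes, steps

W_x = rng.normal(scale=0.3, size=(n_hidden, n_in))      # input-to-hidden weights
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # hidden-to-hidden weights
W_o = rng.normal(scale=0.3, size=(n_out, n_hidden))     # hidden-to-output weights

def rnn_logits(xs):
    """Run the RNN over a sequence xs of shape (T, n_in); return final logits."""
    h = np.zeros(n_hidden)
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h)
    return W_o @ h

def certified_radius(xs):
    """A loose certified l2 radius: margin / Lipschitz upper bound.

    Any per-step perturbation smaller than this radius provably cannot
    change the predicted class (tanh is 1-Lipschitz, so spectral norms
    of the weight matrices bound each layer's gain).
    """
    logits = rnn_logits(xs)
    top = int(np.argmax(logits))
    margin = logits[top] - np.max(np.delete(logits, top))
    lip_x = np.linalg.norm(W_x, 2)   # spectral norm of input map
    lip_h = np.linalg.norm(W_h, 2)   # spectral norm of recurrence
    lip_o = np.linalg.norm(W_o, 2)   # spectral norm of output map
    # Each step's input reaches the output through 0..T-1 recurrences;
    # summing the per-step gains bounds the whole sequence's sensitivity.
    total_lip = lip_o * lip_x * sum(lip_h ** k for k in range(T))
    # sqrt(2) accounts for the difference of two logits being perturbed.
    return margin / (np.sqrt(2) * total_lip)

xs = rng.normal(size=(T, n_in))
eps = certified_radius(xs)
print(f"certified radius: {eps:.4f}")
```

Such Lipschitz-product bounds grow loose quickly as the sequence length increases (the `lip_h ** k` terms compound), which is precisely the gap that step-wise bound propagation methods like POPQORN address.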
Date issued
2019-06
Department
MIT-IBM Watson AI Lab; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of Machine Learning Research
Publisher
International Machine Learning Society
Citation
Ko, Ching-Yun et al. "POPQORN: Quantifying robustness of recurrent neural networks." Proceedings of Machine Learning Research 97, 36th International Conference on Machine Learning, Long Beach, CA, 9-15 June 2019. International Machine Learning Society: 30-39. © 2019 The Author(s)
Version: Author's final manuscript
ISSN
2640-3498