
dc.contributor.author: Weng, Tsui-Wei
dc.contributor.author: Daniel, Luca
dc.date.accessioned: 2021-03-04T13:28:23Z
dc.date.available: 2021-03-04T13:28:23Z
dc.date.issued: 2019-06
dc.identifier.issn: 2640-3498
dc.identifier.uri: https://hdl.handle.net/1721.1/130075
dc.description.abstract: The vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness quantification for neural networks, namely, certified lower bounds of the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g., multilayer perceptrons or convolutional networks. It remains an open problem to quantify robustness for recurrent networks, especially LSTMs and GRUs. For such networks, there exist additional challenges in computing the robustness quantification, such as handling the inputs at multiple steps and the interaction between gates and states. In this work, we propose POPQORN (Propagated-output Quantified Robustness for RNNs), a general algorithm to quantify robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness on different network architectures and show that the robustness quantification on individual steps can lead to new insights. [en_US]
dc.description.sponsorship: SenseTime Artificial Intelligence Company (CUHK Agreement TS1610626) [en_US]
dc.description.sponsorship: Hong Kong Research Grants Council. General Research Fund (Projects 14236516, 17246416) [en_US]
dc.language.iso: en
dc.publisher: International Machine Learning Society [en_US]
dc.relation.isversionof: http://proceedings.mlr.press/v97/ko19a.html [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: POPQORN: Quantifying robustness of recurrent neural networks [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Ko, Ching-Yun et al. "POPQORN: Quantifying robustness of recurrent neural networks." Proceedings of Machine Learning Research 97 (36th International Conference on Machine Learning, Long Beach, CA, 9-15 June 2019). International Machine Learning Society: 30-39. © 2019 The Author(s) [en_US]
dc.contributor.department: MIT-IBM Watson AI Lab
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: Proceedings of Machine Learning Research [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2020-12-07T16:13:04Z
dspace.orderedauthors: Ko, CY; Lyu, Z; Weng, TW; Daniel, L; Wong, N; Lin, D [en_US]
dspace.date.submission: 2020-12-07T16:13:08Z
mit.journal.volume: 97 [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
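
The abstract above describes certifying a lower bound on the minimum adversarial perturbation of a recurrent network. As a rough illustration of what such a certificate computes, the sketch below propagates naive interval bounds through a vanilla tanh RNN; this is not the POPQORN algorithm itself (which derives tighter linear bounds on the gate and state nonlinearities), and all names, shapes, and weights here are hypothetical.

import numpy as np

def ibp_affine(l, u, W, b):
    """Sound elementwise bounds for x -> W @ x + b given x in [l, u]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def certified_margin(x_seq, eps, W_x, W_h, b, W_out, b_out, target):
    """Lower-bound logits[target] - max(other logits) over all inputs with
    every step perturbed within an L-infinity ball of radius eps.
    A positive result certifies the prediction, so eps is then a certified
    lower bound on the minimum adversarial perturbation."""
    h_l = np.zeros(W_h.shape[0])
    h_u = np.zeros(W_h.shape[0])
    for x_t in x_seq:
        a_l, a_u = ibp_affine(x_t - eps, x_t + eps, W_x, b)  # input part
        r_l, r_u = ibp_affine(h_l, h_u, W_h, 0.0)            # recurrent part
        # tanh is monotonically increasing, so interval bounds pass through
        h_l, h_u = np.tanh(a_l + r_l), np.tanh(a_u + r_u)
    out_l, out_u = ibp_affine(h_l, h_u, W_out, b_out)
    return out_l[target] - np.delete(out_u, target).max()

# Example with hypothetical shapes: certify a random 3-step sequence.
rng = np.random.default_rng(0)
W_x, W_h, b = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), np.zeros(8)
W_out, b_out = rng.normal(size=(3, 8)), np.zeros(3)
x_seq = rng.normal(size=(3, 4))
print(certified_margin(x_seq, 0.01, W_x, W_h, b, W_out, b_out, target=0))

A binary search over eps then yields the largest radius this crude certificate can verify. Per the abstract, POPQORN instead bounds the cross-step and gate-state interactions of vanilla RNNs, LSTMs, and GRUs with linear functions, which gives much tighter certified bounds and enables the per-step robustness quantification the authors report.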

