Show simple item record

dc.contributor.author: Weng, Tsui-Wei
dc.contributor.author: Zhang, Huan
dc.contributor.author: Chen, Hongge
dc.contributor.author: Song, Zhao
dc.contributor.author: Hsieh, Cho-Jui
dc.contributor.author: Boning, Duane
dc.contributor.author: Dhillon, Inderjit S.
dc.contributor.author: Daniel, Luca
dc.date.accessioned: 2021-11-04T19:06:08Z
dc.date.available: 2021-11-04T19:06:08Z
dc.date.issued: 2018
dc.identifier.uri: https://hdl.handle.net/1721.1/137395
dc.description.abstract: © 35th International Conference on Machine Learning, ICML 2018. All Rights Reserved. Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Currently available methods of computing such a bound are either time-consuming or deliver bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin, Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) the exact minimum distortions found by Reluplex in small networks, while our algorithms are more than 10,000 times faster; (2) our methods deliver bounds of similar quality (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to methods based on solving linear programming problems, while our algorithms are 33-14,000 times faster; (3) our method is capable of handling large MNIST and CIFAR networks of up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial-time algorithm that can approximately find the minimum ℓ1 adversarial distortion of a ReLU network within a 0.99 ln n approximation ratio unless NP = P, where n is the number of neurons in the network. [en_US]
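The abstract's notion of a certified lower bound can be illustrated with a minimal sketch. This is not the paper's Fast-Lin/Fast-Lip algorithm, only naive interval bound propagation, which yields looser but still sound certificates; the function names here are illustrative assumptions.

```python
# A minimal sketch of certified robustness via interval bound propagation.
# Given an l_inf ball of radius eps around x, we propagate elementwise
# lower/upper bounds through each layer and check that the true class
# logit provably stays the largest. Any eps certified this way is a
# (loose) lower bound on the minimum adversarial distortion.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate elementwise bounds [lo, hi] through z = W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def certified_robust(weights, biases, x, eps, true_label):
    """True if every input in the l_inf ball B(x, eps) is provably
    classified as `true_label` under naive interval bounds."""
    lo, hi = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = affine_bounds(W, b, lo, hi)
        # ReLU is monotone, so it maps bounds to bounds directly.
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    lo, hi = affine_bounds(weights[-1], biases[-1], lo, hi)
    # Sound certificate: the true logit's lower bound must exceed the
    # upper bound of every rival logit.
    rivals = np.delete(hi, true_label)
    return bool(lo[true_label] > rivals.max())
```

For a two-unit identity network and x = (1, 0), this certifies eps = 0.4 but not eps = 0.6. Fast-Lin tightens such certificates by replacing each unstable ReLU with linear upper and lower bounds instead of a plain interval.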
dc.language.iso: en
dc.relation.isversionof: http://proceedings.mlr.press/v80/ [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Towards Fast Computation of Certified Robustness for ReLU Networks [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Weng, Tsui-Wei, Zhang, Huan, Chen, Hongge, Song, Zhao, Hsieh, Cho-Jui et al. 2018. "Towards Fast Computation of Certified Robustness for ReLU Networks."
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2019-05-10T14:54:04Z
dspace.date.submission: 2019-05-10T14:54:05Z
mit.metadata.status: Authority Work and Publication Information Needed [en_US]


