
dc.contributor.author    Zhang, Huan
dc.contributor.author    Weng, Tsui-Wei
dc.contributor.author    Chen, Pin-Yu
dc.contributor.author    Hsieh, Cho-Jui
dc.contributor.author    Daniel, Luca
dc.date.accessioned      2021-11-05T15:09:20Z
dc.date.available        2021-11-05T15:09:20Z
dc.date.issued           2018
dc.identifier.uri        https://hdl.handle.net/1721.1/137508
dc.description.abstract  © 2018 Curran Associates Inc. All rights reserved. Finding the minimum distortion of adversarial examples, and thus certifying robustness of neural network classifiers at given data points, is known to be a challenging problem. Nevertheless, it has recently been shown possible to give a nontrivial certified lower bound on the minimum adversarial distortion, and some progress has been made in this direction by exploiting the piece-wise linear nature of ReLU activations. However, generic robustness certification for general activation functions remains largely unexplored. To address this issue, in this paper we introduce CROWN, a general framework to certify the robustness of neural networks with general activation functions at given input data points. The novelty of our algorithm lies in bounding a given activation function with linear and quadratic functions, allowing it to tackle general activation functions including but not limited to four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we facilitate the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation. Experimental results show that on ReLU networks CROWN notably improves the certified lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while having comparable computational efficiency. Furthermore, CROWN demonstrates its effectiveness and flexibility on networks with general activation functions, including tanh, sigmoid and arctan.  en_US
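
To illustrate the bounding idea described in the abstract, the Python sketch below computes sound linear lower and upper bounds for a tanh activation on a pre-activation interval [l, u]: a chord on the side where tanh is concave or convex, a tangent on the other side, and a constant-bound fallback when the interval crosses zero. This is a minimal illustration of the general technique, not the authors' CROWN implementation; the function name tanh_linear_bounds and the midpoint tangent rule are assumptions made for the example.

    import numpy as np

    def tanh_linear_bounds(l, u):
        # Return (aL, bL, aU, bU) such that, for all x in [l, u]:
        #   aL * x + bL <= tanh(x) <= aU * x + bU
        # Hypothetical helper for illustration; CROWN selects its bounding
        # lines adaptively per neuron, while this sketch uses fixed rules.
        f = np.tanh
        df = lambda x: 1.0 - np.tanh(x) ** 2   # derivative of tanh
        if u - l < 1e-12:                       # degenerate interval
            return 0.0, f(l), 0.0, f(u)
        if l >= 0.0:
            # tanh is concave on [0, inf): the chord is a lower bound,
            # a tangent (taken here at the midpoint) is an upper bound.
            aL = (f(u) - f(l)) / (u - l)
            bL = f(l) - aL * l
            m = 0.5 * (l + u)
            aU = df(m)
            bU = f(m) - aU * m
        elif u <= 0.0:
            # tanh is convex on (-inf, 0]: tangent below, chord above.
            m = 0.5 * (l + u)
            aL = df(m)
            bL = f(m) - aL * m
            aU = (f(u) - f(l)) / (u - l)
            bU = f(l) - aU * l
        else:
            # Interval crosses zero: fall back to constant bounds, which
            # are sound because tanh is monotonically increasing.
            aL, bL = 0.0, f(l)
            aU, bU = 0.0, f(u)
        return aL, bL, aU, bU

    # Sanity check on a sample interval.
    l, u = -0.5, 1.5
    aL, bL, aU, bU = tanh_linear_bounds(l, u)
    xs = np.linspace(l, u, 201)
    assert np.all(aL * xs + bL <= np.tanh(xs) + 1e-9)
    assert np.all(np.tanh(xs) <= aU * xs + bU + 1e-9)

Propagating such per-neuron linear bounds layer by layer yields linear functions of the network input that bound the output, which is what makes the certified lower bound on the adversarial distortion computable.
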
dc.language.iso          en
dc.relation.isversionof  https://papers.nips.cc/paper/7742-efficient-neural-network-robustness-certification-with-general-activation-functions  en_US
dc.rights                Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.  en_US
dc.source                Neural Information Processing Systems (NIPS)  en_US
dc.title                 Efficient Neural Network Robustness Certification with General Activation Functions  en_US
dc.type                  Article  en_US
dc.identifier.citation   Zhang, Huan, Weng, Tsui-Wei, Chen, Pin-Yu, Hsieh, Cho-Jui and Daniel, Luca. 2018. "Efficient Neural Network Robustness Certification with General Activation Functions." Neural Information Processing Systems (NIPS).
dc.contributor.department  MIT-IBM Watson AI Lab  en_US
dc.eprint.version          Final published version  en_US
dc.type.uri                http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status              http://purl.org/eprint/status/NonPeerReviewed  en_US
dc.date.updated            2019-05-15T17:16:20Z
dspace.date.submission     2019-05-15T17:16:20Z
mit.metadata.status        Authority Work and Publication Information Needed  en_US

