
dc.contributor.author: Boopathy, Akhilan
dc.contributor.author: Weng, Tsui-Wei
dc.contributor.author: Chen, Pin-Yu
dc.contributor.author: Liu, Sijia
dc.contributor.author: Daniel, Luca
dc.date.accessioned: 2021-02-22T17:54:22Z
dc.date.available: 2021-02-22T17:54:22Z
dc.date.issued: 2019-01
dc.date.submitted: 2018-11
dc.identifier.issn: 2159-5399
dc.identifier.issn: 2374-3468
dc.identifier.uri: https://hdl.handle.net/1721.1/129951
dc.description.abstract: Verifying the robustness of neural network classifiers has attracted great interest and attention due to the success of deep neural networks and their unexpected vulnerability to adversarial perturbations. Although finding the minimum adversarial distortion of neural networks (with ReLU activations) has been shown to be an NP-complete problem, obtaining a non-trivial lower bound on the minimum distortion as a provable robustness guarantee is possible. However, most previous works focused only on simple fully-connected layers (multilayer perceptrons) and were limited to ReLU activations. This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks. Our framework is general: we can handle various architectures including convolutional layers, max-pooling layers, batch normalization layers, and residual blocks, as well as general activation functions. Our approach is efficient: by exploiting the special structure of convolutional layers, we achieve up to 17 and 11 times speed-up compared to state-of-the-art certification algorithms (e.g. Fast-Lin, CROWN) and 366 times speed-up compared to the dual-LP approach, while our algorithm obtains similar or even better verification bounds. In addition, CNN-Cert generalizes state-of-the-art algorithms such as Fast-Lin and CROWN. We demonstrate through extensive experiments that our method outperforms state-of-the-art lower-bound-based certification algorithms in terms of both bound quality and speed. [en_US]
dc.language.iso: en
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI) [en_US]
dc.relation.isversionof: 10.1609/AAAI.V33I01.33013240 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Boopathy, Akhilan et al. “CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks.” Paper in the Proceedings of the AAAI Conference on Artificial Intelligence, 33 (1), Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, Hawaii, January 27–February 1, 2019, AAAI: 33013240 © 2019 The Author(s) [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.department: MIT-IBM Watson AI Lab [en_US]
dc.relation.journal: Proceedings of the AAAI Conference on Artificial Intelligence [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2020-12-07T16:09:19Z
dspace.orderedauthors: Boopathy, A; Weng, T-W; Chen, P-Y; Liu, S; Daniel, L [en_US]
dspace.date.submission: 2020-12-07T16:09:23Z
mit.journal.volume: 33 [en_US]
mit.journal.issue: 1 [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
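
Note on the certification idea described in the abstract: as a rough illustration of how propagating bounds through a convolutional network yields a certified robustness guarantee, the Python sketch below uses simple interval bound propagation. This is not the CNN-Cert algorithm itself (CNN-Cert derives tighter per-layer linear bounds by exploiting the convolutional structure); the network weights, shapes, and the conv2d helper are illustrative assumptions.

import numpy as np

def conv2d(x, w):
    # Valid-padding 2D convolution of a single-channel input x with filters w of shape (kh, kw, nf).
    kh, kw, nf = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, nf))
    for f in range(nf):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[i, j, f] = np.sum(x[i:i + kh, j:j + kw] * w[:, :, f])
    return out

def conv2d_interval(lo, hi, w):
    # Propagate elementwise input bounds [lo, hi] through the linear conv layer
    # by splitting the kernel into its positive and negative parts.
    w_pos, w_neg = np.maximum(w, 0.0), np.minimum(w, 0.0)
    return (conv2d(lo, w_pos) + conv2d(hi, w_neg),
            conv2d(hi, w_pos) + conv2d(lo, w_neg))

rng = np.random.default_rng(0)
x = rng.random((6, 6))                       # toy single-channel "image" (hypothetical)
w = rng.standard_normal((3, 3, 2))           # toy conv filters (hypothetical)
W_out = rng.standard_normal((2 * 4 * 4, 3))  # toy dense classifier weights, 3 classes (hypothetical)

# Nominal forward pass: conv -> ReLU -> dense.
h = np.maximum(conv2d(x, w), 0.0)
pred = int(np.argmax(h.ravel() @ W_out))

# Certification check for an L-infinity ball of radius eps around x.
eps = 0.01
lo, hi = conv2d_interval(x - eps, x + eps, w)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone, so bounds pass through
Wp, Wn = np.maximum(W_out, 0.0), np.minimum(W_out, 0.0)
logit_lo = Wp.T @ lo.ravel() + Wn.T @ hi.ravel()
logit_hi = Wp.T @ hi.ravel() + Wn.T @ lo.ravel()

# The prediction is certified robust if its logit lower bound beats every other class's upper bound.
certified = all(logit_lo[pred] > logit_hi[c] for c in range(3) if c != pred)
print(f"prediction {pred} certified robust at eps={eps}: {certified}")

Fast-Lin, CROWN, and CNN-Cert replace these loose interval bounds with per-neuron linear upper and lower bounds, which is what yields the tighter certified radii and the speed-ups reported in the abstract.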

