dc.contributor.author: Mohapatra, Jeet
dc.contributor.author: Weng, Tsui-Wei
dc.contributor.author: Chen, Pin-Yu
dc.contributor.author: Liu, Sijia
dc.contributor.author: Daniel, Luca
dc.date.accessioned: 2021-02-25T15:17:24Z
dc.date.available: 2021-02-25T15:17:24Z
dc.date.issued: 2020-06
dc.identifier.isbn: 9781728171685
dc.identifier.issn: 1063-6919
dc.identifier.uri: https://hdl.handle.net/1721.1/130001
dc.description.abstract: Verifying the robustness of neural networks under a specified threat model is a fundamental yet challenging task. While current verification methods mainly focus on the p-norm threat model on the input instances, robustness verification against semantic adversarial attacks that induce large p-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. By simply prepending our proposed semantic perturbation layers (SP-layers) to the input layer of any given model, Semantify-NN is model-agnostic, and any p-norm-based verification tool can be used to verify the model's robustness against semantic perturbations. We illustrate the principles of designing the SP-layers and provide examples of semantic perturbations for image classification in the spaces of hue, saturation, lightness, brightness, contrast, and rotation. In addition, an efficient refinement technique is proposed to further tighten the semantic certificates. Experiments on various network architectures and different datasets demonstrate the superior verification performance of Semantify-NN over p-norm-based verification frameworks that naively convert semantic perturbations to p-norm perturbations. The results show that Semantify-NN can support robustness verification against a wide range of semantic perturbations. [en_US]
dc.language.iso: en
dc.publisher: IEEE [en_US]
dc.relation.isversionof: 10.1109/CVPR42600.2020.00032 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Towards verifying robustness of neural networks against a family of semantic perturbations [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Mohapatra, Jeet et al. “Towards verifying robustness of neural networks against a family of semantic perturbations.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, 13-19 June 2020, IEEE © 2020 The Author(s) [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2020-12-07T17:31:25Z
dspace.orderedauthors: Mohapatra, J; Weng, TW; Chen, PY; Liu, S; Daniel, L [en_US]
dspace.date.submission: 2020-12-07T17:31:28Z
mit.journal.volume: June 2020 [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
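
The SP-layer construction described in the abstract above reduces semantic robustness verification to a standard norm-bounded problem, which a short sketch can make concrete. The following is a minimal, hypothetical PyTorch illustration, not code from the paper: the class name BrightnessSPLayer, the frozen clean image x0, and the placeholder classifier f are all assumptions made for this example. It prepends a brightness SP-layer to a model so that the network's input becomes the semantic parameter theta, and bounding |theta| <= eps becomes an ordinary p-norm verification problem on the augmented network.

```python
import torch
import torch.nn as nn

class BrightnessSPLayer(nn.Module):
    """Hypothetical brightness SP-layer (illustrative, not the paper's code).

    The clean image x0 is frozen inside the layer, and the scalar brightness
    shift theta becomes the network's input, so verifying robustness for all
    |theta| <= eps is an ordinary norm-bounded verification problem on the
    augmented network.
    """

    def __init__(self, x0: torch.Tensor):
        super().__init__()
        self.register_buffer("x0", x0)  # clean image, held constant

    def forward(self, theta: torch.Tensor) -> torch.Tensor:
        # Brightness perturbation: shift every pixel of x0 by theta.
        return self.x0 + theta


# Prepend the SP-layer to a placeholder classifier f. Any p-norm-based
# verifier applied to `augmented` with the input bound |theta| <= eps
# would then certify robustness to brightness shifts of magnitude eps.
f = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model (assumed)
x0 = torch.rand(1, 1, 28, 28)                            # toy image (assumed)
augmented = nn.Sequential(BrightnessSPLayer(x0), f)
print(augmented(torch.zeros(1)).shape)  # theta = 0 recovers the clean logits
```

Brightness is the simplest case, since the perturbed image is affine in the parameter; perturbations such as hue shifts or rotations would require nonlinear SP-layers, which is presumably where the refinement technique mentioned in the abstract becomes important.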

