Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/138868.2


dc.contributor.author: De Palma, Giacomo
dc.contributor.author: Kiani, Bobak T
dc.contributor.author: Lloyd, Seth
dc.date.accessioned: 2022-01-10T19:45:56Z
dc.date.available: 2022-01-10T19:45:56Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/138868
dc.description.abstract: The reliability of deep learning algorithms is fundamentally challenged by the existence of adversarial examples, which are incorrectly classified inputs that are extremely close to a correctly classified input. We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any p ≥ 1, the \ell^p distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the \ell^p norm of the input. The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations. [en_US]
dc.language.iso: en
dc.relation.isversionof: https://proceedings.mlr.press/v139/de-palma21a.html [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Proceedings of Machine Learning Research [en_US]
dc.title: Adversarial Robustness Guarantees for Random Deep Neural Networks [en_US]
dc.type: Article [en_US]
dc.identifier.citation: De Palma, Giacomo, Kiani, Bobak T and Lloyd, Seth. 2021. "Adversarial Robustness Guarantees for Random Deep Neural Networks." INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 139.
dc.relation.journal: INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139 [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2022-01-10T19:28:10Z
dspace.orderedauthors: De Palma, G; Kiani, BT; Lloyd, S [en_US]
dspace.date.submission: 2022-01-10T19:28:11Z
mit.journal.volume: 139 [en_US]
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
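The scaling stated in the abstract can be sanity-checked numerically. The sketch below is a hypothetical illustration, not the authors' experimental code: it builds a random tanh network with Gaussian (He-scaled) weights and uses the first-order margin |f(x)| / ‖∇f(x)‖₂ as a proxy for the \ell^2 distance to the decision boundary. The architecture sizes, the choice of tanh, the omission of biases, and the margin proxy are all simplifying assumptions of this sketch. With inputs normalized to ‖x‖₂ = 1, the theorem predicts the boundary distance shrinks like 1/√n as the input dimension n grows.

```python
import numpy as np

def random_net(n, width=512, depth=3, rng=None):
    """Random tanh MLP; returns a function x -> (scalar output, input gradient).

    He-scaled Gaussian weights; biases are omitted in this sketch so the
    1/sqrt(n) margin scaling is not masked by a constant output offset.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    dims = [n] + [width] * (depth - 1) + [1]
    Ws = [rng.normal(0.0, np.sqrt(2.0 / dims[i]), size=(dims[i + 1], dims[i]))
          for i in range(depth)]

    def f_and_grad(x):
        pre, h = [], x
        for W in Ws[:-1]:
            z = W @ h          # hidden pre-activation
            pre.append(z)
            h = np.tanh(z)
        out = (Ws[-1] @ h).item()
        # Backpropagate through the tanh layers to get the input gradient.
        g = Ws[-1]
        for W, z in zip(reversed(Ws[:-1]), reversed(pre)):
            g = (g * (1.0 - np.tanh(z) ** 2)) @ W
        return out, g.ravel()

    return f_and_grad

def median_margin(n, trials=300, seed=0):
    """Median of |f(x)| / ||grad f(x)||_2 over random unit-norm inputs.

    This first-order margin is a linearization-based estimate of the
    l2 distance from x to the boundary {f = 0}.
    """
    rng = np.random.default_rng(seed)
    f = random_net(n, rng=rng)
    margins = []
    for _ in range(trials):
        x = rng.normal(size=n)
        x /= np.linalg.norm(x)   # fix ||x||_2 = 1 to isolate the 1/sqrt(n) factor
        y, g = f(x)
        margins.append(abs(y) / np.linalg.norm(g))
    return float(np.median(margins))
```

Doubling the input dimension by a factor of 16 (e.g. comparing `median_margin(32)` with `median_margin(512)`) should shrink the median margin by roughly √16 = 4, consistent with the ‖x‖_p/√n scaling proved in the paper for random networks.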

