PCA as a defense against some adversaries
| Metadata field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Gupta, Aparna | |
| dc.contributor.author | Banburski, Andrzej | |
| dc.contributor.author | Poggio, Tomaso | |
| dc.date.accessioned | 2022-03-30T18:19:38Z | |
| dc.date.available | 2022-03-30T18:19:38Z | |
| dc.date.issued | 2022-03-30 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/141424 | |
| dc.description.abstract | Neural network classifiers are known to be highly vulnerable to adversarial perturbations in their inputs. Under the hypothesis that adversarial examples lie outside of the sub-manifold of natural images, previous work has investigated the impact of principal components in data on adversarial robustness. In this paper we show that there exists a very simple defense mechanism in the case where adversarial images are separable in a previously defined $(k,p)$ metric. This defense is very successful against the popular Carlini-Wagner attack, but less so against some other common attacks like FGSM. It is interesting to note that the defense is still successful for relatively large perturbations. | en_US |
| dc.description.sponsorship | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
| dc.publisher | Center for Brains, Minds and Machines (CBMM) | en_US |
| dc.relation.ispartofseries | CBMM Memo;135 | |
| dc.title | PCA as a defense against some adversaries | en_US |
| dc.type | Article | en_US |
| dc.type | Technical Report | en_US |
| dc.type | Working Paper | en_US |
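The abstract's core idea — that adversarial perturbations lie largely outside the low-dimensional subspace spanned by the data's principal components, so projecting inputs onto that subspace can strip them away — can be sketched as follows. This is a minimal illustration with synthetic data, not the memo's actual method (the paper's separability criterion uses a previously defined $(k,p)$ metric that is not reproduced here); the names `fit_pca` and `project` are illustrative.

```python
import numpy as np

def fit_pca(X, k):
    """Fit PCA on X (n_samples x n_features); return mean and top-k components."""
    mean = X.mean(axis=0)
    # SVD of the centered data: rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, components):
    """Reconstruct x from its top-k principal components, discarding the
    off-subspace residual where adversarial noise is assumed to live."""
    coeffs = (x - mean) @ components.T
    return mean + coeffs @ components

rng = np.random.default_rng(0)

# Synthetic "natural" data concentrated near a 2-D subspace of R^10
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 10))
mean, comps = fit_pca(X, k=2)

# Perturb one sample in a direction orthogonal to the principal subspace,
# mimicking an adversarial example that leaves the data manifold
x = X[0]
delta = rng.normal(size=10)
delta -= (delta @ comps.T) @ comps  # remove the in-subspace part
x_adv = x + delta

# Projection maps the perturbed input back (almost) onto the clean one
gap = np.linalg.norm(project(x_adv, mean, comps) - project(x, mean, comps))
```

In this toy setup `gap` is near zero while `x_adv` itself is far from `x`: the orthogonal perturbation is annihilated by the projection, which is the intuition behind using PCA as a preprocessing defense before classification.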
