dc.contributor.advisor | Wornell, Gregory W. | |
dc.contributor.author | Wang, Tony T. | |
dc.date.accessioned | 2022-01-14T14:46:20Z | |
dc.date.available | 2022-01-14T14:46:20Z | |
dc.date.issued | 2021-06 | |
dc.date.submitted | 2021-06-17T20:14:40.060Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/139041 | |
dc.description.abstract | In this thesis, we explore adversarial examples for simple model families and simple data distributions, focusing in particular on linear and kernel classifiers. On the theoretical front, we find evidence that natural accuracy and robust accuracy are more likely than not to be misaligned. We conclude that to learn a robust classifier, one should aim for robustness explicitly, either through a good choice of model family or by optimizing directly for robust accuracy. On the empirical front, we discover that kernel classifiers and neural networks are non-robust in similar ways. This suggests that a better understanding of kernel classifier robustness may help unravel some of the mysteries of adversarial examples. | |
dc.publisher | Massachusetts Institute of Technology | |
dc.rights | In Copyright - Educational Use Permitted | |
dc.rights | Copyright MIT | |
dc.rights.uri | http://rightsstatements.org/page/InC-EDU/1.0/ | |
dc.title | Adversarial Examples in Simpler Settings | |
dc.type | Thesis | |
dc.description.degree | M.Eng. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
mit.thesis.degree | Master | |
thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |