| dc.contributor.advisor | Aleksander Mądry. | en_US |
| dc.contributor.author | Engstrom, Logan (Logan G.) | en_US |
| dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
| dc.date.accessioned | 2019-11-22T00:02:48Z | |
| dc.date.available | 2019-11-22T00:02:48Z | |
| dc.date.copyright | 2019 | en_US |
| dc.date.issued | 2019 | en_US |
| dc.identifier.uri | https://hdl.handle.net/1721.1/123021 | |
| dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
| dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 | en_US |
| dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
| dc.description | Includes bibliographical references (pages 108-115). | en_US |
| dc.description.abstract | Despite their performance on standard tasks in computer vision, natural language processing, and voice recognition, state-of-the-art models are pervasively vulnerable to adversarial examples. Adversarial examples are inputs that have been slightly perturbed, such that their semantic content is unchanged, so as to cause malicious behavior in a classifier. The study of adversarial robustness has so far largely focused on perturbations bounded in ℓ_p-norms, in the setting where the attacker knows the full model and controls exactly what input is sent to the classifier. However, this threat model is unrealistic in many respects. Models are vulnerable to classes of slight perturbations that are not captured by ℓ_p bounds, adversaries often will not have full model access, and in the physical world it is not possible to control exactly what image is sent to the classifier. In our exploration we develop new algorithms and frameworks for exploiting vulnerabilities even in these restricted threat models. We find that models are highly vulnerable to adversarial examples in these more realistic threat models, highlighting the necessity of further research to attain models that are truly robust and reliable. | en_US |
| dc.description.statementofresponsibility | by Logan Engstrom. | en_US |
| dc.format.extent | 149 pages | en_US |
| dc.language.iso | eng | en_US |
| dc.publisher | Massachusetts Institute of Technology | en_US |
| dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
| dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
| dc.subject | Electrical Engineering and Computer Science. | en_US |
| dc.title | Understanding the landscape of adversarial robustness | en_US |
| dc.type | Thesis | en_US |
| dc.description.degree | M. Eng. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.identifier.oclc | 1127640126 | en_US |
| dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
| dspace.imported | 2019-11-22T00:02:46Z | en_US |
| mit.thesis.degree | Master | en_US |
| mit.thesis.department | EECS | en_US |