
dc.contributor.advisor: Aleksander Mądry
dc.contributor.author: Engstrom, Logan (Logan G.)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2019-11-22T00:02:48Z
dc.date.available: 2019-11-22T00:02:48Z
dc.date.copyright: 2019
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/123021
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
dc.description: Cataloged from student-submitted PDF version of thesis.
dc.description: Includes bibliographical references (pages 108-115).
dc.description.abstract: Despite their performance on standard tasks in computer vision, natural language processing, and voice recognition, state-of-the-art models are pervasively vulnerable to adversarial examples. Adversarial examples are inputs that have been slightly perturbed, such that their semantic content is unchanged, so as to cause malicious behavior in a classifier. The study of adversarial robustness has so far largely focused on perturbations bounded in ℓ_p norms, in the setting where the attacker knows the full model and controls exactly what input is sent to the classifier. However, this threat model is unrealistic in many respects: models are vulnerable to classes of slight perturbations that are not captured by ℓ_p bounds, adversaries realistically will often not have full model access, and in the physical world it is not possible to control exactly what image is sent to the classifier. In our exploration we develop new algorithms and frameworks for exploiting vulnerabilities even in these restricted threat models. We find that models are highly vulnerable to adversarial examples in these more realistic threat models, highlighting the necessity of further research to attain models that are truly robust and reliable.
dc.description.statementofresponsibility: by Logan Engstrom
dc.format.extent: 149 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: Understanding the landscape of adversarial robustness
dc.type: Thesis
dc.description.degree: M. Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1127640126
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
dspace.imported: 2019-11-22T00:02:46Z
mit.thesis.degree: Master
mit.thesis.department: EECS
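
The abstract above refers to perturbations bounded in ℓ_p norms under a threat model in which the attacker knows the full model and therefore has gradient access. As a purely illustrative sketch, not code taken from the thesis, the following PyTorch snippet constructs an ℓ_∞-bounded adversarial example using the standard fast gradient sign method; model, loss_fn, x, y, and epsilon are hypothetical placeholders for a differentiable classifier, a loss function, an input batch, its labels, and the perturbation budget.

    import torch

    def fgsm_attack(model, loss_fn, x, y, epsilon):
        # Illustrative sketch only: build an l_inf-bounded adversarial example
        # with the fast gradient sign method (FGSM). All arguments are
        # hypothetical placeholders, not artifacts from the thesis.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon in
        # the l_inf norm, then clip back to the valid image range [0, 1].
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

An attack of this form assumes the full-knowledge setting; the restricted threat models the abstract describes (perturbations not captured by ℓ_p bounds, limited model access, physical-world inputs) rule out such direct gradient access and call for different techniques.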

