Show simple item record

dc.contributor.advisor: Aleksander Mądry [en_US]
dc.contributor.author: Wei, Kuo-An Andy [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.date.accessioned: 2020-09-15T22:02:47Z
dc.date.available: 2020-09-15T22:02:47Z
dc.date.copyright: 2020 [en_US]
dc.date.issued: 2020 [en_US]
dc.identifier.uri: https://hdl.handle.net/1721.1/127541
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020 [en_US]
dc.description: Cataloged from the official PDF of thesis. [en_US]
dc.description: Includes bibliographical references (pages 39-41). [en_US]
dc.description.abstract: Despite the remarkable success of deep neural networks on image classification tasks, they exhibit a surprising vulnerability to certain small worst-case perturbations, known as adversarial examples. Over the years, many theories have been proposed to explain this puzzling phenomenon. Recent work by Ilyas et al. offers a new perspective on the existence of adversarial examples: that they are inevitable due to certain well-generalizing but non-robust features present in the natural data [14]. We build upon the "non-robust features" framework introduced by Ilyas et al. and present new observations on the properties of non-robust features. We showcase visualization techniques based on adversarial attacks that help build an intuitive understanding of non-robust features. Lastly, we propose a novel framework, adversarial transferability analysis, for analyzing the types of information present in non-robust features. [en_US]
dc.description.statementofresponsibility: by Kuo-An Andy Wei. [en_US]
dc.format.extent: 41 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science [en_US]
dc.title: Understanding non-robust features in image classification [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.oclc: 1193031569 [en_US]
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science [en_US]
dspace.imported: 2020-09-15T22:02:46Z [en_US]
mit.thesis.degree: Master [en_US]
mit.thesis.department: EECS [en_US]

