dc.contributor.advisor | Aleksander Mądry. | en_US |
dc.contributor.author | Wei, Kuo-An Andy. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2020-09-15T22:02:47Z | |
dc.date.available | 2020-09-15T22:02:47Z | |
dc.date.copyright | 2020 | en_US |
dc.date.issued | 2020 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/127541 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020 | en_US |
dc.description | Cataloged from the official PDF of thesis. | en_US |
dc.description | Includes bibliographical references (pages 39-41). | en_US |
dc.description.abstract | Despite the remarkable success of deep neural networks on image classification tasks, they exhibit a surprising vulnerability to certain small worst-case perturbations, known as adversarial examples. Over the years, many theories have been proposed to explain this puzzling phenomenon. Recent work by Ilyas et al. offers a fresh take on the existence of adversarial examples: they are inevitable because natural data contain well-generalizing but non-robust features [14]. We build upon the "non-robust features" framework introduced by Ilyas et al. and present new observations on the properties of non-robust features. We showcase visualization techniques based on adversarial attacks that help build an intuitive understanding of non-robust features. Lastly, we propose a novel framework, adversarial transferability analysis, for analyzing the types of information present in non-robust features. | en_US |
dc.description.statementofresponsibility | by Kuo-An Andy Wei. | en_US |
dc.format.extent | 41 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Understanding non-robust features in image classification | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1193031569 | en_US |
dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2020-09-15T22:02:46Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |