On practical robustness of machine learning systems
Author(s)
Ilyas, Andrew.
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Constantinos Daskalakis.
Abstract
We consider the importance of robustness in evaluating machine learning systems, and in particular systems involving deep learning. We consider these systems' vulnerability to adversarial examples--subtle, crafted perturbations to inputs which induce large changes in output. We show that these adversarial examples are not only a theoretical concern, by designing the first 3D adversarial objects, and by demonstrating that these examples can be constructed even when malicious actors have little power. We suggest a potential avenue for building robust deep learning models by leveraging generative models.
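The abstract does not name a specific attack construction; as a purely illustrative sketch, a gradient-based method such as FGSM (Goodfellow et al.) is one standard way such a perturbation can be computed. The model, input, label, and epsilon below are hypothetical placeholders, not taken from the thesis.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    # Illustrative adversarial perturbation: a small, crafted change to the
    # input x that tends to change the model's output. All arguments here
    # are assumed placeholders (any differentiable classifier will do).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()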
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 71-79).
Date issued
2018
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.