The impact of platform vulnerabilities in AI systems
Author(s): Kim, Ashley (Ashley Hyowon)
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor(s): Howard Shrobe and Hamed Okhravi
Artificial intelligence has become increasingly prevalent over the past five years, even prompting a national strategy for artificial intelligence. With such widespread usage, it is critical that we understand the threats to AI security. Historically, research on security in AI systems has focused on vulnerabilities in the training algorithm (e.g., adversarial machine learning) or vulnerabilities in the training process (e.g., data poisoning attacks). However, there has been little research on how vulnerabilities in the platform on which the AI system runs can affect classification results. In this work, we study the impact of platform vulnerabilities on AI systems. We divide the work into two major parts: a concrete proof-of-concept attack that demonstrates the feasibility and impact of platform attacks, and a higher-level qualitative analysis that reasons about the impact of large vulnerability classes on AI systems. We demonstrate an attack on the Microsoft Cognitive Toolkit that achieves targeted misclassification by leveraging a memory safety vulnerability in a third-party library. Furthermore, we provide a general classification of system vulnerabilities and their impacts specifically on AI systems.
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2020. Cataloged from the PDF of thesis. Includes bibliographical references (pages 55-62).