dc.contributor.advisor    Chandrakasan, Anantha P.
dc.contributor.author     Lee, Kyungmi
dc.date.accessioned       2024-08-21T18:58:24Z
dc.date.available         2024-08-21T18:58:24Z
dc.date.issued            2024-05
dc.date.submitted         2024-07-10T13:01:41.321Z
dc.identifier.uri         https://hdl.handle.net/1721.1/156346
dc.description.abstract   As deep neural networks (DNNs) are widely adopted for high-stakes applications that process sensitive private data and make critical decisions, security concerns about user data and DNN models are growing. In particular, hardware-level vulnerabilities can be exploited to undermine the confidentiality and integrity that those applications require. Yet conventional hardware designs for DNN acceleration focus largely on improving throughput, energy efficiency, and area efficiency, while hardware-level security solutions remain far less well understood. This thesis investigates memory security for DNN accelerators under a threat model in which the off-chip main memory cannot be trusted.

The first part of this thesis illustrates the vulnerability of sparse DNNs to fault injection on their model parameters. It presents SparseBFA, an algorithm that identifies the most vulnerable bits among the parameters of a sparse DNN. SparseBFA shows that a victim DNN is highly susceptible to a small number of bit flips in the coordinates of its sparse weight matrices: flipping fewer than 0.00005% of the bits in the total parameter memory footprint suffices.

Second, this thesis proposes SecureLoop, a design space exploration framework for secure DNN accelerators that support a trusted execution environment (TEE). In such accelerators, cryptographic operations are tightly coupled with the data movement pattern, which complicates the mapping of DNN workloads. SecureLoop addresses this mapping challenge with an analytical model that captures the impact of authentication block assignments and a simulated annealing algorithm that performs cross-layer optimization. The optimal mappings identified by SecureLoop are up to 33% faster and up to 50% better in energy-delay product than those found by conventional mapping algorithms.

Finally, this thesis demonstrates the implementation of a secure DNN accelerator targeting resource-constrained edge and mobile devices. The design addresses the implementation-level challenges of supporting a TEE and achieves low overheads: less than 4% performance slowdown, 16.5% higher energy per multiply-and-accumulate operation, and 8.1% additional accelerator area.
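To make the coordinate-flip vulnerability concrete, the following is a minimal Python sketch of a single-bit fault injected into the column coordinates of a CSR sparse weight matrix. It assumes a SciPy CSR representation; the target entry, bit position, and the modulo safeguard are arbitrary illustrative choices, not the SparseBFA search procedure, which instead identifies the most damaging bits.

    import numpy as np
    from scipy.sparse import random as sparse_random

    # Small random sparse weight matrix in CSR form. In CSR, `indices`
    # stores the column coordinate of each nonzero; these coordinates
    # are the attack surface highlighted by the thesis.
    rng = np.random.default_rng(0)
    w = sparse_random(64, 64, density=0.05, format="csr", random_state=0)

    def flip_bit(value, bit):
        """Flip one bit of an integer coordinate (toy fault injection)."""
        return value ^ (1 << bit)

    # Corrupting a single bit of one column index silently reroutes a
    # weight to a different input neuron, changing the layer's output.
    target, bit = 3, 4            # arbitrary nonzero entry and bit position
    corrupted = w.copy()
    corrupted.indices[target] = flip_bit(int(corrupted.indices[target]), bit) % w.shape[1]

    # One flipped coordinate already perturbs the matrix-vector product.
    x = rng.standard_normal(64)
    print(np.abs(w @ x - corrupted @ x).max())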
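The abstract also mentions SecureLoop's simulated annealing search over authentication block assignments. Below is a generic simulated-annealing skeleton of that kind: the per-layer block-size mapping and the cost() function are hypothetical stand-ins for SecureLoop's actual analytical model of cryptographic and off-chip traffic overheads.

    import math
    import random

    def anneal(initial, neighbor, cost, t0=1.0, t_min=1e-3, alpha=0.95, iters=50):
        """Generic simulated annealing: worse candidates are accepted with
        probability exp(-delta / T), letting the search escape local minima
        in the mapping space before the temperature cools."""
        state, c = initial, cost(initial)
        best, best_c = state, c
        t = t0
        while t > t_min:
            for _ in range(iters):
                cand = neighbor(state)
                delta = cost(cand) - c
                if delta < 0 or random.random() < math.exp(-delta / t):
                    state, c = cand, c + delta
                    if c < best_c:
                        best, best_c = state, c
            t *= alpha
        return best, best_c

    # Hypothetical stand-ins: a mapping is one authentication block size per
    # layer; the toy cost has its optimum near block size 3 and merely mimics
    # a latency/energy trade-off, not the thesis's analytical model.
    LAYERS = 4

    def neighbor(m):
        m = list(m)
        i = random.randrange(LAYERS)
        m[i] = max(1, m[i] + random.choice([-1, 1]))
        return tuple(m)

    def cost(m):
        return sum(abs(b - 3) + 0.1 * b for b in m)

    best, best_cost = anneal((1,) * LAYERS, neighbor, cost)
    print(best, best_cost)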
dc.publisher              Massachusetts Institute of Technology
dc.rights                 Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights                 Copyright retained by author(s)
dc.rights.uri             https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title                  Towards Secure Machine Learning Acceleration: Threats and Defenses Across Algorithms, Architecture, and Circuits
dc.type                   Thesis
dc.description.degree     Ph.D.
dc.contributor.department Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree         Doctoral
thesis.degree.name        Doctor of Philosophy

