Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator
Author(s)
Lee, Giho
Advisor
Kim, Jeehwan
Abstract
Despite transformative advances in artificial intelligence (AI), AI processing hardware has not matched the required speed and power efficiency, restricting the realization of AI's full potential and calling for innovation in AI hardware. The data-transmission bottleneck between memory and processor has been identified as the main source of poor computing speed and power efficiency. By embedding neural weights in hardware to minimize data transmission, non-volatile memory (NVM)-based in-memory computing has been expected to deliver several orders of magnitude improvement in speed and power efficiency. However, its practical implementation as next-generation AI hardware has not been successful due to non-idealities in NVMs, including instability, poor state resolution, challenging programming, and system-on-a-chip (SoC) incompatibility. This thesis introduces the ultra-accurate and ultra-robust geometrically programmed nano-resistor (GPNR), which overcomes these NVM non-idealities and enables a commercial AI accelerator based on analog in-memory computing. State-of-the-art 6-bit conductance-state resolution and 8-bit stability of the nano-resistor were realized through channel-geometry optimization and a thermodynamically stable material, while the SoC-incompatible programming required by NVM devices is eliminated. To evaluate computing performance, experimental vector-matrix multiplication (VMM) operations were performed, demonstrating 5-bit-accurate operation with a 28x28 GPNR array without selectors. Finally, an AI inference simulation was performed on a simplified 5x5 cropped MNIST digit-image classification task. A GPNR-based final classification layer demonstrates 91.0 % accuracy, comparable to the software limit of 93.2 %. The outcomes of this research not only bolster the feasibility of GPNR technology in practical applications but also highlight the potential for future advancements in AI accelerators that can fully harness the capabilities of analog in-memory computing.
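As an illustration of the analog in-memory VMM principle described in the abstract, the following minimal Python sketch models an idealized crossbar: weights are mapped to quantized conductances, inputs are applied as voltages, and each column current is the sum given by Ohm's and Kirchhoff's laws. The conductance range, quantization mapping, and read voltages are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

# Minimal sketch of analog in-memory vector-matrix multiplication (VMM).
# All parameters below (conductance range, 6-bit mapping, read voltages)
# are hypothetical assumptions for demonstration, not values from the thesis.

G_MIN, G_MAX = 1e-6, 1e-4   # assumed conductance range in siemens
N_STATES = 2 ** 6           # 6-bit conductance-state resolution (assumed mapping)

def weights_to_conductance(w):
    """Map normalized weights in [0, 1] to quantized conductance values."""
    levels = np.round(w * (N_STATES - 1)) / (N_STATES - 1)  # quantize to 6 bits
    return G_MIN + levels * (G_MAX - G_MIN)

def analog_vmm(voltages, conductances):
    """Column currents via Ohm's law summed by Kirchhoff's current law: I = G^T V."""
    return conductances.T @ voltages

rng = np.random.default_rng(0)
weights = rng.random((28, 28))        # 28x28 array, mirroring the abstract's array size
G = weights_to_conductance(weights)   # "program" each cross-point conductance
v_in = rng.random(28) * 0.2           # assumed read voltages (volts)

i_out = analog_vmm(v_in, G)           # one output current per column
print("output currents (A):", i_out[:4], "...")
```

The sketch only captures the ideal mapping; device-level effects such as selector-free sneak paths or conductance drift, which the thesis addresses experimentally, are not modeled here.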
Date issued
2024-05
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology