dc.contributor.advisor | Kim, Jeehwan | |
dc.contributor.author | Lee, Giho | |
dc.date.accessioned | 2025-06-09T16:24:40Z | |
dc.date.available | 2025-06-09T16:24:40Z | |
dc.date.issued | 2024-05 | |
dc.date.submitted | 2024-06-13T16:48:34.436Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/159368 | |
dc.description.abstract | Despite transformative advances in artificial intelligence (AI), AI processing hardware has not matched the speed and power-efficiency requirements, restricting the realization of the full potential of AI and requiring innovation in AI hardware. The data transmission bottleneck between memory and processor has been identified as the main source of poor computing speed and power efficiency. By embedding neural weights in hardware to minimize data transmission, non-volatile memory (NVM)-based in-memory computing has been expected to deliver speed and power-efficiency improvements of several orders of magnitude. However, its practical implementation as a next-generation AI hardware has not been successful due to non-idealities in NVMs, including instability, poor state resolution, challenges in programming, and system-on-a-chip (SoC) incompatibility. This thesis introduces an ultra-accurate and ultra-robust geometrically programmed nano-resistor (GPNR) that can overcome NVM non-idealities and enable a commercial AI accelerator based on analog in-memory computing. State-of-the-art 6-bit conductance state resolution and 8-bit stability of the nano-resistor were realized through channel geometry optimization and a thermodynamically stable material, while the SoC-incompatible programming required by NVM devices is eliminated. To evaluate the computing performance, experimental vector-matrix multiplication (VMM) operations were performed, showing 5-bit accuracy with a 28x28 GPNR array without selectors. Finally, an AI inference simulation was performed with a simplified 5x5 cropped MNIST digit image classification task. The GPNR-based final classification layer demonstrates 91.0% accuracy, comparable to the software limit of 93.2%. The outcomes of this research not only bolster the feasibility of GPNR technology in practical applications but also highlight the potential for future advancements in AI accelerators that can fully harness the capabilities of analog in-memory computing. | |
dc.publisher | Massachusetts Institute of Technology | |
dc.rights | In Copyright - Educational Use Permitted | |
dc.rights | Copyright retained by author(s) | |
dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
dc.title | Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator | |
dc.type | Thesis | |
dc.description.degree | S.M. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Mechanical Engineering | |
mit.thesis.degree | Master | |
thesis.degree.name | Master of Science in Mechanical Engineering | |