Mixed-precision architecture for flexible neural network accelerators
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Model quantization delivers considerable reductions in latency and energy consumption while preserving accuracy. However, the optimal bitwidth varies from layer to layer. This thesis proposes a novel neural network accelerator architecture that supports multiple bit precisions for both weights and activations. The architecture is based on a fused spatial and temporal micro-architecture that maximizes both bandwidth efficiency and computational capability. Furthermore, this thesis presents an FPGA implementation of this mixed-precision architecture and discusses its ISA and the associated bitcode compiler. Finally, the performance of the system is evaluated by running state-of-the-art neural networks on a Virtex-9 UltraScale FPGA.
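The per-layer bitwidth idea in the abstract can be illustrated with a minimal sketch of symmetric uniform quantization, where each layer's weights are mapped to integer codes at a layer-specific precision. The function and variable names below are illustrative, not taken from the thesis:

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of x to a signed `bits`-bit code.

    Returns (integer codes, scale); dequantize with codes * scale.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    codes = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return codes, scale

# Different layers can use different bitwidths, trading accuracy
# for storage and compute cost:
rng = np.random.default_rng(0)
w = rng.standard_normal(16)        # hypothetical layer weights
q8, s8 = quantize(w, 8)            # a precision-sensitive layer
q4, s4 = quantize(w, 4)            # a more robust layer
```

A mixed-precision accelerator like the one described must execute multiply-accumulates over such codes at several bitwidths in the same datapath, which is what motivates the fused spatial/temporal micro-architecture.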
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 89-91).