dc.contributor.advisor | Song Han. | en_US |
dc.contributor.author | Hafdi, Driss. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2020-03-24T15:36:16Z | |
dc.date.available | 2020-03-24T15:36:16Z | |
dc.date.copyright | 2019 | en_US |
dc.date.issued | 2019 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/124247 | |
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 | en_US |
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 89-91). | en_US |
dc.description.abstract | Model quantization provides considerable latency and energy consumption reductions while preserving accuracy. However, the optimal bitwidth reduction varies on a layer-by-layer basis. This thesis proposes a novel neural network accelerator architecture that handles multiple bit precisions for both weights and activations. The architecture is based on a fused spatial and temporal micro-architecture that maximizes both bandwidth efficiency and computational ability. Furthermore, this thesis presents an FPGA implementation of this new mixed-precision architecture and discusses the ISA and its associated bitcode compiler. Finally, the performance of the system is evaluated on a Virtex-9 UltraScale FPGA by running state-of-the-art neural networks. | en_US |
dc.description.statementofresponsibility | by Driss Hafdi. | en_US |
dc.format.extent | 91 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Mixed-precision architecture for flexible neural network accelerators | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1145118397 | en_US |
dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2020-03-24T15:36:12Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |