dc.contributor.advisor | Michael Carbin. | en_US |
dc.contributor.author | Siswanto, Arlene Elizabeth. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2021-05-24T19:52:33Z | |
dc.date.available | 2021-05-24T19:52:33Z | |
dc.date.copyright | 2021 | en_US |
dc.date.issued | 2021 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/130708 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021 | en_US |
dc.description | Cataloged from the official PDF of thesis. | en_US |
dc.description | Includes bibliographical references (pages 69-72). | en_US |
dc.description.abstract | Block sparsity imposes structural constraints on the weight patterns of sparse neural networks. The structure of sparsity has been shown to affect the efficiency of sparse computation in the libraries, kernels, and hardware commonly used in machine learning. Much work in the pruning literature has focused on the unstructured pruning of individual weights, which reduces the memory footprint of a network but cannot achieve the computational speedups that have become increasingly coveted as neural networks grow deeper and more complex. At the opposite end of granularity, neuron pruning and channel pruning are unable to reach the same level of sparsity as unstructured pruning without compromising accuracy. Block-sparse pruning is a middle ground between these two extremes, with the potential to prune to greater sparsities while remaining amenable to acceleration. Our fine-tuning experiments demonstrate that block-sparse pruning offers a tradeoff between granularity and accuracy; increasing the block size results in a gradual decrease in accuracy. Our weight rewinding experiments show that increasing the block size decreases the maximum sparsity obtainable when pruning a network early in training. Finally, we make the surprising observation that randomly reinitializing the pruned network structure results in the same accuracy regardless of block size. | en_US |
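dc.description.note | To make the block-sparse pruning described in the abstract concrete, the following is a minimal illustrative sketch (not the thesis's implementation) of block-level magnitude pruning on a single weight matrix: each block is scored by its L2 norm and the lowest-scoring fraction of blocks is zeroed. The function name block_sparse_prune, the block shape, and the target sparsity are hypothetical choices made for this example; the assumption that the matrix dimensions divide the block shape is a simplification.

import numpy as np

def block_sparse_prune(weights, block_shape=(4, 4), sparsity=0.5):
    # Zero out the fraction `sparsity` of blocks with the smallest L2 norm.
    # Assumes (for this sketch) that the matrix dims divide the block shape.
    rows, cols = weights.shape
    br, bc = block_shape
    assert rows % br == 0 and cols % bc == 0, "dims must divide block shape"

    # View the matrix as a grid of (rows//br) x (cols//bc) blocks.
    blocks = weights.reshape(rows // br, br, cols // bc, bc)
    # Score each block by its L2 norm (block-wise magnitude pruning).
    scores = np.sqrt((blocks ** 2).sum(axis=(1, 3)))

    # Prune the k lowest-scoring blocks.
    k = int(sparsity * scores.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(scores.flatten(), k - 1)[k - 1]
    mask = (scores > threshold).astype(weights.dtype)

    # Broadcast the per-block mask back to per-weight granularity.
    full_mask = np.repeat(np.repeat(mask, br, axis=0), bc, axis=1)
    return weights * full_mask

# Example: prune 75% of the 8x8 blocks in a 64x64 layer.
w = np.random.randn(64, 64)
pruned = block_sparse_prune(w, block_shape=(8, 8), sparsity=0.75)
print(f"Fraction of zero weights: {(pruned == 0).mean():.2f}")

With block_shape=(1, 1) this reduces to unstructured magnitude pruning, while larger blocks trade pruning granularity for structure that sparse kernels can exploit. | en_US |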
dc.description.statementofresponsibility | by Arlene Elizabeth Siswanto. | en_US |
dc.format.extent | 72 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Block sparsity and weight initialization in neural network pruning | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1251801573 | en_US |
dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2021-05-24T19:52:33Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |