DSpace@MIT
Block sparsity and weight initialization in neural network pruning

Author(s)
Siswanto, Arlene Elizabeth.
Download: 1251801573-MIT.pdf (14.64 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Michael Carbin.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, available through the URL provided: http://dspace.mit.edu/handle/1721.1/7582
Abstract
Block sparsity imposes structural constraints on the weight patterns of sparse neural networks. The structure of sparsity has been shown to affect the efficiency of sparse computation in the libraries, kernels, and hardware commonly used in machine learning. Much work in the pruning literature has focused on the unstructured pruning of individual weights, which has been shown to reduce the memory footprint of a network but cannot achieve the computational speedups that have become increasingly coveted as neural networks become deeper and more complex. At the opposite end of the granularity spectrum, neuron pruning and channel pruning are unable to reach the same level of sparsity as unstructured pruning without compromising accuracy. Block-sparse pruning is a middle ground between these two extremes, with the potential for pruning to greater sparsities while still being amenable to acceleration. Our fine-tuning experiments demonstrate that block-sparse pruning offers a tradeoff between granularity and accuracy: increasing block size results in a gradual decrease in accuracy. Our weight rewinding experiments show that increasing block size decreases the maximum sparsity obtainable when pruning a network early in training. Finally, we make the surprising observation that randomly reinitializing the pruned network structure results in the same accuracy regardless of block size.
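
To make the idea of block-sparse pruning concrete, below is a minimal illustrative sketch in Python/NumPy. It is not the thesis's implementation: the square block shape, the L1-norm block-scoring rule, and the function name are assumptions made for the example.

import numpy as np

def block_sparse_prune(weights, block_size, sparsity):
    """Illustrative sketch: zero out the fraction `sparsity` of blocks with
    the smallest L1 norm (an assumed scoring rule, not the thesis's method)."""
    rows, cols = weights.shape
    assert rows % block_size == 0 and cols % block_size == 0
    # View the matrix as a grid of (block_size x block_size) blocks.
    blocks = weights.reshape(rows // block_size, block_size,
                             cols // block_size, block_size)
    # Score each block by the sum of absolute weights it contains.
    scores = np.abs(blocks).sum(axis=(1, 3))
    # Zero the k lowest-scoring blocks, keep the rest.
    k = int(round(scores.size * sparsity))
    threshold = np.sort(scores, axis=None)[k] if k < scores.size else np.inf
    mask = (scores >= threshold).astype(weights.dtype)
    # Broadcast the per-block mask back onto the individual weights.
    return (blocks * mask[:, None, :, None]).reshape(rows, cols)

# Example: prune 75% of the 4x4 blocks of a 16x16 weight matrix.
w = np.random.randn(16, 16)
pruned = block_sparse_prune(w, block_size=4, sparsity=0.75)
print(np.count_nonzero(pruned) / pruned.size)  # roughly 0.25 of weights remain

Setting block_size=1 recovers unstructured magnitude pruning of individual weights, while very large blocks approach the coarse granularity of neuron or channel pruning; the block size is the knob that trades off granularity against hardware-friendly structure.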
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2021
 
Cataloged from the official PDF of the thesis.
 
Includes bibliographical references (pages 69-72).
 
Date issued
2021
URI
https://hdl.handle.net/1721.1/130708
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
