Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks
Author(s)
Wei-Chen Wang
Advisor
Lizhong Zheng
Abstract
When designing deep neural networks (DNNs), the number of nodes in the hidden layers can have a profound impact on the performance of the model. The information carried by the nodes in each layer spans a subspace whose dimensionality is determined by the number of nodes and their linear dependency. This thesis focuses on highly compressed DNNs: networks with significantly fewer nodes in the last hidden layer than in the output layer. Each node in the last hidden layer is considered a feature function, and we study how the orthogonality of the feature functions changes throughout the training process. We first characterize how information is learned, stored, and updated in the DNN during training, and propose an algorithm that regulates the orthogonality of the feature functions before and during training. Our experiments on a high-dimensional Gaussian-mixture dataset reveal that the algorithm achieves higher orthogonality among the feature functions and accelerates network convergence. Orthogonalizing the feature functions enables us to approximate Newton's method via the gradient descent algorithm, so we can take advantage of the superior convergence properties of second-order optimization without directly computing the Hessian matrix.
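To make the idea concrete, the following is a minimal, hypothetical sketch (in PyTorch), not the thesis's actual algorithm: the activations of a narrow last hidden layer are treated as feature functions evaluated on a batch, and a penalty on the off-diagonal entries of their empirical Gram matrix is added to the training loss to encourage orthogonality during training. All names, layer sizes, and the penalty weight are illustrative assumptions.

# Illustrative sketch only: orthogonality regularization of last-hidden-layer features.
import torch
import torch.nn as nn

class CompressedNet(nn.Module):
    """Toy 'highly compressed' network: the last hidden layer (k features)
    is much narrower than the output layer (c classes)."""
    def __init__(self, d_in=20, k=4, c=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                  nn.Linear(64, k), nn.ReLU())
        self.head = nn.Linear(k, c)

    def forward(self, x):
        f = self.body(x)              # feature functions evaluated on the batch
        return self.head(f), f

def orthogonality_penalty(f):
    """Frobenius distance between the empirical Gram matrix of the
    centered, normalized features and the identity matrix."""
    f = f - f.mean(dim=0, keepdim=True)
    f = f / (f.norm(dim=0, keepdim=True) + 1e-8)
    gram = f.t() @ f                                  # k x k empirical correlations
    eye = torch.eye(gram.shape[0], device=gram.device)
    return ((gram - eye) ** 2).sum()

# One illustrative training step on synthetic data standing in for the
# high-dimensional Gaussian-mixture dataset.
model = CompressedNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(128, 20)
y = torch.randint(0, 16, (128,))
opt.zero_grad()
logits, feats = model(x)
loss = nn.functional.cross_entropy(logits, y) + 0.1 * orthogonality_penalty(feats)
loss.backward()
opt.step()

A penalty of this form keeps the empirical correlations between feature functions near zero, so updates on the output-layer weights act on nearly decoupled coordinates; this is only meant to mirror, loosely, the intuition behind approximating Newton's method with gradient descent described in the abstract.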
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology