Topics in Sparsity and Compression: From High-Dimensional Statistics to Overparametrized Neural Networks
Author(s)
Benbaki, Riade
Advisor
Mazumder, Rahul
Abstract
This thesis presents applications of sparsity in three different areas: covariance estimation in time-series data, linear regression with categorical variables, and neural network compression.
In the first chapter, motivated by problems in computational finance, we consider a framework for jointly learning time-varying covariance matrices under different structural assumptions (e.g., low rank, sparsity, or a combination of both). We propose novel algorithms for learning these covariance matrices simultaneously across all time blocks and show that this joint approach improves computational efficiency and performance across different tasks.
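To make the setup concrete, the sketch below is a rough illustration only, not the algorithms proposed in the chapter: it computes a sample covariance for each time block and soft-thresholds the off-diagonal entries to impose sparsity. The function and parameter names are hypothetical and the data is synthetic.

```python
# Minimal illustration of block-wise covariance estimation with a sparsity-inducing
# soft-threshold on off-diagonal entries (not the thesis algorithm).
import numpy as np

def blockwise_sparse_covariances(X, block_size, threshold):
    """X: (T, p) array of observations; returns one thresholded covariance per time block."""
    T, p = X.shape
    covs = []
    for start in range(0, T - block_size + 1, block_size):
        block = X[start:start + block_size]
        S = np.cov(block, rowvar=False)                # sample covariance of this block
        off = S - np.diag(np.diag(S))                  # leave the diagonal untouched
        off = np.sign(off) * np.maximum(np.abs(off) - threshold, 0.0)  # soft-threshold
        covs.append(np.diag(np.diag(S)) + off)
    return covs

# Synthetic example: 5 variables, 300 time points, blocks of 100
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
covs = blockwise_sparse_covariances(X, block_size=100, threshold=0.05)
print(len(covs), covs[0].shape)  # 3 blocks, each 5 x 5
```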
In the second chapter, we study the problem of linear regression with categorical variables, where each categorical variable can have a large number of levels. For statistical and interpretability reasons, we seek to reduce the number of levels by clustering them. To this end, we propose a new estimator and study its computational and statistical properties.
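The following sketch illustrates only the general idea of level reduction, not the proposed estimator: it fits a one-hot-encoded regression and then merges levels whose fitted coefficients are close, using off-the-shelf scikit-learn components. The data and all names are made up for illustration.

```python
# Illustrative level clustering: fit per-level coefficients, then group similar levels
# (a heuristic stand-in for a principled level-fusion estimator).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
levels = [f"level_{i}" for i in range(20)]                       # 20 original levels
cat = rng.choice(levels, size=500)
true_effect = {lv: (0.0 if i < 10 else 2.0) for i, lv in enumerate(levels)}  # 2 true groups
y = np.array([true_effect[c] for c in cat]) + 0.1 * rng.standard_normal(500)

X = pd.get_dummies(pd.Series(cat), dtype=float)                  # one-hot design matrix
fit = LinearRegression(fit_intercept=False).fit(X, y)

# Cluster the per-level coefficients into a small number of merged levels.
coefs = fit.coef_.reshape(-1, 1)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coefs)
merged = dict(zip(X.columns, groups))
print(merged)   # maps each original level to one of 2 merged levels
```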
In the third chapter, we explore the problem of pruning, or sparsifying, the weights of a neural network. Modern neural networks tend to have a large number of parameters, which makes their storage and deployment expensive, especially in resource-constrained environments. One solution is to compress the network by pruning or removing some parameters while maintaining performance comparable to that of the dense network. To achieve this, we propose a new optimization-based pruning algorithm and show that it achieves significantly better sparsity-accuracy trade-offs than existing pruning methods.
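For context, the sketch below shows a simple magnitude-pruning baseline, not the optimization-based algorithm proposed in the chapter: it zeroes out the smallest-magnitude weights of a PyTorch model to reach a target sparsity. The model and function names are hypothetical.

```python
# Baseline magnitude pruning: remove the smallest-magnitude weights globally.
import torch
import torch.nn as nn

def magnitude_prune(model, sparsity):
    """Set the smallest `sparsity` fraction of weights (by absolute value) to zero."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)     # global magnitude threshold
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:                            # prune weight matrices, keep biases
                p.mul_((p.abs() > threshold).float())
    return model

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.9)
zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"pruned {zeros / total:.0%} of the weights")
```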
Date issued
2023-06
Department
Massachusetts Institute of Technology. Operations Research Center
Publisher
Massachusetts Institute of Technology