Testing, Learning, and Optimization in High Dimensions
Author(s)
Gatmiry, Khashayar
Advisor
Jegelka, Stefanie
Kelner, Jonathan
Abstract
In this thesis we study two separate problems: (1) characterizing the sample complexity of testing the class of Determinantal Point Processes (DPPs), and (2) developing a new analysis of the optimization and generalization of deep neural networks beyond their linear approximation. For the first problem, we characterize the optimal sample complexity up to logarithmic factors by proving nearly matching upper and lower bounds. For the second problem, we propose a new regime for the parameters and training algorithm of a three-layer network model that goes beyond the Neural Tangent Kernel (NTK) approximation; as a result, we obtain a new data-dependent complexity measure that generalizes the NTK complexity measure introduced by [Arora et al., 2019a]. We show that, despite nonconvexity, a variant of stochastic gradient descent (SGD) converges to a good solution, for which we prove a novel generalization bound proportional to our complexity measure.
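To make the object of the first problem concrete, the following is a minimal sketch (not from the thesis) of a discrete L-ensemble DPP, the standard finite formulation in which a subset S of a ground set is drawn with probability P(S) = det(L_S) / det(L + I) for a symmetric positive semidefinite kernel L. The kernel and ground-set size below are arbitrary illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # illustrative ground-set size
B = rng.standard_normal((n, n))
L = B @ B.T                            # symmetric PSD kernel (arbitrary example)

Z = np.linalg.det(L + np.eye(n))       # normalizing constant det(L + I)

def dpp_prob(S):
    """Probability that the L-ensemble DPP draws exactly the subset S."""
    idx = list(S)
    if not idx:                        # det of the empty submatrix is 1
        return 1.0 / Z
    L_S = L[np.ix_(idx, idx)]          # principal submatrix indexed by S
    return np.linalg.det(L_S) / Z

# Sanity check: probabilities over all 2^n subsets sum to 1,
# reflecting the identity sum_S det(L_S) = det(L + I).
total = sum(dpp_prob(S)
            for r in range(n + 1)
            for S in itertools.combinations(range(n), r))
print(f"sum over all subsets = {total:.6f}")   # ~1.000000
```

A distribution-testing question over this class asks how many i.i.d. sampled subsets are needed to decide whether an unknown distribution over subsets is a DPP or far from every DPP; the thesis pins down this sample complexity up to logarithmic factors.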
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology