dc.contributor.advisor | Jegelka, Stefanie | |
dc.contributor.advisor | Kelner, Jonathan | |
dc.contributor.author | Gatmiry, Khashayar | |
dc.date.accessioned | 2022-08-29T16:21:31Z | |
dc.date.available | 2022-08-29T16:21:31Z | |
dc.date.issued | 2022-05 | |
dc.date.submitted | 2022-06-21T19:25:24.953Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/144927 | |
dc.description.abstract | In this thesis we study two separate problems: (1) the sample complexity of testing the class of Determinantal Point Processes, and (2) a new analysis of the optimization and generalization of deep neural networks beyond their linear approximation. For the first problem, we characterize the optimal sample complexity up to logarithmic factors by proposing almost-matching upper and lower bounds. For the second problem, we propose a new regime for the parameters and the algorithm of a three-layer network model that goes beyond the Neural Tangent Kernel (NTK) approximation; as a result, we introduce a new data-dependent complexity measure that generalizes the NTK complexity measure introduced by [Arora et al., 2019a]. We show that, despite nonconvexity, a variant of stochastic gradient descent (SGD) converges to a good solution, for which we prove a novel generalization bound that is proportional to our complexity measure. | |
dc.publisher | Massachusetts Institute of Technology | |
dc.rights | In Copyright - Educational Use Permitted | |
dc.rights | Copyright MIT | |
dc.rights.uri | http://rightsstatements.org/page/InC-EDU/1.0/ | |
dc.title | Testing, Learning, and Optimization in High Dimensions | |
dc.type | Thesis | |
dc.description.degree | S.M. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
mit.thesis.degree | Master | |
thesis.degree.name | Master of Science in Electrical Engineering and Computer Science | |