| dc.contributor.advisor | Gamarnik, David | |
| dc.contributor.author | El Cheairi, Houssam | |
| dc.date.accessioned | 2026-04-21T18:11:45Z | |
| dc.date.available | 2026-04-21T18:11:45Z | |
| dc.date.issued | 2026-02 | |
| dc.date.submitted | 2026-01-07T20:44:31.715Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/165523 | |
| dc.description.abstract | This thesis studies three problems in high-dimensional probability and statistics: (1) the densest subgraph problem in dense random graphs, where the goal is to estimate and recover the subgraph with the highest edge density; (2) the maximum cut problem on sparse random graphs, where the goal is to find a partition of the graph’s vertices that maximizes the number of edges between the two sets; and (3) the theoretical feasibility of compression for multilayer perceptrons via pruning and quantization. In Chapter 2, we derive new asymptotics for the density of densest k-subgraphs in random graphs in the sublinear regime k = n^α, α ∈ (0, 1). Our derivation covers both the dense Erdős–Rényi model and Gaussian-weighted random graphs. We also show, using the interpolation method, that the densest subgraph density is distributionally invariant across sub-Gaussian distributions, thus establishing universality. We leverage the asymptotics in the dense Erdős–Rényi setting to study the algorithmic landscape of the Hidden Clique model, and show that it exhibits a form of the Overlap Gap Property (OGP), which constitutes an algorithmic barrier for a family of Markov chain Monte Carlo (MCMC) algorithms. In Chapter 3, we study the performance of a class of Low-Degree Polynomial (LDP) algorithms for the maximum cut problem (Max-Cut) on random Erdős–Rényi graphs. We show that tree-structured LDPs based on the Approximate Message Passing (AMP) framework are near-optimal for finding ground states of the Sherrington–Kirkpatrick (SK) model. Then, using an interpolation argument, we show optimality of the same algorithms on sparse random graphs. This effectively proves algorithmic universality for this class of LDPs. In Chapter 4, we provide a theoretical justification for the post-training compression of wide multilayer perceptrons (MLPs).
By analyzing a randomized greedy algorithm akin to Optimal Brain Damage (OBD) via an interpolation method, we unify the treatment of unstructured compression (pruning and quantization) and structured pruning. Our results rigorously establish the existence of sparse and quantized subnetworks that maintain competitive performance. In particular, we show that pruning at linear sparsities is achievable for sufficiently wide MLPs. The derived bounds, which are free of data assumptions, formally exhibit a tradeoff between an MLP’s width and its compressibility. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Interpolation Methods in Random Optimization and Deep Learning | |
| dc.type | Thesis | |
| dc.description.degree | Ph.D. | |
| dc.contributor.department | Massachusetts Institute of Technology. Operations Research Center | |
| dc.contributor.department | Sloan School of Management | |
| mit.thesis.degree | Doctoral | |
| thesis.degree.name | Doctor of Philosophy | |