Show simple item record

dc.contributor.advisor	Gamarnik, David
dc.contributor.author	El Cheairi, Houssam
dc.date.accessioned	2026-04-21T18:11:45Z
dc.date.available	2026-04-21T18:11:45Z
dc.date.issued	2026-02
dc.date.submitted	2026-01-07T20:44:31.715Z
dc.identifier.uri	https://hdl.handle.net/1721.1/165523
dc.description.abstract	This thesis studies three problems in high-dimensional probability and statistics: (1) the densest subgraph problem in dense random graphs, where the goal is to estimate and recover the subgraph with the highest edge density; (2) the maximum cut problem on sparse random graphs, where the goal is to find a partition of the graph's vertices that maximizes the number of edges between the two sets; and (3) the theoretical feasibility of compression for multilayer perceptrons via pruning and quantization. In Chapter 2, we derive new asymptotics for the density of densest k-subgraphs in random graphs in the sublinear regime k = n^α, α ∈ (0, 1). We carry out this derivation for both the dense Erdős–Rényi model and Gaussian-weighted random graphs. Using the interpolation method, we also show that the densest subgraph density is invariant across sub-Gaussian edge distributions, thereby establishing universality. We leverage the asymptotics in the dense Erdős–Rényi setting to study the algorithmic landscape of the Hidden Clique model and show that it exhibits a form of the Overlap Gap Property (OGP), which constitutes an algorithmic barrier for a family of Markov chain Monte Carlo (MCMC) algorithms. In Chapter 3, we study the performance of a class of Low-Degree Polynomial (LDP) algorithms for the maximum cut problem (Max-Cut) on random Erdős–Rényi graphs. We show that tree-structured LDPs based on the Approximate Message Passing (AMP) framework are near-optimal for finding ground states of the Sherrington–Kirkpatrick (SK) model. Then, using an interpolation argument, we show that the same algorithms are optimal on sparse random graphs, effectively proving algorithmic universality for this class of LDPs. In Chapter 4, we provide a theoretical justification for the post-training compression of wide multilayer perceptrons (MLPs). By analyzing a randomized greedy algorithm akin to Optimal Brain Damage (OBD) via an interpolation method, we unify the treatment of unstructured compression (pruning and quantization) and structured pruning. Our results rigorously establish the existence of sparse and quantized subnetworks that maintain competitive performance. In particular, we show that pruning at linear sparsities is achievable for sufficiently wide MLPs. The derived bounds, which are free of data assumptions, formally demonstrate a tradeoff between an MLP's width and its compressibility.
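For reference, the two combinatorial objectives named in the abstract can be written out explicitly. This is a standard textbook formulation supplied for orientation, not notation taken from the thesis itself; in particular, normalizing the subgraph density by the number of vertex pairs is an assumption of this sketch.

	% Densest k-subgraph density of G = (V, E) in the sublinear regime:
	% the best edge density over all k-vertex subsets, with k = n^alpha.
	\rho_k(G) = \max_{S \subseteq V,\, |S| = k} \frac{e(S)}{\binom{k}{2}},
	\qquad k = n^{\alpha}, \; \alpha \in (0, 1)

	% Max-Cut of G: the largest number of edges crossing a bipartition of V,
	% written over spin assignments sigma in {-1, +1}^V.
	\mathrm{MC}(G) = \max_{\sigma \in \{\pm 1\}^{V}} \sum_{(u, v) \in E} \mathbf{1}\{\sigma_u \neq \sigma_v\}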
dc.publisher	Massachusetts Institute of Technology
dc.rights	In Copyright - Educational Use Permitted
dc.rights	Copyright retained by author(s)
dc.rights.uri	https://rightsstatements.org/page/InC-EDU/1.0/
dc.title	Interpolation Methods in Random Optimization and Deep Learning
dc.type	Thesis
dc.description.degree	Ph.D.
dc.contributor.department	Massachusetts Institute of Technology. Operations Research Center
dc.contributor.department	Sloan School of Management
mit.thesis.degree	Doctoral
thesis.degree.name	Doctor of Philosophy

