Applications and limits of convex optimization
Author(s)
Hamilton, Linus
Advisor
Moitra, Ankur
Abstract
Every algorithmic learning problem becomes vastly more tractable when reduced to a convex program, yet few can be simplified this way. At the heart of this thesis are two hard problems with unexpected convex reformulations. The Paulsen problem, a longstanding open problem in operator theory, was recently resolved by Kwok et al. [40]. Using a convex program due to Barthe, we present a dramatically simpler proof with an accompanying efficient algorithm that also achieves a better bound. Next, we examine the related operator scaling problem, whose fastest known algorithm uses convex optimization in non-Euclidean space. We expose a fundamental obstruction to such techniques by proving that, under realistic noise conditions, hyperbolic space admits no analogue of Nesterov's accelerated gradient descent. Finally, we generalize Bresler's structure learning algorithm from Ising models to arbitrary graphical models. We compare our results to a recent convex programming reformulation of the same problem. Notably, in variants of the problem where one only receives partial samples, our combinatorial algorithm is almost unaffected, whereas the convex approach fails to get off the ground.
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Mathematics
Publisher
Massachusetts Institute of Technology