Theses - Dept. of Mathematics
https://hdl.handle.net/1721.1/7604
2020-05-26T23:54:38Z
https://hdl.handle.net/1721.1/123423
On the v₁-periodicity of the Moore space
Panchev, Lyuboslav (Lyuboslav Nikolaev)
We present progress toward verifying a long-standing conjecture of Mark Mahowald on the v₁-periodic component of the classical Adams spectral sequence for a Moore space M. The approach we follow was proposed by John Palmieri in his work on the stable category of A-comodules. We improve on Palmieri's work by working with the endomorphism ring End(M) of M, thus resolving some of the initial difficulties of his approach and formulating a conjecture of our own that would lead to Mahowald's formulation. We further improve upon a method for calculating differentials via double filtration, first used by Miller, and apply it to our problem.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (page 36).
2019-01-01T00:00:00Z
https://hdl.handle.net/1721.1/122890
Combinatorial incremental problems
Unda Surawski, Francisco T.
We study the class of incremental combinatorial optimization problems, where solutions are evaluated as they are built, as opposed to only measuring the performance of the final solution. Even though many of these problems have been studied, it has usually been in isolation, so the first objective of this document is to present them under the same framework. We present the incremental analog of several classic combinatorial problems, and present efficient algorithms to find approximate solutions to some of these problems, either improving on, or giving the first known, approximation guarantees. We present unifying techniques that work for general classes of incremental optimization problems, using fundamental properties of the underlying problem, such as monotonicity or convexity, and relying on algorithms for the non-incremental version of the problem as subroutines. In Chapter 2 we give an e-approximation algorithm for general incremental minimization problems, improving the best approximation guarantee for the incremental version of the shortest path problem. In Chapter 3 we show constant-factor approximation algorithms for several subclasses of incremental maximization problems, including an e/(2e-1)-approximation for the maximum weight matching problem and an e/(e+1)-approximation for submodular valuations. In Chapter 4 we introduce a discrete-concavity property that allows us to give constant approximation guarantees for several problems, including an asymptotic 0.85-approximation for incremental maximum flow with unit capacities, and a 0.9-approximation for incremental maximum cardinality matching, incremental maximum stable set in claw-free graphs, and incremental maximum size common independent set of two matroids.
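The incremental framework described in this abstract can be made concrete with a small sketch (the function names and the brute-force evaluation below are illustrative, not from the thesis): an ordering of the ground set is judged by the worst ratio, over all prefix sizes k, between the value of its first k elements and the best achievable k-element solution.

```python
from itertools import combinations

def incremental_ratio(order, items, f):
    """Worst-case ratio, over all prefix sizes k, between the value of the
    first k elements of `order` and the best possible k-element solution.
    OPT_k is found by brute force, so this is only for tiny instances."""
    worst = float("inf")
    for k in range(1, len(order) + 1):
        prefix_val = f(order[:k])
        opt_k = max(f(list(c)) for c in combinations(items, k))
        worst = min(worst, prefix_val / opt_k)
    return worst

# Toy instance with a monotone modular objective (sum of item values),
# where adding the largest values first is incrementally optimal.
items = [5, 3, 2, 8, 1]
f = sum
greedy_order = sorted(items, reverse=True)
print(incremental_ratio(greedy_order, items, f))  # → 1.0
```

For richer objectives (matchings, flows, submodular valuations), a single ordering generally cannot be optimal at every prefix size simultaneously, which is exactly why the constant-factor guarantees above are nontrivial.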
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 95-98).
2018-01-01T00:00:00Z
https://hdl.handle.net/1721.1/122272
Large-scale optimization methods for data-science applications
Lu, Haihao, Ph.D., Massachusetts Institute of Technology.
In this thesis, we present several contributions to large-scale optimization methods with applications in data science and machine learning. In the first part, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. We consider a general convex optimization problem in which we presume knowledge of a strict lower bound on the optimal value (as often arises in empirical risk minimization in machine learning). We introduce a new functional measure called the growth constant of the convex objective function, which measures how quickly the level sets grow relative to the function value and which plays a fundamental role in the complexity analysis. Based on this measure, we present new computational guarantees for both smooth and non-smooth convex optimization that improve existing computational guarantees in several ways, most notably when the initial iterate is far from the optimal solution set.

The usual approach to developing and analyzing first-order methods for convex optimization assumes that either the gradient of the objective function is uniformly continuous (in the smooth setting) or the objective function itself is uniformly continuous. However, in many settings, especially in machine learning applications, the convex function satisfies neither condition; examples include the Poisson linear inverse model, the D-optimal design problem, and the support vector machine problem. In the second part, we develop notions of relative smoothness, relative continuity, and relative strong convexity, each defined relative to a user-specified "reference function" (which should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth or relatively continuous with respect to a correspondingly fairly simple reference function. We extend the mirror descent algorithm to this new setting, with associated computational guarantees.
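The relative-smoothness notion can be stated concretely (this is the standard formulation from the relative-smoothness literature; the notation is our gloss, not quoted from the thesis): f is L-smooth relative to a reference function h if, for all x and y,

```latex
% f is L-smooth relative to h when the Bregman divergence of h
% upper-bounds the error of the first-order model of f:
\[
  f(y) \;\le\; f(x) + \langle \nabla f(x),\, y - x \rangle + L\, D_h(y, x),
  \qquad
  D_h(y, x) := h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .
\]
% Taking h(x) = \tfrac{1}{2}\|x\|_2^2 gives D_h(y,x) = \tfrac{1}{2}\|y-x\|_2^2,
% recovering ordinary L-smoothness.
```

Mirror descent then replaces the Euclidean proximal term in each step with D_h, which is why the reference function must be computationally tractable.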
The Gradient Boosting Machine (GBM), introduced by Friedman, is an extremely powerful supervised learning algorithm that is widely used in practice -- it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDDCup. In the third part, we propose the Randomized Gradient Boosting Machine (RGBM) and the Accelerated Gradient Boosting Machine (AGBM). RGBM leads to significant computational gains compared to GBM by using a randomization scheme to reduce the search in the space of weak learners. AGBM incorporates Nesterov's acceleration techniques into the design of GBM, and it is the first GBM-type algorithm with a theoretically justified accelerated convergence rate. We demonstrate the effectiveness of RGBM and AGBM over GBM in obtaining a model with good training and/or testing data fidelity.
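The randomization idea behind RGBM can be illustrated with a toy sketch (entirely our own construction, with hypothetical function names; the thesis works with general weak-learner classes, simplified here to 1-D threshold stumps for least squares): each boosting round scores only a random sample of candidate weak learners instead of scanning all of them.

```python
import random

def fit_rgbm(xs, ys, n_rounds=200, sample_size=5, lr=0.5, seed=0):
    """Randomized gradient boosting sketch for 1-D least-squares regression.
    Weak learners are threshold stumps x -> (a if x < t else b).  Instead of
    scanning every threshold each round (plain GBM), only a random sample of
    thresholds is scored -- the randomization idea behind RGBM."""
    rng = random.Random(seed)
    thresholds = sorted(set(xs))
    model = []                         # list of (t, a, b) stumps
    pred = [0.0] * len(ys)
    for _ in range(n_rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        best = None
        for t in rng.sample(thresholds, min(sample_size, len(thresholds))):
            lo = [r for x, r in zip(xs, resid) if x < t]
            hi = [r for x, r in zip(xs, resid) if x >= t]
            a = sum(lo) / len(lo) if lo else 0.0
            b = sum(hi) / len(hi) if hi else 0.0
            err = sum((r - (a if x < t else b)) ** 2
                      for x, r in zip(xs, resid))
            if best is None or err < best[0]:
                best = (err, t, a, b)
        _, t, a, b = best
        model.append((t, lr * a, lr * b))  # lr damps each step (shrinkage)
        pred = [p + (lr * a if x < t else lr * b) for x, p in zip(xs, pred)]
    return model

def predict(model, x):
    return sum(a if x < t else b for t, a, b in model)
```

With `sample_size` equal to the number of candidate thresholds this reduces to ordinary greedy boosting; shrinking the sample trades a little per-round progress for a proportionally cheaper search, which is the computational gain the abstract refers to.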
Thesis: Ph. D. in Mathematics and Operations Research, Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 203-211).
2019-01-01T00:00:00Z
https://hdl.handle.net/1721.1/122190
Point processes of representation theoretic origin
Cuenca, Cesar(Cesar A.)
There are two parts to this thesis. In the first part we compute the correlation functions of the 4-parameter family of BC type Z-measures. The result is given explicitly in terms of Gauss's hypergeometric function. The BC type Z-measures are point processes on the punctured positive real line. They arise as interpolations of the spectral measures of a distinguished family of spherical representations of certain infinite-dimensional symmetric spaces. In representation-theoretic terms, our result solves the problem of noncommutative harmonic analysis for the aforementioned family of representations. The second part of the text is based on joint work with Grigori Olshanski. We consider a new 5-parameter family of probability measures on the space of infinite point configurations of a discrete lattice. One of the 5 parameters is a quantization parameter, and the measures in the family are closely related to the BC type Z-measures. We prove that the new measures serve as orthogonality weights for symmetric function analogues of the multivariate q-Racah polynomials. Further, we show that the q-Racah symmetric functions (and their corresponding orthogonality measures) can be degenerated into symmetric function analogues of the big q-Jacobi, q-Meixner, and Al-Salam-Carlitz polynomials, thus giving rise to a partial q-Askey scheme hierarchy in the algebra of symmetric functions.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 191-195).
2019-01-01T00:00:00Z