Show simple item record

dc.contributor.advisor  Aleksander Mądry.  en_US
dc.contributor.author  Tsipras, Dimitris  en_US
dc.contributor.other  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.  en_US
dc.date.accessioned  2017-10-30T15:29:18Z
dc.date.available  2017-10-30T15:29:18Z
dc.date.copyright  2017  en_US
dc.date.issued  2017  en_US
dc.identifier.uri  http://hdl.handle.net/1721.1/112050
dc.description  Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.  en_US
dc.description  Cataloged from PDF version of thesis.  en_US
dc.description  Includes bibliographical references (pages 61-65).  en_US
dc.description.abstract  In this thesis, we study matrix scaling and balancing, which are fundamental problems in scientific computing with a long line of work dating back to the 1960s. We provide algorithms for both problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, run in time Õ(m log K log²(1/ε)), where ε is the amount of error we are willing to tolerate. Here, K represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever K is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results with a separate algorithm that uses an interior-point method and runs in time Õ(m³/²(log log K + log(1/ε))), which becomes Õ(m³/² log(1/ε)) for the case of matrix balancing and the doubly-stochastic variant of matrix scaling. In order to establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that, for the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient. This thesis is based on joint work with Michael B. Cohen, Aleksander Mądry, and Adrian Vladu.  en_US
dc.description.statementofresponsibility  by Dimitrios Tsipras.  en_US
dc.format.extent  77 pages  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582  en_US
dc.subject  Electrical Engineering and Computer Science.  en_US
dc.title  Faster algorithms for matrix scaling and balancing via convex optimization  en_US
dc.type  Thesis  en_US
dc.description.degree  S.M.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc  1006508897  en_US
atmire.cua.enabled  Author's first name on title page should be Dimitris as confirmed by MIT Registrar June 2017 graduation list.  en_US
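The abstract above concerns the matrix scaling problem: finding positive diagonal matrices that rescale a nonnegative matrix to prescribed row and column sums (doubly stochastic in the simplest variant). As a minimal illustration of the problem itself, and not of the thesis's second-order algorithms, here is a sketch of the classical first-order Sinkhorn iteration in Python; the function name and tolerances are our own choices for this example:

```python
import numpy as np

def sinkhorn_scale(A, eps=1e-8, max_iter=10000):
    """Find positive vectors x, y so that diag(x) @ A @ diag(y) is
    (approximately) doubly stochastic, via alternating row/column
    normalization. This is the classical baseline; the thesis develops
    much faster second-order methods for the same problem."""
    A = np.asarray(A, dtype=float)
    y = np.ones(A.shape[1])
    for _ in range(max_iter):
        # Normalize rows, then columns.
        x = 1.0 / (A @ y)
        y = 1.0 / (A.T @ x)
        B = (x[:, None] * A) * y[None, :]
        # Stop once every row and column sum is within eps of 1.
        err = max(np.abs(B.sum(axis=1) - 1).max(),
                  np.abs(B.sum(axis=0) - 1).max())
        if err <= eps:
            break
    return x, y, B
```

For strictly positive matrices (the K quasi-polynomial regime mentioned in the abstract) this iteration converges, but its iteration count degrades with the conditioning of the optimal scalings, which is the gap the thesis's Õ(m log K log²(1/ε))-time algorithms address.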

