
dc.contributor.author: Indyk, Piotr
dc.date.accessioned: 2021-01-14T20:01:11Z
dc.date.available: 2021-01-14T20:01:11Z
dc.date.issued: 2019-12
dc.identifier.issn: 1049-5258
dc.identifier.uri: https://hdl.handle.net/1721.1/129423
dc.description.abstract: We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an n × d matrix A and a parameter k, compute a rank-k matrix A' that minimizes the approximation loss ||A - A'||_F. The algorithm uses a training set of input matrices in order to optimize its performance. Specifically, some of the most efficient approximate algorithms for computing low-rank approximations proceed by computing a projection SA, where S is a sparse random m × n “sketching matrix”, and then performing the singular value decomposition of SA. We show how to replace the random matrix S with a “learned” matrix of the same sparsity to reduce the error. Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix S, sometimes by one order of magnitude. We also study mixed matrices where only some of the rows are trained and the remaining ones are random, and show that such matrices still offer improved performance while retaining worst-case guarantees. Finally, to understand the theoretical aspects of our approach, we study the special case of m = 1. In particular, we give an approximation algorithm for minimizing the empirical loss, with approximation factor depending on the stable rank of matrices in the training set. We also show generalization bounds for the sketch matrix learning problem. [en_US]
dc.description.sponsorship: National Science Foundation (U.S.). Transdisciplinary Research in Principles of Data Science (Award 1740751) [en_US]
dc.language.iso: en
dc.publisher: Morgan Kaufmann Publishers [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Neural Information Processing Systems (NIPS) [en_US]
dc.title: Learning-based low-rank approximations [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Indyk, Piotr et al. “Learning-based low-rank approximations.” Advances in Neural Information Processing Systems, 32 (December 2019) © 2019 The Author(s) [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: Advances in Neural Information Processing Systems [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2020-12-18T16:31:18Z
dspace.orderedauthors: Indyk, P; Vakilian, A; Yuan, Y [en_US]
dspace.date.submission: 2020-12-18T16:31:22Z
mit.journal.volume: 32 [en_US]
mit.license: PUBLISHER_POLICY
mit.metadata.status: Complete
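For context, the scheme described in the abstract (compute SA with an m × n sketching matrix S, take the SVD of SA, and keep the best rank-k approximation of A within the resulting subspace) can be sketched in a few lines of NumPy. The snippet below illustrates only the random-sketch baseline: the function name sketch_lowrank, the dense Gaussian S, and the toy data are illustrative assumptions, not the authors' code; the paper's contribution is to learn the sparse S from training matrices instead of drawing it at random.

import numpy as np

def sketch_lowrank(A, S, k):
    # Sketch-and-solve rank-k approximation as outlined in the abstract:
    # form SA, take an orthonormal basis V of its row space via SVD, then
    # return the best rank-k approximation of A within the span of V.
    _, _, Vt = np.linalg.svd(S @ A, full_matrices=False)   # SA is m x d
    V = Vt.T                                               # d x m basis of rowspace(SA)
    U, s, Wt = np.linalg.svd(A @ V, full_matrices=False)   # SVD of the projected matrix AV
    return (U[:, :k] * s[:k]) @ Wt[:k] @ V.T               # rank-k approximation of A

# Toy comparison against the optimal rank-k error given by the truncated SVD.
rng = np.random.default_rng(0)
n, d, k, m = 500, 100, 10, 30
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d)) + 0.05 * rng.standard_normal((n, d))
S = rng.standard_normal((m, n)) / np.sqrt(m)   # dense Gaussian stand-in for the sparse sketch
U, s, Vt = np.linalg.svd(A, full_matrices=False)
best = (U[:, :k] * s[:k]) @ Vt[:k]
print("sketched loss:", np.linalg.norm(A - sketch_lowrank(A, S, k)))
print("optimal  loss:", np.linalg.norm(A - best))

In the learned setting, the random draw of S would be replaced by a matrix of the same sparsity pattern whose nonzero values are optimized over a training set of matrices, which is the replacement studied in the paper.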

