Show simple item record

dc.contributor.author: Vanderbei, Robert
dc.contributor.author: Lin, Kevin
dc.contributor.author: Liu, Han
dc.contributor.author: Wang, Lie
dc.date.accessioned: 2017-03-17T22:54:33Z
dc.date.available: 2017-03-17T22:54:33Z
dc.date.issued: 2016-05
dc.date.submitted: 2013-12
dc.identifier.issn: 1867-2949
dc.identifier.issn: 1867-2957
dc.identifier.uri: http://hdl.handle.net/1721.1/107484
dc.description.abstract: We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to obtain a perfect recovery is more restrictive than the corresponding condition if using the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representation, such as interior point methods (IPMs), are expected to have computational advantages for the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as ℓ1_ℓs and Mirror Prox regardless of the sparsity level or problem size.
dc.description.sponsorship: National Science Foundation (U.S.) (Grant DMS-1005539)
dc.publisher: Springer Berlin Heidelberg
dc.relation.isversionof: http://dx.doi.org/10.1007/s12532-016-0105-y
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: Springer Berlin Heidelberg
dc.title: Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods
dc.type: Article
dc.identifier.citation: Vanderbei, Robert, Kevin Lin, Han Liu, and Lie Wang. “Revisiting Compressed Sensing: Exploiting the Efficiency of Simplex and Sparsification Methods.” Mathematical Programming Computation 8, no. 3 (May 9, 2016): 253–269.
dc.contributor.department: Massachusetts Institute of Technology. Department of Mathematics
dc.contributor.mitauthor: Wang, Lie
dc.relation.journal: Mathematical Programming Computation
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2017-02-02T15:20:50Z
dc.language.rfc3066: en
dc.rights.holder: Springer-Verlag Berlin Heidelberg and The Mathematical Programming Society
dspace.orderedauthors: Vanderbei, Robert; Lin, Kevin; Liu, Han; Wang, Lie
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0003-3582-8898
mit.license: OPEN_ACCESS_POLICY
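The abstract notes that sparse recovery can be posed as a linear program, and that Kronecker-structured sensing matrices (A = B ⊗ C) are central to the KCS formulation. As a minimal illustration only (a scipy-based sketch, not the authors' implementation or their customized parametric simplex / IPM codes), the standard basis-pursuit LP — minimize ‖x‖₁ subject to Ax = b — can be written as an LP via the split x = u − v with u, v ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Kronecker-structured sensing matrix A = B (x) C, as in the KCS setting.
B = rng.standard_normal((4, 8))
C = rng.standard_normal((4, 8))
A = np.kron(B, C)              # 16 x 64 measurement matrix
m, n = A.shape

# Very sparse ground-truth signal and its noiseless measurements.
x_true = np.zeros(n)
x_true[[3, 40]] = [1.5, -2.0]
b = A @ x_true

# Basis pursuit:  min ||x||_1  s.t.  A x = b.
# Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))

x_hat = res.x[:n] - res.x[n:]
# x_hat is feasible (A x_hat = b) and has minimal l1 norm; with enough
# measurements relative to the sparsity level, it typically equals x_true.
```

The split doubles the number of variables but keeps the problem a plain LP, which is what lets sparse-aware solvers such as IPMs exploit the sparsity of the KCS constraint matrix.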

