Show simple item record

dc.contributor.author      Sochen, Nir
dc.contributor.author      Feldman, Dan
dc.contributor.author      Feigin-Almon, Micha
dc.date.accessioned        2016-12-02T17:45:23Z
dc.date.available          2016-12-02T17:45:23Z
dc.date.issued             2013-03
dc.identifier.issn         0924-9907
dc.identifier.issn         1573-7683
dc.identifier.uri          http://hdl.handle.net/1721.1/105528
dc.description.abstract    Signal and image processing have seen an explosion of interest in the last few years in a new form of signal/image characterization via the concept of sparsity with respect to a dictionary. An active field of research is dictionary learning: the representation of a given large set of vectors (e.g. signals or images) as linear combinations of only a few vectors (patterns). To further reduce the size of the representation, the combinations are usually required to be sparse, i.e., each signal is a linear combination of only a small number of patterns. This paper suggests a new computational approach to the problem of dictionary learning, known in computational geometry as coresets. A coreset for dictionary learning is a small, smart, non-uniform sample from the input signals such that the quality of any given dictionary with respect to the input can be approximated via the coreset. In particular, the optimal dictionary for the input can be approximated by learning on the coreset. Since the coreset is small, the learning is faster. Moreover, using merge-and-reduce, the coreset can be constructed for streaming signals that do not fit in memory and can also be computed in parallel. We apply our coresets for dictionary learning of images using the K-SVD algorithm and bound their size and approximation error analytically. Our simulations demonstrate gain factors of up to 60 in computational time with the same, and even better, performance. We also demonstrate our ability to perform computations on larger patches and high-definition images, where the traditional approach breaks down.  en_US
dc.publisher               Springer US  en_US
dc.relation.isversionof    http://dx.doi.org/10.1007/s10851-013-0431-x  en_US
dc.rights                  Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.  en_US
dc.source                  Springer US  en_US
dc.title                   Learning Big (Image) Data via Coresets for Dictionaries  en_US
dc.type                    Article  en_US
dc.identifier.citation     Feldman, Dan, Micha Feigin, and Nir Sochen. “Learning Big (Image) Data via Coresets for Dictionaries.” Journal of Mathematical Imaging and Vision 46, no. 3 (March 20, 2013): 276–291.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  en_US
dc.contributor.department  Massachusetts Institute of Technology. Media Laboratory  en_US
dc.contributor.mitauthor   Feldman, Dan
dc.contributor.mitauthor   Feigin-Almon, Micha
dc.relation.journal        Journal of Mathematical Imaging and Vision  en_US
dc.eprint.version          Author's final manuscript  en_US
dc.type.uri                http://purl.org/eprint/type/JournalArticle  en_US
eprint.status              http://purl.org/eprint/status/PeerReviewed  en_US
dc.date.updated            2016-08-18T15:43:40Z
dc.language.rfc3066        en
dc.rights.holder           Springer Science+Business Media New York
dspace.orderedauthors      Feldman, Dan; Feigin, Micha; Sochen, Nir  en_US
dspace.embargo.terms       N  en
dc.identifier.orcid        https://orcid.org/0000-0001-7649-9539
mit.license                PUBLISHER_POLICY  en_US
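
The abstract above describes learning a dictionary on a small, weighted, non-uniformly sampled subset (a coreset) of the input signals rather than on the full data. The sketch below illustrates that idea only and is not the construction analyzed in the paper: the squared-norm sampling distribution, the coreset size m, and the use of scikit-learn's MiniBatchDictionaryLearning in place of K-SVD are assumptions made to keep the example short and runnable.

```python
# Minimal coreset-style sketch (illustrative assumptions, not the paper's method).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

def coreset(X, m):
    """Return (sample, weights): m rows of X drawn with probability ~ ||x||^2."""
    p = (X ** 2).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    w = 1.0 / (m * p[idx])          # inverse-probability weights
    return X[idx], w

# Toy data: n signals (e.g. flattened image patches) of dimension d.
n, d, m = 10_000, 64, 500
X = rng.standard_normal((n, d))

C, w = coreset(X, m)
# Fold the weights into the sample (sqrt-scaling of rows) so an off-the-shelf,
# unweighted learner can be run on the small coreset instead of the full data.
Cw = C * np.sqrt(w)[:, None]
learner = MiniBatchDictionaryLearning(n_components=32, random_state=0)
D = learner.fit(Cw).components_     # learned dictionary, one atom per row
```

For data that still fits in memory, running the same learner on X directly gives a baseline against which the coreset-trained dictionary's reconstruction error and runtime can be compared.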

