Show simple item record

dc.contributor.author: Frogner, C
dc.contributor.author: Solomon, J
dc.contributor.author: Mirzazadeh, F
dc.date.accessioned: 2021-11-08T17:43:35Z
dc.date.available: 2021-11-08T17:43:35Z
dc.date.issued: 2019-05
dc.identifier.uri: https://hdl.handle.net/1721.1/137728
dc.description.abstract: © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions. Here we consider an alternative, which embeds data as discrete probability distributions in a Wasserstein space, endowed with an optimal transport metric. Wasserstein spaces are much larger and more flexible than Euclidean spaces, in that they can successfully embed a wider variety of metric structures. We exploit this flexibility by learning an embedding that captures semantic information in the Wasserstein distance between embedded distributions. We examine empirically the representational capacity of our learned Wasserstein embeddings, showing that they can embed a wide variety of metric structures with smaller distortion than an equivalent Euclidean embedding. We also investigate an application to word embedding, demonstrating a unique advantage of Wasserstein embeddings: We can visualize the high-dimensional embedding directly, since it is a probability distribution on a low-dimensional space. This obviates the need for dimensionality reduction techniques like t-SNE for visualization. [en_US]
dc.language.iso: en
dc.relation.isversionof: https://openreview.net/group?id=ICLR.cc/2019/Conference#accepted-poster-papers [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Learning embeddings into entropic Wasserstein spaces [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Frogner, C, Solomon, J and Mirzazadeh, F. 2019. "Learning embeddings into entropic Wasserstein spaces." 7th International Conference on Learning Representations, ICLR 2019.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: MIT-IBM Watson AI Lab
dc.relation.journal: 7th International Conference on Learning Representations, ICLR 2019 [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-01-26T17:28:05Z
dspace.orderedauthors: Frogner, C; Solomon, J; Mirzazadeh, F [en_US]
dspace.date.submission: 2021-01-26T17:28:14Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]

