dc.contributor.author: Oh, Sewoong
dc.contributor.author: Shah, Devavrat
dc.date.accessioned: 2016-02-01T18:21:28Z
dc.date.available: 2016-02-01T18:21:28Z
dc.date.issued: 2014
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/1721.1/101039
dc.description.abstract: Motivated by generating personalized recommendations using ordinal (or preference) data, we study the question of learning a mixture of MultiNomial Logit (MNL) models, a parameterized class of distributions over permutations, from partial ordinal or preference data (e.g., pairwise comparisons). Despite its long-standing importance across disciplines including social choice, operations research, and revenue management, little is known about this question. In the case of a single MNL model (no mixture), computationally and statistically tractable learning from pairwise comparisons is feasible. However, even learning a mixture of two MNL models is infeasible in general. Given this state of affairs, we seek conditions under which it is feasible to learn the mixture model in a computationally and statistically efficient manner. To that end, we present a sufficient condition as well as an efficient algorithm for learning mixed MNL models from partial preference/comparison data. In particular, a mixture of r MNL components over n objects can be learned using a number of samples that scales polynomially in n and r (concretely, n^3 r^3.5 log^4 n, with r << n^(2/7), when the model parameters are sufficiently incoherent). The algorithm has two phases: first, learn the pairwise marginals for each component using tensor decomposition; second, learn the model parameters for each component using RankCentrality, introduced by Negahban et al. In the process of proving these results, we obtain a generalization of the existing analysis of tensor decomposition to a more realistic regime where only partial information about each sample is available. [en_US]
dc.language.iso: en_US
dc.publisher: Neural Information Processing Systems Foundation [en_US]
dc.relation.isversionof: https://papers.nips.cc/paper/5225-learning-mixed-multinomial-logit-model-from-ordinal-data [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: MIT web domain [en_US]
dc.title: Learning Mixed Multinomial Logit Model from Ordinal Data [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Oh, Sewoong, and Devavrat Shah. "Learning Mixed Multinomial Logit Model from Ordinal Data." Advances in Neural Information Processing Systems 27 (NIPS 2014). [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.mitauthor: Shah, Devavrat [en_US]
dc.relation.journal: Advances in Neural Information Processing Systems (NIPS) [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Oh, Sewoong; Shah, Devavrat [en_US]
dc.identifier.orcid: https://orcid.org/0000-0003-0737-3259
mit.license: PUBLISHER_POLICY [en_US]
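
The abstract describes a two-phase algorithm: tensor decomposition to recover per-component pairwise marginals, then RankCentrality (Negahban et al.) to recover each component's MNL weights from those marginals. The sketch below is a minimal illustration of the second phase only, under toy assumptions: it samples pairwise comparisons from a single hypothetical MNL component and recovers its weights as the stationary distribution of an empirical comparison Markov chain. It is not the authors' implementation, and all names (n, w, wins, P, d_max) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                   # number of objects (toy setting)
w = rng.dirichlet(np.ones(n))           # hypothetical true MNL weights of one component

# Sample pairwise comparisons: under an MNL (Bradley-Terry) component,
# object i beats object j with probability w_i / (w_i + w_j).
num_samples = 200_000
wins = np.zeros((n, n))                 # wins[i, j] = number of times i beat j
for _ in range(num_samples):
    i, j = rng.choice(n, size=2, replace=False)
    if rng.random() < w[i] / (w[i] + w[j]):
        wins[i, j] += 1
    else:
        wins[j, i] += 1

# RankCentrality-style chain: the transition i -> j is proportional to the
# fraction of comparisons that j won against i; the MNL weights are then the
# stationary distribution of this chain.
d_max = n                               # normalization keeping each row substochastic
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and wins[i, j] + wins[j, i] > 0:
            P[i, j] = wins[j, i] / (wins[i, j] + wins[j, i]) / d_max
    P[i, i] = 1.0 - P[i].sum()          # self-loop mass keeps rows summing to 1

# Stationary distribution from the leading left eigenvector of P.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = np.abs(pi) / np.abs(pi).sum()

print("true weights     :", np.round(w, 3))
print("estimated weights:", np.round(pi, 3))
```

In the full mixed-MNL algorithm described in the abstract, the pairwise win statistics for each of the r components would instead be produced by the tensor-decomposition phase applied to the mixed comparison data, before this stationary-distribution step is run per component.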