
dc.contributor.author: Loveland, Donald
dc.contributor.author: Wu, Xinyi
dc.contributor.author: Zhao, Tong
dc.contributor.author: Koutra, Danai
dc.contributor.author: Shah, Neil
dc.contributor.author: Ju, Mingxuan
dc.date.accessioned: 2025-12-09T19:30:07Z
dc.date.available: 2025-12-09T19:30:07Z
dc.date.issued: 2025-04-22
dc.identifier.isbn: 979-8-4007-1274-6
dc.identifier.uri: https://hdl.handle.net/1721.1/164250
dc.description: WWW ’25, April 28–May 2, 2025, Sydney, NSW, Australia [en_US]
dc.description.abstract: Collaborative Filtering (CF) methods dominate real-world recommender systems given their ability to learn high-quality, sparse ID-embedding tables that effectively capture user preferences. These tables scale linearly with the number of users and items, and are trained to ensure high similarity between embeddings of interacted user-item pairs while maintaining low similarity for non-interacted pairs. Despite the high performance of these methods, encouraging dispersion among non-interacted pairs necessitates expensive regularization (e.g., negative sampling), which hurts runtime and scalability. Existing research tends to address these challenges by simplifying the learning process, either by reducing model complexity or by sampling data, trading performance for runtime. In this work, we move beyond model-level modifications and study the properties of the embedding tables under different learning strategies. Through theoretical analysis, we find that the singular values of the embedding tables are intrinsically linked to different CF loss functions. These findings are empirically validated on real-world datasets, demonstrating the practical benefits of higher stable rank, a continuous version of matrix rank that encodes the distribution of singular values. Based on these insights, we propose an efficient warm-start strategy that regularizes the stable rank of the user and item embeddings. We show that stable rank regularization during early training phases can promote higher-quality embeddings, yielding training speed improvements of up to 65.9%. Additionally, stable rank regularization can act as a proxy for negative sampling, allowing for performance gains of up to 21.2% over loss functions with small negative sampling ratios. Overall, our analysis unifies current CF methods under a new perspective, their optimization of stable rank, motivating a flexible regularization method that is easy to implement yet effective at enhancing CF systems. [en_US]
dc.publisher: ACM | Proceedings of the ACM Web Conference 2025 [en_US]
dc.relation.isversionof: https://doi.org/10.1145/3696410.3714904 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Association for Computing Machinery [en_US]
dc.title: Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Donald Loveland, Xinyi Wu, Tong Zhao, Danai Koutra, Neil Shah, and Mingxuan Ju. 2025. Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank. In Proceedings of the ACM on Web Conference 2025 (WWW '25). Association for Computing Machinery, New York, NY, USA, 436–449. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Chemical Engineering [en_US]
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2025-08-01T07:58:41Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-08-01T07:58:42Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
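
The abstract above hinges on stable rank, a continuous relaxation of matrix rank defined as srank(A) = ||A||_F^2 / ||A||_2^2, i.e., the sum of squared singular values divided by the largest squared singular value. As a rough illustration of the warm-start idea, here is a minimal PyTorch sketch; the function names, the `warmup_epochs` schedule, and the weight `lam` are illustrative assumptions, not the paper's exact formulation (see the DOI above for that).

```python
import torch

def stable_rank(E: torch.Tensor) -> torch.Tensor:
    # srank(E) = ||E||_F^2 / sigma_max(E)^2; differentiable through svdvals.
    sigma = torch.linalg.svdvals(E)              # singular values, descending order
    return (sigma ** 2).sum() / (sigma[0] ** 2).clamp_min(1e-12)

def warm_start_penalty(user_emb, item_emb, epoch, warmup_epochs=10, lam=0.1):
    # Illustrative schedule (hypothetical values): reward high stable rank of
    # both embedding tables only during the first `warmup_epochs` epochs.
    if epoch >= warmup_epochs:
        return user_emb.new_zeros(())            # no regularization after warm-start
    return -lam * (stable_rank(user_emb) + stable_rank(item_emb))

# Example usage with a stand-in CF loss:
user_emb = torch.randn(1000, 64, requires_grad=True)   # user embedding table
item_emb = torch.randn(500, 64, requires_grad=True)    # item embedding table
cf_loss = torch.tensor(0.0)                            # placeholder for, e.g., a BPR loss
loss = cf_loss + warm_start_penalty(user_emb, item_emb, epoch=0)
loss.backward()                                        # gradients flow into both tables
```

Because the penalty is the negated stable rank, minimizing the total loss pushes the singular-value mass of both tables to spread out, the same dispersion effect the abstract attributes to expensive negative sampling.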

