DSpace@MIT

Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank

Author(s)
Loveland, Donald; Wu, Xinyi; Zhao, Tong; Koutra, Danai; Shah, Neil; Ju, Mingxuan
Download: 3696410.3714904.pdf (1.261 MB)
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Collaborative Filtering (CF) methods dominate real-world recommender systems given their ability to learn high-quality, sparse ID-embedding tables that effectively capture user preferences. These tables scale linearly with the number of users and items, and are trained to ensure high similarity between embeddings of interacted user-item pairs, while maintaining low similarity for non-interacted pairs. Despite their high performance, encouraging dispersion for non-interacted pairs necessitates expensive regularization (e.g., negative sampling), hurting runtime and scalability. Existing research tends to address these challenges by simplifying the learning process, either by reducing model complexity or sampling data, trading performance for runtime. In this work, we move beyond model-level modifications and study the properties of the embedding tables under different learning strategies. Through theoretical analysis, we find that the singular values of the embedding tables are intrinsically linked to different CF loss functions. These findings are empirically validated on real-world datasets, demonstrating the practical benefits of higher stable rank -- a continuous version of matrix rank which encodes the distribution of singular values. Based on these insights, we propose an efficient warm-start strategy that regularizes the stable rank of the user and item embeddings. We show that stable rank regularization during early training phases can promote higher-quality embeddings, resulting in training speed improvements of up to 65.9%. Additionally, stable rank regularization can act as a proxy for negative sampling, allowing for performance gains of up to 21.2% over loss functions with small negative sampling ratios. Overall, our analysis unifies current CF methods under a new perspective -- their optimization of stable rank -- motivating a flexible regularization method that is easy to implement, yet effective at enhancing CF systems.
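The stable rank mentioned in the abstract has a simple closed form: sr(E) = ||E||_F² / σ_max(E)², i.e., the squared Frobenius norm over the squared spectral norm, which is large when the singular values of E are spread out. As a rough illustration only (this is not the authors' released code; the warm_start_loss helper and the weight lam are assumptions), a warm-start regularizer along these lines might look like:

```python
import torch

def stable_rank(E: torch.Tensor) -> torch.Tensor:
    """Stable rank sr(E) = ||E||_F^2 / sigma_max(E)^2.

    A continuous relaxation of matrix rank: it grows as the singular
    values of E become more evenly distributed.
    """
    s = torch.linalg.svdvals(E)          # singular values, descending order
    return (s ** 2).sum() / (s[0] ** 2)

def warm_start_loss(cf_loss: torch.Tensor,
                    user_emb: torch.Tensor,
                    item_emb: torch.Tensor,
                    lam: float = 0.1) -> torch.Tensor:
    # Hypothetical warm-start objective (names and weighting assumed):
    # subtract a stable-rank bonus so early optimization is pushed toward
    # higher-stable-rank user/item embedding tables.
    return cf_loss - lam * (stable_rank(user_emb) + stable_rank(item_emb))
```

Consistent with the abstract, such a term would be applied only during early training phases as a warm start (and as a stand-in for heavy negative sampling); the exact schedule and formulation are in the paper.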
Description
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
Date issued
2025-04-22
URI
https://hdl.handle.net/1721.1/164250
Department
Massachusetts Institute of Technology. Department of Chemical Engineering
Publisher
ACM, Proceedings of the ACM Web Conference 2025
Citation
Donald Loveland, Xinyi Wu, Tong Zhao, Danai Koutra, Neil Shah, and Mingxuan Ju. 2025. Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank. In Proceedings of the ACM on Web Conference 2025 (WWW '25). Association for Computing Machinery, New York, NY, USA, 436–449.
Version: Final published version
ISBN
979-8-4007-1274-6

Collections
  • MIT Open Access Articles
