| dc.contributor.author | Loveland, Donald | |
| dc.contributor.author | Wu, Xinyi | |
| dc.contributor.author | Zhao, Tong | |
| dc.contributor.author | Koutra, Danai | |
| dc.contributor.author | Shah, Neil | |
| dc.contributor.author | Ju, Mingxuan | |
| dc.date.accessioned | 2025-12-09T19:30:07Z | |
| dc.date.available | 2025-12-09T19:30:07Z | |
| dc.date.issued | 2025-04-22 | |
| dc.identifier.isbn | 979-8-4007-1274-6 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164250 | |
| dc.description | WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia | en_US |
| dc.description.abstract | Collaborative Filtering (CF) methods dominate real-world recommender systems given their ability to learn high-quality, sparse ID-embedding tables that effectively capture user preferences. These tables scale linearly with the number of users and items, and are trained to ensure high similarity between embeddings of interacted user-item pairs, while maintaining low similarity for non-interacted pairs. Despite their high performance, encouraging dispersion for non-interacted pairs necessitates expensive regularization (e.g., negative sampling), hurting runtime and scalability. Existing research tends to address these challenges by simplifying the learning process, either by reducing model complexity or sampling data, trading performance for runtime. In this work, we move beyond model-level modifications and study the properties of the embedding tables under different learning strategies. Through theoretical analysis, we find that the singular values of the embedding tables are intrinsically linked to different CF loss functions. These findings are empirically validated on real-world datasets, demonstrating the practical benefits of higher stable rank -- a continuous version of matrix rank which encodes the distribution of singular values. Based on these insights, we propose an efficient warm-start strategy that regularizes the stable rank of the user and item embeddings. We show that stable rank regularization during early training phases can promote higher-quality embeddings, resulting in training speed improvements of up to 65.9%. Additionally, stable rank regularization can act as a proxy for negative sampling, allowing for performance gains of up to 21.2% over loss functions with small negative sampling ratios. Overall, our analysis unifies current CF methods under a new perspective -- their optimization of stable rank -- motivating a flexible regularization method that is easy to implement, yet effective at enhancing CF systems. | en_US |
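The abstract's key quantity, stable rank, is the squared Frobenius norm of a matrix divided by its squared spectral norm, and the proposed warm-start adds a penalty that pushes this ratio higher for the user and item embedding tables. A minimal sketch of that idea is below, assuming PyTorch; the function names and the weighting constant `lam` are illustrative, not the authors' released code.

```python
import torch

def stable_rank(W: torch.Tensor) -> torch.Tensor:
    # stable rank = ||W||_F^2 / sigma_max(W)^2, a smooth surrogate for matrix rank
    fro_sq = W.pow(2).sum()
    sigma_max = torch.linalg.matrix_norm(W, ord=2)  # largest singular value
    return fro_sq / sigma_max.pow(2)

def warm_start_rank_penalty(user_emb: torch.Tensor,
                            item_emb: torch.Tensor,
                            lam: float = 0.1) -> torch.Tensor:
    # Negative sign: minimizing this term *increases* the stable rank of both
    # embedding tables, spreading singular values during early training.
    return -lam * (stable_rank(user_emb) + stable_rank(item_emb))
```

In use, such a term would simply be added to the CF loss for the first few epochs and then dropped, acting as a cheap stand-in for heavy negative sampling during the warm-start phase described in the abstract.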
| dc.publisher | ACM|Proceedings of the ACM Web Conference 2025 | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3696410.3714904 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Donald Loveland, Xinyi Wu, Tong Zhao, Danai Koutra, Neil Shah, and Mingxuan Ju. 2025. Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank. In Proceedings of the ACM on Web Conference 2025 (WWW '25). Association for Computing Machinery, New York, NY, USA, 436–449. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Chemical Engineering | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T07:58:41Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T07:58:42Z | |
| mit.license | PUBLISHER_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |