Leveraging large language model embeddings to enhance diversity and mitigate the filter bubble effect in recommender systems
Author(s)
Chen, Kristina Y.
Thesis PDF (790.7Kb)
Advisor
Bell, Ana
Abstract
The “filter bubble” describes the potential for internet personalization via algorithmic curation to effectively isolate individuals from a diversity of perspectives or content. In particular, this filter bubble effect may appear as a result of recommender systems used on social media platforms or online marketplaces to influence user behavior. While customization may improve user retention and decision quality, the filter bubble may hinder discovery and intensify polarization, while also reducing the degree of interaction between individuals and different viewpoints or domains. This thesis explores mitigation strategies for the filter bubble by enhancing recommendation models with content-based embeddings produced by large language models (LLMs), which encode semantic information about the items being recommended. The addition of semantic information — beyond the user interaction data that usually drives recommender models — may not only improve the quality of recommendations but also promote diversity by allowing content-based comparison of candidate items. After establishing a baseline collaborative filtering recommendation model and validating standard re-ranking diversification techniques, we introduce two LLM embedding-enhanced approaches. The first is a hybrid retrieval scheme that combines collaborative filtering scores with LLM embedding similarity to generate candidate items. The second employs the LLM embeddings directly in a diversity-oriented re-ranking framework. To ensure generality, the same experiments are repeated and evaluated across three widely used recommendation datasets from different domains. We further explore how embedding granularity influences performance by generating several sets of embeddings encoding different levels of detail and repeating these experiments.
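The hybrid retrieval scheme described above can be sketched as a score fusion. The thesis does not state its exact combination rule; the convex combination, min-max normalization, and cosine similarity below are illustrative assumptions, and all function and parameter names (`hybrid_scores`, `alpha`, the user profile embedding) are hypothetical.

```python
import numpy as np

def hybrid_scores(cf_scores, item_embeddings, user_profile_embedding, alpha=0.7):
    """Blend collaborative-filtering scores with LLM-embedding similarity.

    alpha weights the CF signal; (1 - alpha) weights semantic similarity.
    This convex-combination form is a common fusion choice, not necessarily
    the thesis's exact formulation.
    """
    cf = np.asarray(cf_scores, dtype=float)
    emb = np.asarray(item_embeddings, dtype=float)
    profile = np.asarray(user_profile_embedding, dtype=float)

    # Cosine similarity between each item embedding and the user profile.
    sims = emb @ profile / (
        np.linalg.norm(emb, axis=1) * np.linalg.norm(profile) + 1e-12
    )

    # Min-max normalize both signals so they are on a comparable scale.
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return alpha * norm(cf) + (1 - alpha) * norm(sims)
```

Candidates would then be taken as the top-scoring items under the fused score, letting semantically related items surface even when collaborative signals are sparse.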
We also assess whether a contrastively fine-tuned LLM designed to emphasize inter-item differences produces embeddings better suited to encouraging recommendation diversity. We find that score-fusion hybrids yield negligible diversity gains, particularly on sparse datasets, whereas re-ranking shows promise in bursting the filter bubble. In particular, LLM embeddings combined with re-ranking achieve the highest semantic diversity and long-tail novelty across domains, at relatively minor losses in precision and other relevance measures.
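A diversity-oriented re-ranking framework over LLM embeddings, as described in the abstract, is commonly instantiated as greedy Maximal Marginal Relevance (MMR). The sketch below assumes that formulation; the thesis's exact objective may differ, and the names (`mmr_rerank`, `lam`) are hypothetical.

```python
import numpy as np

def mmr_rerank(candidate_ids, relevance, embeddings, k=10, lam=0.5):
    """Greedy MMR re-ranking over item embeddings.

    lam trades off relevance against cosine dissimilarity to the items
    already selected: higher lam favors relevance, lower lam favors
    diversity. One common diversification scheme, assumed here.
    """
    emb = np.asarray(embeddings, dtype=float)
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    rel = np.asarray(relevance, dtype=float)

    selected, remaining = [], list(range(len(candidate_ids)))
    while remaining and len(selected) < k:
        if selected:
            # Penalty: max cosine similarity to any already-selected item.
            penalty = (emb[remaining] @ emb[selected].T).max(axis=1)
        else:
            penalty = np.zeros(len(remaining))
        mmr = lam * rel[remaining] - (1 - lam) * penalty
        best = remaining[int(np.argmax(mmr))]
        selected.append(best)
        remaining.remove(best)
    return [candidate_ids[i] for i in selected]
```

Swapping interaction-based item vectors for LLM embeddings in the penalty term is what lets the re-ranker diversify on semantic content rather than co-consumption patterns.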
Date issued
2025-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology