DSpace@MIT

Leveraging large language model embeddings to enhance diversity and mitigate the filter bubble effect in recommender systems

Author(s)
Chen, Kristina Y.
Thesis PDF (790.7 KB)
Advisor
Bell, Ana
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Metadata
Show full item record
Abstract
The “filter bubble” describes the potential for internet personalization via algorithmic curation to effectively isolate individuals from a diversity of perspectives or content. In particular, this filter bubble effect may appear as a result of recommender systems used on social media platforms or online marketplaces to influence user behavior. While customization may improve user retention and decision quality, the filter bubble may hinder discovery and intensify polarization, while also reducing the degree of interaction between individuals and different viewpoints or domains. This thesis explores mitigation strategies for the filter bubble by enhancing recommendation models using content-based embeddings produced by large language models (LLMs), which encode semantic information about items being recommended. The addition of semantic information — beyond the user interaction data that usually drives recommender models — may not only improve the quality of recommendations but also promote diversity by allowing for content-based comparison of item candidates for recommendation. After establishing a baseline collaborative filtering recommendation model and validating standard re-ranking diversification techniques, we introduce two LLM embedding-enhanced approaches. The first is a hybrid retrieval scheme that combines collaborative filtering scores with LLM embedding similarity to generate candidate items. The second employs the LLM embeddings directly in a diversity-oriented re-ranking framework. To ensure generality, the same experiments are repeated and evaluated across three widely-used recommendation datasets from different domains. We further explore how embedding granularity influences performance by generating several sets of embeddings encoding different levels of detail and repeating these experiments. 
We also assess whether a contrastively fine-tuned LLM designed to emphasize inter-item differences produces more suitable embeddings for encouraging recommendation diversity. We reveal that score-fusion hybrids yield negligible diversity gains, particularly on sparse datasets, whereas applying re-ranking shows promise in bursting the filter bubble. In particular, LLM embeddings combined with re-ranking achieve the highest semantic diversity and long-tail novelty across domains and items, at relatively minor losses in precision and other relevance measures.
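The first embedding-enhanced approach the abstract describes, score fusion, linearly blends collaborative-filtering relevance with LLM-embedding similarity when retrieving candidates. A minimal sketch of that idea follows; the function name, the mean-of-history user profile, the min-max normalisation, and the weight `alpha` are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def hybrid_scores(cf_scores, item_embeddings, user_profile_embedding, alpha=0.5):
    """Blend CF scores with LLM-embedding similarity (illustrative sketch).

    cf_scores: (n_items,) relevance scores from the collaborative-filtering model.
    item_embeddings: (n_items, d) LLM embeddings of candidate items.
    user_profile_embedding: (d,) e.g. the mean embedding of the user's history.
    alpha: weight on the CF signal (assumed; the thesis's setting is unstated here).
    """
    # Cosine similarity between each candidate item and the user profile.
    item_norms = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
    profile_norm = user_profile_embedding / np.linalg.norm(user_profile_embedding)
    sim = item_norms @ profile_norm

    # Min-max normalise both signals onto [0, 1] so the fusion weight is meaningful.
    def norm01(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return alpha * norm01(cf_scores) + (1 - alpha) * norm01(sim)
```

Candidates are then ranked by the fused score; the abstract reports that this fusion alone yielded negligible diversity gains, especially on sparse datasets.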
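The second approach uses the LLM embeddings inside a diversity-oriented re-ranking framework. Such frameworks are commonly maximal-marginal-relevance (MMR) style greedy selection; the sketch below assumes that form, and the trade-off weight `lam` and function names are hypothetical rather than taken from the thesis:

```python
import numpy as np

def mmr_rerank(relevance, embeddings, k, lam=0.7):
    """Greedy MMR-style re-ranking over item embeddings (illustrative sketch).

    At each step, select the candidate maximising
        lam * relevance[i] - (1 - lam) * max cosine similarity to already-selected items,
    trading a little relevance for semantic diversity in the final top-k list.
    """
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr(i):
            # Redundancy: similarity to the closest item already chosen.
            redundancy = max((emb[i] @ emb[j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-duplicate high-relevance items and one dissimilar item, a balanced `lam` picks the dissimilar item second, which is the mechanism by which embedding-based re-ranking promotes the semantic diversity the abstract reports.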
Date issued
2025-05
URI
https://hdl.handle.net/1721.1/162682
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
