Towards Bridging and Governing Decentralized Communities
Author(s)
Saldías Fuentes, Belén Carolina
Advisor
Roy, Deb K.
Abstract
"Unless the spaces in a building are arranged in a sequence which corresponds to their degrees of privacy the visits made by strangers, friends, guests, clients, family, will always be a little awkward." (Alexander, 1977) — Unlike physical spaces, where we can move seamlessly between different environments with varying degrees of privacy, much of our online experience occurs in noisy, crowded, and imposed public areas. This can undermine meaningful engagement, deepen social divides, and exacerbate anxiety, polarization, and distrust arising from unnecessary friction and misunderstandings. Moreover, while different communities with distinct values and norms often share these same public venues, they are typically subject to one-size-fits-all policies that fail to address local contexts. Consequently, toxic behavior is policed at the platform level rather than by the communities themselves, leading to oversimplified governance solutions that favor some communities while silencing others.
Fortunately, emerging strategies in decentralized protocols and networks have begun to change this dynamic. Decentralized systems designed for local governance can empower communities to create more nuanced and context-sensitive rules. However, these approaches remain largely inaccessible to non-technical users and risk creating a "paradox of decentralization," wherein isolated servers or communities potentially deepen echo chambers. This thesis contends that by placing community governance and user agency at the center of online platforms—and by leveraging advances in large language models (LLMs)—we can build healthier digital spaces that foster pro-social interactions while respecting individual groups' autonomy.
To explore these possibilities, this dissertation examines how intentional design principles can promote constructive communication in decentralized contexts. First, it presents a large-scale historical Reddit dataset, encompassing over 230K removed posts across more than 19K mission-defined communities, that captures a diverse range of speech, community norms, and moderation approaches. By analyzing over 60K community rules, I propose an empirically grounded norms schema and reveal how a community's purpose statement correlates with pro-social behavior reflected in community-centered discourse.
Building on these insights, the dissertation next tackles the challenge of shifting from centralized, top-down moderation to distributed, community-specific content governance. While centralized methods provide highly generalizable moderation powered by advanced AI, they lack specificity and preclude community-specific definitions of acceptable behavior, limiting community and user participation in shaping how their content is moderated and ranked. I prototype and evaluate tools for (i) explainable, decentralized content moderation—where interpretable models illuminate why a post is flagged or removed—and (ii) surfacing unspoken differences in how seemingly similar norms are defined and understood across communities. These prototypes show how LLMs can assist by clarifying value mismatches, supporting local decision-making, and enabling communities to mediate misunderstandings across divides.
Finally, I consolidate these findings in a real-world social network platform called Odessa—a DEcentralized Social Systems App—deliberately designed as a user-friendly, decentralized environment that allows communities to define—and iteratively refine—their own norms, moderation, ranking algorithms, and, more generally, governance strategies. Through system deployment and user experiments, I investigate how participants navigate local governance controls and interact within bridged spaces across communities. Odessa's bridging mechanisms illustrate how communities can preserve distinct values without sacrificing cross-community connections. By open-sourcing Odessa, I provide a framework for researchers and practitioners to test human-AI partnerships in governance and a learning environment for apprentices. The results presented here underscore both the opportunities and challenges in democratizing content moderation, highlighting the pivotal role of transparent AI in promoting trust and mutual understanding.
This dissertation makes the case that future social media ecosystems should emphasize bottom-up, community-driven governance aided by interpretable AI tools. By enabling communities to shape their social expectations through purpose and norms, explain decisions through transparent AI and access to human rationales, and forge connections with other communities, we can cultivate online environments where pro-social discourse thrives. In doing so, we move beyond merely "fighting toxicity" toward intentionally designing spaces that support constructive dialogue and genuine community development.
Date issued
2025-05
Department
Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Publisher
Massachusetts Institute of Technology