Essays on Content Moderation Interventions for Addressing Online Misinformation

Author(s)
Martel, Cameron
Download
Thesis PDF (13.14 MB)
Advisor
Rand, David G.
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
In Chapter 1, I examine the efficacy of fact-checker warning labels as a content moderation intervention for addressing online misinformation. Warning labels from professional fact-checkers are one of the most historically used interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? In a first correlational study, we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in, and sharing of, false headlines. While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in, and sharing of, false news even for those most distrusting of fact-checkers. Our results suggest fact-checker warning labels are a broadly effective tool for combatting misinformation.

In Chapter 2, joint with Jennifer Allen, Gordon Pennycook, and David G. Rand, I investigate the potential of crowdsourced fact-checking systems to identify misleading online content. Social media platforms are increasingly adopting crowd-based content moderation interventions for identifying false or misleading content. However, existing theories posit that lay individuals can be highly politically biased, and that these strong political motivations inherently undermine accuracy. Alternatively, we propose that political and accuracy motivations may, in some cases, operate in tandem – in which case politically motivated individuals need not hamper truth discernment. We empirically assess this by analyzing a survey study of misinformation flagging and field data from X’s Community Notes. Consistent with a simple model of flagging behavior, posts that are both false and politically discordant are flagged the most. Importantly, we find that more politically motivated users flag a greater number of posts, engage in more politically biased flagging, and yet exhibit the same or better flagging discernment. Together, these results show that politically motivated individuals are integral to provisioning a high overall quantity and quality of crowdsourced fact-checks.

In Chapter 3, I assess the perceived legitimacy of different content moderation interventions for addressing online misinformation. Current content moderation practices have been criticized as unjust. This raises an important question – who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied on whether they were described as consisting of experts, laypeople, or non-juries. We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions – nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Our findings shed light on the foundations of institutional legitimacy in content moderation and have implications for the design of online moderation systems.
Date issued
2025-05
URI
https://hdl.handle.net/1721.1/163267
Department
Sloan School of Management
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
