dc.contributor.advisor	Rand, David G.
dc.contributor.author	Martel, Cameron
dc.date.accessioned	2025-10-21T13:16:43Z
dc.date.available	2025-10-21T13:16:43Z
dc.date.issued	2025-05
dc.date.submitted	2025-06-23T18:08:08.872Z
dc.identifier.uri	https://hdl.handle.net/1721.1/163267
dc.description.abstract	In Chapter 1, I examine the efficacy of fact-checker warning labels as a content moderation intervention for addressing online misinformation. Warning labels from professional fact-checkers are one of the most established interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? In a first correlational study, we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments in which participants evaluated true and false news posts and were randomized either to see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in, and sharing of, false headlines. Although warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in, and sharing of, false news even among those most distrusting of fact-checkers. Our results suggest that fact-checker warning labels are a broadly effective tool for combating misinformation.

In Chapter 2, joint with Jennifer Allen, Gordon Pennycook, and David G. Rand, I investigate the potential of crowdsourced fact-checking systems to identify misleading online content. Social media platforms are increasingly adopting crowd-based content moderation interventions for identifying false or misleading content. However, existing theories posit that lay individuals can be highly politically biased, and that these strong political motivations inherently undermine accuracy. Alternatively, we propose that political and accuracy motivations may, in some cases, operate in tandem, in which case politically motivated individuals need not hamper truth discernment. We assess this empirically by analyzing a survey study of misinformation flagging and field data from X's Community Notes. Consistent with a simple model of flagging behavior, posts that are both false and politically discordant are flagged the most. Importantly, we find that more politically motivated users flag a greater number of posts, engage in more politically biased flagging, and yet exhibit the same or better flagging discernment. Together, these results show that politically motivated individuals are integral to the provision of a high overall quantity and quality of crowdsourced fact-checks.

In Chapter 3, I assess the perceived legitimacy of different content moderation interventions for addressing online misinformation. Current content moderation practices have been criticized as unjust, which raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied in whether they were described as consisting of experts or laypeople, or as a non-jury alternative (a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: nationally representative or politically balanced composition enhanced legitimacy, as did larger jury size, individual juror knowledge qualifications, and enabling juror discussion. Our findings shed light on the foundations of institutional legitimacy in content moderation and have implications for the design of online moderation systems.
dc.publisher	Massachusetts Institute of Technology
dc.rights	In Copyright - Educational Use Permitted
dc.rights	Copyright retained by author(s)
dc.rights.uri	https://rightsstatements.org/page/InC-EDU/1.0/
dc.title	Essays on Content Moderation Interventions for Addressing Online Misinformation
dc.type	Thesis
dc.description.degree	Ph.D.
dc.contributor.department	Sloan School of Management
dc.identifier.orcid	https://orcid.org/0000-0003-3181-4309
mit.thesis.degree	Doctoral
thesis.degree.name	Doctor of Philosophy

