Essays on Understanding and Combating Misinformation at Scale

Author(s)
Allen, Jennifer
Download
Thesis PDF (11.64 MB)
Advisor
Rand, David G.
Terms of use
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). Copyright retained by author(s). https://creativecommons.org/licenses/by-nc-nd/4.0/
Abstract
In Chapter 1, I explore the use of crowdsourcing as a potential solution to the misinformation problem at scale. Perhaps the most prominent approach to combating misinformation is the use of professional fact-checkers. This approach, however, is not scalable: professional fact-checkers cannot possibly keep up with the volume of misinformation produced every day. Furthermore, many people see fact-checkers as having a liberal bias and thus distrust them. Here, we explore a potential solution to both of these problems: leveraging the "wisdom of crowds" to make fact-checking possible at scale using politically balanced groups of laypeople. Our results indicate that crowdsourcing is a promising approach for helping to identify misinformation at scale.

In Chapter 2, joint with David Rand and Cameron Martel, I extend work on crowdsourced fact-checking to assess the viability of crowdsourcing in an opt-in, polarized environment. We leverage data from Birdwatch, Twitter's crowdsourced fact-checking pilot program, to examine how shared partisanship affects participation in crowdsourced fact-checking. Our findings provide clear evidence that Birdwatch users preferentially challenge content from those with whom they disagree politically. While not necessarily indicating that Birdwatch is ineffective for identifying misleading content, these results demonstrate the important role that partisanship can play in content evaluation. Platform designers must consider the ramifications of partisanship when implementing crowdsourcing programs.

In Chapter 3, I examine the role of online (mis)information in US vaccine hesitancy. I combine survey-experimental estimates of persuasion with exposure data from Facebook to estimate the extent to which (mis)information content on Facebook reduces COVID vaccine acceptance. Contrary to popular belief, I find that factually accurate vaccine-skeptical content was approximately 50 times more impactful than outright false misinformation. Although outright misinformation had a larger negative effect per exposure on vaccination intentions than factually accurate content, it was rarely seen on social media. In contrast, mainstream media articles reporting on rare deaths following vaccination garnered hundreds of millions of views. While this work suggests that limiting the spread of misinformation has important public health benefits, it highlights the need to scrutinize accurate-but-misleading content published by mainstream sources.
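The Chapter 3 finding rests on a simple decomposition: a content class's aggregate impact is its per-exposure persuasive effect multiplied by its reach. The sketch below illustrates only that arithmetic; the content-class names, effect sizes, and exposure counts are hypothetical placeholders for illustration, not estimates from the thesis.

```python
# Minimal sketch of the impact decomposition described in Chapter 3:
#   total impact = per-exposure persuasion effect x number of exposures.
# All numbers are hypothetical placeholders chosen only to show the logic.

content_classes = {
    # class name: (per-exposure effect on vaccination intention, exposures)
    "outright_misinformation":      (-1.0e-3, 1.0e7),  # larger effect, rarely seen
    "vaccine_skeptical_mainstream": (-5.0e-5, 5.0e9),  # smaller effect, seen widely
}

def total_impact(per_exposure_effect: float, exposures: float) -> float:
    """Aggregate impact of a content class: per-exposure effect times reach."""
    return per_exposure_effect * exposures

for name, (effect, exposures) in content_classes.items():
    print(f"{name}: total impact = {total_impact(effect, exposures):,.1f}")

# With these illustrative inputs, the widely seen but individually weaker class
# dominates: -5.0e-5 * 5.0e9 = -250,000 versus -1.0e-3 * 1.0e7 = -10,000,
# i.e., about 25x larger in aggregate despite a 20x smaller per-exposure effect.
```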
Date issued
2024-05
URI
https://hdl.handle.net/1721.1/155882
Department
Sloan School of Management
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
