dc.contributor.author: Jahanbakhsh, Farnaz
dc.contributor.author: Zhang, Amy X
dc.contributor.author: Berinsky, Adam J
dc.contributor.author: Pennycook, Gordon
dc.contributor.author: Rand, David G
dc.contributor.author: Karger, David R
dc.date.accessioned: 2021-11-12T16:03:01Z
dc.date.available: 2021-11-12T16:03:01Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/138125
dc.description.abstract: When users on social media share content without considering its veracity, they may unwittingly be spreading misinformation. In this work, we investigate the design of lightweight interventions that nudge users to assess the accuracy of information as they share it. Such assessment may deter users from posting misinformation in the first place, and their assessments may also provide useful guidance to friends aiming to assess those posts themselves. In support of lightweight assessment, we first develop a taxonomy of the reasons why people believe a news claim is or is not true; this taxonomy yields a checklist that can be used at posting time. We conduct evaluations to demonstrate that the checklist is an accurate and comprehensive encapsulation of people's free-response rationales. In a second experiment, we study the effects of three behavioral nudges---1) checkboxes indicating whether headlines are accurate, 2) tagging reasons (from our taxonomy) that a post is accurate via a checklist, and 3) providing free-text rationales for why a headline is or is not accurate---on people's intention to share the headline on social media. From an experiment with 1668 participants, we find that both providing an accuracy assessment and providing a rationale reduce the sharing of false content. They also reduce the sharing of true content, but to a lesser degree, yielding an overall decrease in the fraction of shared content that is false. Our findings have implications for designing social media and news-sharing platforms that draw on richer signals of content credibility contributed by users. In addition, our validated taxonomy can be used by platforms and researchers as a way to gather rationales more easily than through free response. [en_US]
dc.language.iso: en
dc.publisher: Association for Computing Machinery (ACM) [en_US]
dc.relation.isversionof: 10.1145/3449092 [en_US]
dc.rights: Creative Commons Attribution 4.0 International license [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ [en_US]
dc.source: ACM [en_US]
dc.title: Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Jahanbakhsh, Farnaz, Zhang, Amy X, Berinsky, Adam J, Pennycook, Gordon, Rand, David G and Karger, David R. 2021. "Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media." Proceedings of the ACM on Human-Computer Interaction, 5 (CSCW1).
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Political Science
dc.contributor.department: Sloan School of Management
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.relation.journal: Proceedings of the ACM on Human-Computer Interaction [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2021-11-12T15:58:21Z
dspace.orderedauthors: Jahanbakhsh, F; Zhang, AX; Berinsky, AJ; Pennycook, G; Rand, DG; Karger, DR [en_US]
dspace.date.submission: 2021-11-12T15:58:23Z
mit.journal.volume: 5 [en_US]
mit.journal.issue: CSCW1 [en_US]
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed [en_US]

