Show simple item record

dc.contributor.author: Pennycook, Gordon
dc.contributor.author: Rand, David G
dc.date.accessioned: 2021-10-27T20:23:46Z
dc.date.available: 2021-10-27T20:23:46Z
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/135511
dc.description.abstract: © 2019 National Academy of Sciences. All Rights Reserved. Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
dc.language.iso: en
dc.publisher: Proceedings of the National Academy of Sciences
dc.relation.isversionof: 10.1073/PNAS.1806781116
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: PNAS
dc.title: Fighting misinformation on social media using crowdsourced judgments of news source quality
dc.type: Article
dc.contributor.department: Sloan School of Management
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.relation.journal: Proceedings of the National Academy of Sciences of the United States of America
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2021-03-26T18:33:16Z
dspace.orderedauthors: Pennycook, G; Rand, DG
dspace.date.submission: 2021-03-26T18:33:17Z
mit.journal.volume: 116
mit.journal.issue: 7
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
