DSpace@MIT

Essays on Online Platforms and Human-Algorithm Interaction

Author(s)
Moehring, Alex
Thesis PDF (9.615 MB)
Advisor
Tucker, Catherine
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
This dissertation contains three chapters that analyze how algorithms on social media platforms influence the content users engage with and how individuals incorporate algorithmic predictions into their decision-making. In Chapter 1, I study how engagement-maximizing news feed algorithms on social media affect the credibility of the news content users engage with. This allows me to estimate the extent to which engagement-maximizing algorithms promote and incentivize low-quality content. I also evaluate how the ranking algorithm itself can be designed to encourage engagement with high-quality content. In Chapter 2, I analyze how the introduction of a new non-personalized news feed on the Reddit platform affects the quantity, quality, and diversity of user engagement. I find that this auxiliary feed increases the share of users who engage with news-related content, and that engagement becomes more diverse both within news categories and among articles from publishers across the political spectrum. In Chapter 3, in collaboration with Nikhil Agarwal, Tobias Salz, and Pranav Rajpurkar, we study human-AI collaboration using an information experiment with professional radiologists. Results show that providing (i) AI predictions does not always improve performance, whereas (ii) contextual information does. Radiologists fail to realize the gains from AI assistance because of errors in belief updating: they underweight AI predictions and treat their own information and the AI predictions as statistically independent.
Date issued
2024-05
URI
https://hdl.handle.net/1721.1/155846
Department
Sloan School of Management
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
