Algorithmic Interactions With Strategic Users: Incentives, Interplay, and Impact
Author(s)
Fallah, Alireza
Advisor
Ozdaglar, Asuman
Abstract
The societal challenges posed by machine learning algorithms are becoming increasingly important, and studying them effectively requires incorporating users' incentives and preferences into algorithm design. In many cases, algorithms are designed solely around the platform's objectives, ignoring the potential misalignment between the platform's goals and the interests of users.
This thesis presents frameworks for studying the interactions between a platform and strategic users. The central objective of the platform is to estimate a parameter of interest by collecting users' data. However, users, recognizing the value of their data, demand privacy guarantees or compensation in exchange for sharing their information. The thesis examines several aspects of this problem, including the estimation task itself, the allocation of privacy guarantees, and the vulnerability of these guarantees to the platform's power.
In particular, in the first part of this thesis, we formulate this question as a Bayesian-optimal mechanism design problem in which an individual can share her data in exchange for a monetary reward but also has a private, heterogeneous privacy cost, which we quantify using differential privacy. We consider two popular data market architectures: central and local. In both settings, we establish minimax lower bounds on the estimation error and derive (near-)optimal estimators for given heterogeneous privacy loss levels across users. We then pose the mechanism design problem as the optimal selection of an estimator and payments that elicit truthful reporting of users' privacy sensitivities, and we develop efficient algorithmic mechanisms to solve this problem in both privacy settings. Moreover, we investigate the case in which users have heterogeneous sensitivities to two types of privacy loss, corresponding to the local and central privacy measures.
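To make the local-privacy setting concrete, the sketch below estimates a population mean from users with heterogeneous differential-privacy levels: each user perturbs her bounded data point with Laplace noise calibrated to her own privacy level, and the platform aggregates the reports with inverse-noise-variance weights. The function names, the weighting heuristic, and the parameter values are illustrative assumptions for exposition; this is not the (near-)optimal estimator derived in the thesis.

```python
import numpy as np

def local_dp_report(x, epsilon, bound=1.0):
    """User-side report: clip x to [-bound, bound] and add Laplace noise
    calibrated so the report satisfies epsilon-local differential privacy.
    Sensitivity of the clipped value is 2*bound."""
    x = np.clip(x, -bound, bound)
    scale = 2.0 * bound / epsilon
    return x + np.random.laplace(loc=0.0, scale=scale)

def weighted_mean_estimate(reports, epsilons, bound=1.0):
    """Platform-side aggregator: weight each noisy report by the inverse
    variance of its Laplace noise (Var(Laplace(b)) = 2*b^2), so users with
    weaker privacy demands (larger epsilon) contribute more. An illustrative
    heuristic, not the optimal rule from the thesis."""
    scales = 2.0 * bound / np.asarray(epsilons)
    weights = 1.0 / (2.0 * scales**2)
    weights /= weights.sum()
    return float(np.dot(weights, reports))

epsilons = [0.5, 1.0, 2.0, 4.0]   # heterogeneous privacy loss levels (assumed)
data = [0.3, -0.1, 0.4, 0.2]      # users' true, bounded data points (assumed)
reports = [local_dp_report(x, e) for x, e in zip(data, epsilons)]
print(weighted_mean_estimate(reports, epsilons))
```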
In the second part, we study a different aspect of data market design: the optimal choice of architecture from both the users' and the platform's points of view. The platform collects data from users by means of a mechanism that can partially protect users' privacy. We prove that a simple shuffling mechanism, whereby individual data is fully anonymized with some probability, is optimal from the users' viewpoint. We also develop a game-theoretic model of data sharing to study the impact of this shuffling mechanism on the platform's behavior and users' utility. In particular, we uncover an intriguing phenomenon that highlights the fragility of privacy guarantees: as the value of pooled data rises for users, the platform can exploit this opportunity to weaken the guarantee it provides, ultimately reducing user welfare at equilibrium.
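As a minimal sketch of the shuffling idea described above: each record is, with probability p, detached from its owner's identity and returned in a uniformly shuffled anonymous pool; otherwise it remains attributed. The record layout, user IDs, and the parameter p are hypothetical choices for illustration, not the thesis's formal mechanism.

```python
import random

def shuffle_mechanism(records, p):
    """With probability p, a user's record is fully anonymized (its link to
    the user ID is severed and it joins a shuffled pool); otherwise it stays
    attributed to the user. 'records', 'p', and the output format are
    illustrative assumptions."""
    attributed, anonymous_pool = [], []
    for user_id, value in records:
        if random.random() < p:
            anonymous_pool.append(value)      # identity fully severed
        else:
            attributed.append((user_id, value))
    random.shuffle(anonymous_pool)            # anonymized values arrive unordered
    return attributed, anonymous_pool

records = [("u1", 0.3), ("u2", -0.1), ("u3", 0.4), ("u4", 0.2)]
print(shuffle_mechanism(records, p=0.5))
```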
Date issued
2023-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology