DSpace@MIT

Paths to AI Accountability: Design, Measurement, and the Law

Author(s)
Cen, Sarah H.
Download
Thesis PDF (7.39 MB)
Advisor
Mądry, Aleksander
Shah, Devavrat
Terms of use
In Copyright - Educational Use Permitted. Copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Algorithmic systems are increasingly intervening on human interactions and decisions, from selecting the content users see on social media to helping hiring managers choose candidates to interview. In recent years, the falling barrier between humans and AI has sparked fears about AI’s capabilities and elicited questions about the role that algorithms and, increasingly, AI should play in our lives. As society continues working towards answering these questions, this thesis argues that we must construct paths to AI accountability by determining who owes responsibility to whom in the AI ecosystem, upholding these responsibilities, and enforcing them. Pursuing AI accountability allows us to innovate while still acknowledging that AI is a technology developed and wielded by human actors. Furthermore, by focusing on the responsibilities of human actors, this approach builds on existing social and legal frameworks of accountability. Within this vast, multidisciplinary research area, this thesis centers on three aspects of AI accountability: design, measurement, and the law.

In Part I, we examine the importance of designing responsible AI systems from the ground up, which involves exploring definitions of responsibility, methods for achieving them, and the ramifications (e.g., trade-offs) of responsible design. As demonstrations of design, we study three different contexts. Each context builds on a notion of responsibility, and we investigate how these notions—which include trustworthiness, fairness, and social welfare—arise and interact. We provide formal definitions of each notion, discuss their implications, and propose interventions for achieving them.

In Part II, we turn our attention to AI measurement: quantifying AI behaviors and effects through systematic observations and procedures. We illustrate the importance of AI measurement through three case studies: (i) a black-box audit for social media algorithms; (ii) an estimator and experiment design for individual treatment effect estimation in the presence of spillover; and (iii) a user study testing whether users adapt to their recommender systems. In this part, we show how measurement can play a crucial role in compliance testing, analyzing AI behavior, and producing evidence that can inform decision-making (e.g., policy).

In Part III, we discuss how the law can align incentives with AI accountability, as well as challenges in realizing AI accountability in practice. We center our discussion on two works. The first seeks to fill a gap in the law around AI that arises from AI’s unintuitive and opaque nature, and argues that AI decision-subjects have a substantive right in the age of AI that we term the “right to be an exception.” While the first work studies a gap in the law, the second tackles practical challenges in carrying out the law. It examines how a lack of transparency and of access to AI systems can frustrate efforts to monitor, evaluate, and audit those systems.
Date issued
2024-09
URI
https://hdl.handle.net/1721.1/158476
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
