DSpace@MIT

  • DSpace@MIT Home
  • MIT Open Access Articles
  • View Item

When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data

Author(s)
Awad, Edmond; Levine, Sydney; Loreggia, Andrea; Mattei, Nicholas; Rahwan, Iyad; Rossi, Francesca; Talamadupula, Kartik; Tenenbaum, Joshua; Kleiman-Weiner, Max; ...
Download: 10458_2024_Article_9667.pdf (1.761Mb)
Terms of use
Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/
Abstract
Constraining the actions of AI systems is one promising way to ensure that these systems behave in a way that is morally acceptable to humans. But constraints alone have drawbacks: in many AI systems they are not flexible, and if they are too rigid they can preclude actions that are actually acceptable in certain contexts. Humans, on the other hand, can often decide when a simple and seemingly inflexible rule should be overridden based on the context. In this paper, we empirically investigate the way humans make these contextual moral judgements, with the goal of building AI systems that understand when to follow and when to override constraints. We propose a novel and general preference-based graphical model that captures a modification of standard dual process theories of moral judgment. We then detail the design, implementation, and results of a study of human participants who judge whether it is acceptable to break a well-established rule: no cutting in line. We then develop an instance of our model and compare its performance to that of standard machine learning approaches on the task of predicting the behavior of human participants in the study, showing that our preference-based approach more accurately captures the judgments of human decision-makers. It also provides a flexible method to model the relationship between variables for moral decision-making tasks that can be generalized to other settings.
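To make the abstract's idea concrete, the following is a minimal, purely illustrative sketch of a preference-based acceptability judgment for the "no cutting in line" scenario. The feature names, weights, and threshold rule are assumptions for illustration only and do not reproduce the paper's actual graphical model; they merely show the shape of a context-sensitive trade-off between the benefit of breaking a rule and the costs it imposes.

```python
# Hypothetical sketch, not the paper's model: acceptability of breaking a
# rule as a comparison between the rule-breaker's benefit and the cost
# imposed on others, penalized by the strength of the rule itself.

def acceptability(benefit_to_cutter: float,
                  cost_to_others: float,
                  rule_strength: float = 1.0) -> bool:
    """Judge whether cutting in line is acceptable in a given context.

    Returns True when the cutter's benefit outweighs the summed cost to
    others plus a fixed penalty for violating the rule (a crude stand-in
    for the rule-based component of a dual-process account).
    """
    return benefit_to_cutter > cost_to_others + rule_strength

# Two illustrative contexts (all numbers are made up):
# a medical emergency vs. mere impatience at the same line.
emergency = acceptability(benefit_to_cutter=5.0, cost_to_others=0.5)
impatience = acceptability(benefit_to_cutter=0.8, cost_to_others=0.5)
print(emergency, impatience)  # True False
```

The point of the sketch is only that the same rule yields different judgments in different contexts once the judgment is framed as a preference comparison over context features, which is the flexibility the abstract contrasts with rigid constraints.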
Date issued
2024-07-13
URI
https://hdl.handle.net/1721.1/155691
Department
Massachusetts Institute of Technology. Media Laboratory; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Journal
Autonomous Agents and Multi-Agent Systems
Publisher
Springer Science and Business Media LLC
Citation
Awad, E., Levine, S., Loreggia, A. et al. When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data. Auton Agent Multi-Agent Syst 38, 35 (2024).
Version: Final published version
ISSN
1387-2532
1573-7454

Collections
  • MIT Open Access Articles
