
dc.contributor.author    Awad, Edmond
dc.contributor.author    Levine, Sydney
dc.contributor.author    Loreggia, Andrea
dc.contributor.author    Mattei, Nicholas
dc.contributor.author    Rahwan, Iyad
dc.contributor.author    Rossi, Francesca
dc.contributor.author    Talamadupula, Kartik
dc.contributor.author    Tenenbaum, Joshua
dc.contributor.author    Kleiman-Weiner, Max
dc.date.accessioned    2024-07-16T15:13:51Z
dc.date.available    2024-07-16T15:13:51Z
dc.date.issued    2024-07-13
dc.identifier.issn    1387-2532
dc.identifier.issn    1573-7454
dc.identifier.uri    https://hdl.handle.net/1721.1/155691
dc.description.abstract    Constraining the actions of AI systems is one promising way to ensure that these systems behave in a way that is morally acceptable to humans. But constraints alone come with drawbacks as in many AI systems, they are not flexible. If these constraints are too rigid, they can preclude actions that are actually acceptable in certain, contextual situations. Humans, on the other hand, can often decide when a simple and seemingly inflexible rule should actually be overridden based on the context. In this paper, we empirically investigate the way humans make these contextual moral judgements, with the goal of building AI systems that understand when to follow and when to override constraints. We propose a novel and general preference-based graphical model that captures a modification of standard dual process theories of moral judgment. We then detail the design, implementation, and results of a study of human participants who judge whether it is acceptable to break a well-established rule: no cutting in line. We then develop an instance of our model and compare its performance to that of standard machine learning approaches on the task of predicting the behavior of human participants in the study, showing that our preference-based approach more accurately captures the judgments of human decision-makers. It also provides a flexible method to model the relationship between variables for moral decision-making tasks that can be generalized to other settings.    en_US
dc.publisher    Springer Science and Business Media LLC    en_US
dc.relation.isversionof    10.1007/s10458-024-09667-4    en_US
dc.rights    Creative Commons Attribution    en_US
dc.rights.uri    https://creativecommons.org/licenses/by/4.0/    en_US
dc.source    Springer US    en_US
dc.title    When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data    en_US
dc.type    Article    en_US
dc.identifier.citation    Awad, E., Levine, S., Loreggia, A. et al. When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data. Auton Agent Multi-Agent Syst 38, 35 (2024).    en_US
dc.contributor.department    Massachusetts Institute of Technology. Media Laboratory
dc.contributor.department    Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.relation.journal    Autonomous Agents and Multi-Agent Systems    en_US
dc.identifier.mitlicense    PUBLISHER_CC
dc.eprint.version    Final published version    en_US
dc.type.uri    http://purl.org/eprint/type/JournalArticle    en_US
eprint.status    http://purl.org/eprint/status/PeerReviewed    en_US
dc.date.updated    2024-07-14T03:17:02Z
dc.language.rfc3066    en
dc.rights.holder    The Author(s)
dspace.embargo.terms    N
dspace.date.submission    2024-07-14T03:17:02Z
mit.journal.volume    38    en_US
mit.journal.issue    2    en_US
mit.license    PUBLISHER_CC
mit.metadata.status    Authority Work and Publication Information Needed    en_US
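
Illustrative note on the abstract above: the paper compares a preference-based model of contextual rule-breaking judgments against standard machine learning approaches. The Python sketch below is not code, data, or features from the paper; the context features (emergency, asked_permission, large_delay_to_others), the hand-written lexicographic preference rule, and the synthetic "participant" judgments are all hypothetical, and scikit-learn's LogisticRegression stands in for a generic machine-learning baseline. It only shows what the shape of such a comparison could look like.

    # Hypothetical sketch only: a toy "is it acceptable to cut in line?" prediction task.
    # Feature names, the preference rule, and the synthetic judgments are invented here;
    # they are not the features, model, or data used in the paper.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(0)
    FEATURES = ["emergency", "asked_permission", "large_delay_to_others"]

    def true_judgment(ctx):
        # Stand-in "ground truth": cutting is deemed acceptable in an emergency,
        # or when permission was asked and others are not delayed much.
        return ctx["emergency"] or (ctx["asked_permission"] and not ctx["large_delay_to_others"])

    def preference_rule(ctx):
        # Toy analogue of a preference-based model: a lexicographic ordering in which
        # emergencies dominate, then permission, then the cost imposed on others.
        if ctx["emergency"]:
            return True
        return ctx["asked_permission"] and not ctx["large_delay_to_others"]

    def sample_context():
        return {f: random.random() < 0.5 for f in FEATURES}

    def noisy(ctx):
        # Synthetic "participant" judgment: the ground-truth rule plus 10% response noise.
        return true_judgment(ctx) if random.random() < 0.9 else not true_judgment(ctx)

    train = [sample_context() for _ in range(200)]
    test = [sample_context() for _ in range(100)]
    X_train = [[int(c[f]) for f in FEATURES] for c in train]
    X_test = [[int(c[f]) for f in FEATURES] for c in test]
    y_train = [noisy(c) for c in train]
    y_test = [true_judgment(c) for c in test]

    # Fit the generic machine-learning baseline and score both predictors on held-out contexts.
    clf = LogisticRegression().fit(X_train, y_train)
    ml_acc = sum(int(p) == int(t) for p, t in zip(clf.predict(X_test), y_test)) / len(y_test)
    pref_acc = sum(preference_rule(c) == t for c, t in zip(test, y_test)) / len(y_test)
    print(f"logistic-regression accuracy: {ml_acc:.2f}")
    print(f"preference-rule accuracy:     {pref_acc:.2f}")

Per the abstract, the paper reports that its preference-based approach predicts participants' judgments more accurately than the standard machine-learning baselines; the snippet above merely mirrors the form of that comparison on made-up data.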

