The Thorny Challenge of Making Moral Machines: Ethical Dilemmas with Self-Driving Cars
Author(s)
Awad, Edmond; Bonnefon, Jean-François; Shariff, Azim; Rahwan, Iyad
Terms of use
Creative Commons Attribution
Abstract
The algorithms that control autonomous vehicles (AVs) will need to embed moral principles guiding their decisions in situations of unavoidable harm. Manufacturers and regulators face three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. The Moral Machine study presented here is a step toward solving this problem, as it seeks to learn how people around the world feel about the alternative decisions the AI of self-driving vehicles might have to make. This global study revealed broad agreement across regions regarding how to handle unavoidable accidents. To master the moral challenges, all stakeholders should embrace the topic of machine ethics: this is a unique opportunity to decide as a community what we believe to be right or wrong, and to ensure that machines, unlike humans, unerringly follow the agreed-upon moral preferences. The integration of autonomous cars will require a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered.
Date issued
2019
Department
Massachusetts Institute of Technology. Media Laboratory
Journal
NIM Marketing Intelligence Review
Publisher
Walter de Gruyter GmbH