Ethics for artificial agents
Author(s)
Gray, David Michael, Ph.D., Massachusetts Institute of Technology
Other Contributors
Massachusetts Institute of Technology. Department of Linguistics and Philosophy.
Advisor
Alex Byrne.
Abstract
Machine ethics is a nascent subfield of computer ethics that focuses on the ethical issues involved in the design of autonomous software agents ("artificial agents"). Chapter 1 of this thesis considers how best to understand the central projects of this new subfield, and reconstructs a prominent theory of how artificial agents ought to be designed. This theory, which I call the "agential theory" of machine ethics, says that artificial agents morally ought to be designed to behave only in ways that would be permissible for a human agent to behave, and that only artificial agents that have been designed in this way are morally permissible for human beings to use. Chapter 2 critically assesses two versions of the agential theory: one that assumes that artificial agents are moral agents, and another that does not. After considering arguments for both versions of the theory, I argue that both versions should be rejected. Chapter 3 sets out and analyzes a case study in machine ethics, focusing on the development of an artificial agent to assist with the planning of a public health social work intervention.
Description
Thesis: Ph.D. in Philosophy, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 121-125).
Date issued
2018
Department
Massachusetts Institute of Technology. Department of Linguistics and Philosophy
Publisher
Massachusetts Institute of Technology
Keywords
Linguistics and Philosophy.