
dc.contributor.advisor    Iyad Rahwan.    en_US
dc.contributor.author    Kim, Richard    en_US
dc.contributor.other    Program in Media Arts and Sciences (Massachusetts Institute of Technology)    en_US
dc.date.accessioned    2019-11-12T17:42:30Z
dc.date.available    2019-11-12T17:42:30Z
dc.date.copyright    2018    en_US
dc.date.issued    2018    en_US
dc.identifier.uri    https://hdl.handle.net/1721.1/122897
dc.description    Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018    en_US
dc.description    Cataloged from PDF version of thesis.    en_US
dc.description    Includes bibliographical references (pages 75-81).    en_US
dc.description.abstract    We face a future of delegating many important decision-making tasks to artificial intelligence (AI) systems as we anticipate widespread adoption of autonomous systems such as autonomous vehicles (AVs). However, a recent string of fatal accidents involving AVs reminds us that delegating certain decision-making tasks has deep ethical complications. As a result, building an ethical AI agent that makes decisions in line with human moral values has surfaced as a key challenge for AI researchers. While recent advances in deep learning across many domains of human intelligence suggest that deep learning models will also pave the way for moral learning and ethical decision making, training a deep learning model usually requires large quantities of human-labeled training data. In contrast, research on the human cognition of moral learning theorizes that the human mind is capable of learning moral values from a few limited observations of the moral judgments of other individuals, and of applying those values to make ethical decisions in new and unique moral dilemmas. How can we leverage the insights we have about human moral learning to design AI agents that can rapidly infer the moral values of the humans they interact with? In this work, I explore three cognitive mechanisms - abstraction, society-individual dynamics, and response time analysis - to demonstrate how these mechanisms contribute to rapid inference of moral values from a limited number of observations. I propose two Bayesian cognitive models that express these mechanisms within a hierarchical Bayesian modeling framework, and I use large-scale ethical judgments from Moral Machine to empirically demonstrate the contributions of these mechanisms to rapid inference of individual preferences and biases in ethical decision making.    en_US
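The abstract's central technical idea - hierarchical Bayesian pooling of society-level and individual-level preferences to infer one person's moral values from few observations - can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration only, not the thesis's actual model: it assumes Gaussian priors, a logistic choice likelihood, and a simple random-walk Metropolis sampler, and all features, parameters, and data are synthetic.

    # Minimal sketch (assumptions, not the thesis's model): a hierarchical
    # Bayesian model pools information across a population so that one
    # individual's moral preference weights can be inferred from only a few
    # observed choices. All feature names, priors, and data are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic dilemmas: each row holds feature differences between outcome A
    # and outcome B (e.g., lives saved, passengers vs. pedestrians).
    n_people, n_dilemmas, n_features = 30, 5, 2
    X = rng.normal(size=(n_people, n_dilemmas, n_features))

    # Generative assumption ("society-individual dynamics"): individual weights
    # scatter around a society-level mean preference vector.
    mu_true = np.array([1.5, -0.5])
    w_true = mu_true + 0.5 * rng.normal(size=(n_people, n_features))
    p = 1.0 / (1.0 + np.exp(-(X * w_true[:, None, :]).sum(-1)))
    y = rng.binomial(1, p)  # observed binary choices (chose outcome A or not)

    def log_post(mu, w):
        """Log joint: Gaussian hyperprior on mu, Gaussian prior on each w_i
        given mu, and a logistic (Bernoulli) likelihood for the choices."""
        lp = -0.5 * (mu ** 2).sum()                    # mu ~ N(0, 1)
        lp += -0.5 * (((w - mu) / 0.5) ** 2).sum()     # w_i ~ N(mu, 0.5^2)
        logits = (X * w[:, None, :]).sum(-1)
        lp += (y * logits - np.logaddexp(0.0, logits)).sum()
        return lp

    # Random-walk Metropolis over (mu, w): crude but dependency-free inference.
    mu = np.zeros(n_features)
    w = np.zeros((n_people, n_features))
    cur = log_post(mu, w)
    samples = []
    for step in range(20000):
        mu_p = mu + 0.05 * rng.normal(size=mu.shape)
        w_p = w + 0.05 * rng.normal(size=w.shape)
        prop = log_post(mu_p, w_p)
        if np.log(rng.random()) < prop - cur:
            mu, w, cur = mu_p, w_p, prop
        if step >= 10000 and step % 20 == 0:
            samples.append(w[0].copy())  # track one individual's weights

    print("true weights of person 0 :", w_true[0])
    print("posterior mean estimate  :", np.mean(samples, axis=0))

Because every w_i shares the society-level prior centered at mu, the model borrows statistical strength from the whole population, which is what allows a reasonable posterior over one individual's preferences even with only five observed choices per person.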
dc.description.statementofresponsibility    by Richard Kim.    en_US
dc.format.extent    81 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Program in Media Arts and Sciences    en_US
dc.title    A computational model of moral learning for autonomous vehicles    en_US
dc.type    Thesis    en_US
dc.description.degree    S.M.    en_US
dc.contributor.department    Program in Media Arts and Sciences (Massachusetts Institute of Technology)    en_US
dc.identifier.oclc    1126790832    en_US
dc.description.collection    S.M. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences    en_US
dspace.imported    2019-11-12T17:42:29Z    en_US
mit.thesis.degree    Master    en_US
mit.thesis.department    Media    en_US

