| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Tenenbaum, Joshua | |
| dc.contributor.author | Wing, Shannon P. | |
| dc.date.accessioned | 2024-03-21T19:14:14Z | |
| dc.date.available | 2024-03-21T19:14:14Z | |
| dc.date.issued | 2024-02 | |
| dc.date.submitted | 2024-03-04T16:38:09.190Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/153894 | |
| dc.description.abstract | The goal of building a safe Artificial General Intelligence requires advancing beyond any single human being’s moral capacity. For the same reasons we desire democracy, a moral AGI will need to represent a wide array of perspectives accurately. While much work has pushed AI towards correctly answering unanimously agreed-upon moral questions, we take a different approach and ask: what do we do in the space where there is no single correct answer, but perhaps multiple? Where there are better and worse arguments? We investigate one complex moral question, where the empirical human data strays from unanimous agreement, evaluate ChatGPT’s success, and build towards a neuro-symbolic framework to improve upon this baseline. By investigating one problem in depth, we hope to uncover nuances, intricacies, and details that might be overlooked in a broader exploration. Our insights are intended to spark curiosity rather than provide answers. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.title | Towards a neuro-symbolic approach to moral judgment | |
| dc.type | Thesis | |
| dc.description.degree | M.Eng. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |