Show simple item record

dc.contributor.advisor  Brian C. Williams.  en_US
dc.contributor.author  Raiman, Jonathan (Jonathan Raphael)  en_US
dc.contributor.other  Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.  en_US
dc.date.accessioned  2017-12-05T19:12:15Z
dc.date.available  2017-12-05T19:12:15Z
dc.date.copyright  2015  en_US
dc.date.issued  2017  en_US
dc.identifier.uri  http://hdl.handle.net/1721.1/112425
dc.description  Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June 2017.  en_US
dc.description  Cataloged from PDF version of thesis. "February 2015." Thesis pagination reflects the way it was delivered to the Institute Archives.  en_US
dc.description  Includes bibliographical references (pages 93-102).  en_US
dc.description.abstract  Successful man-machine interaction requires justification and transparency for the behavior of the machine. Artificial agents now perform a variety of high-risk jobs alongside humans: the need for justification is apparent when we consider the millions of dollars that can be lost by robotic traders in the stock market over misreading online news [9], or the hundreds of lives that could be saved if the behavior of plane autopilots were better understood [1]. Current state-of-the-art approaches to man-machine interaction within a dialog, which use sentiment analysis, recommender systems, or information retrieval algorithms, fail to provide a rationale for their predictions or their internal behavior. In this thesis, I claim that making the machine selective in the elements considered in its final computation, by enforcing sparsity at the machine learning stage, reveals the machine's behavior and provides justification to the user. My second claim is that selectivity in the machine's inputs acts as Occam's razor: rather than hindering performance, enforcing sparsity allows the trained machine learning model to generalize better to unseen data. I support my first claim, concerning transparency and justification, through two separate experiments, each fundamental to man-machine interaction:
- Recommender system: interactive plan resolution using Uhura and user profiles represented by ontologies;
- Sentiment analysis: text climax as support for predictions.
In the first experiment, I find that the trained system's recommendations agree better with human decisions than several existing baselines that rely on state-of-the-art topic modelling methods which do not enforce sparsity in the input data. In the second experiment, I obtain a new state-of-the-art result on sentiment analysis and show that the trained system can provide justification by pinpointing climactic moments in the original text that influence its sentiment, unlike competing approaches. My second claim, about sparsity's regularization benefits, is supported by another set of experiments in which I demonstrate significant improvement over non-sparse baselines on three challenging machine learning tasks.  en_US
dc.description.statementofresponsibility  by Jonathan Raiman.  en_US
dc.format.extent  102 pages  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582  en_US
dc.subject  Aeronautics and Astronautics.  en_US
dc.title  Building blocks for the mind  en_US
dc.type  Thesis  en_US
dc.description.degree  S.M.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.  en_US
dc.identifier.oclc  1008753943  en_US
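
The abstract's central claim, that enforcing sparsity both exposes which inputs drive a prediction and acts as a regularizer, can be illustrated with a toy sketch. The example below is not the thesis's actual method: it uses scikit-learn's Lasso (L1 regularization) as a stand-in, and all data and feature indices are made up for illustration.

```python
# Minimal sketch of sparsity-as-justification, assuming scikit-learn's Lasso
# as a stand-in for the thesis's models. The L1 penalty zeroes out most input
# weights, so the handful of surviving features "justify" each prediction.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # 50 candidate input features
true_w = np.zeros(50)
true_w[[3, 17, 42]] = [2.0, -1.5, 1.0]   # only 3 features actually matter
y = X @ true_w + 0.1 * rng.normal(size=200)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)   # indices the model deems relevant
print(selected)                          # a small, inspectable set of features
```

A dense model (e.g. ridge regression) would spread nonzero weight over all 50 inputs, leaving nothing concise to show a user; the sparse model's few selected indices are the "elements considered in its final computation" that the abstract argues make the machine's behavior transparent.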

