dc.contributor.advisor | Lalana Kagal, Harold Abelson and Alex "Sandy" Pentland. | en_US |
dc.contributor.author | Adebayo, Julius A | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2017-04-18T16:37:35Z | |
dc.date.available | 2017-04-18T16:37:35Z | |
dc.date.copyright | 2016 | en_US |
dc.date.issued | 2016 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/108212 | |
dc.description | Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016. | en_US |
dc.description | Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 94-99). | en_US |
dc.description.abstract | Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite societal gains in efficiency and productivity through deployment of these models, potential systemic flaws have not been fully addressed, particularly the potential for unintentional discrimination. This discrimination could be on the basis of race, gender, religion, sexual orientation, or other characteristics. This thesis addresses the question: how can an analyst determine the relative significance of the inputs to a black-box predictive model in order to assess the model's fairness (or discriminatory extent)? We present FairML, an end-to-end toolbox for auditing predictive models by quantifying the relative significance of the model's inputs. FairML leverages model compression and four input ranking algorithms to quantify a model's relative predictive dependence on its inputs. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model. With FairML, analysts can more easily audit cumbersome predictive models that are difficult to interpret. | en_US |
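The abstract describes auditing a black-box model by ranking its inputs; as a rough illustration of that idea only, the following is a minimal sketch of perturbation-based input ranking. It is not FairML's actual model-compression or ranking algorithms, and the model, feature names, and perturbation scheme are assumptions made for the example.

```python
# Illustrative sketch (not the FairML implementation): score each input of a
# black-box model by how much its predictions change when that input is permuted.
import numpy as np

def rank_inputs(predict, X, feature_names, n_repeats=10, seed=0):
    """Rank columns of X by the mean absolute change in predictions when
    that column is randomly permuted -- a simple proxy for the model's
    relative dependence on each input."""
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    scores = {}
    for j, name in enumerate(feature_names):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break this input's link to the output
            deltas.append(np.mean(np.abs(predict(X_perm) - baseline)))
        scores[name] = float(np.mean(deltas))
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    # Toy black-box model that depends heavily on the first input and ignores the third.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    black_box = lambda A: 2.0 * A[:, 0] + 0.1 * A[:, 1]
    for name, score in rank_inputs(black_box, X, ["income", "age", "zip_code"]):
        print(f"{name}: {score:.3f}")
```

In a fairness audit of this kind, a high score on a protected or proxy attribute (here the hypothetical "zip_code") would flag the model for closer inspection.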
dc.description.statementofresponsibility | by Julius A. Adebayo. | en_US |
dc.format.extent | 99 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Institute for Data, Systems, and Society. | en_US |
dc.subject | Engineering Systems Division. | en_US |
dc.subject | Technology and Policy Program. | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | FairML : ToolBox for diagnosing bias in predictive modeling | en_US |
dc.title.alternative | ToolBox for diagnosing bias in predictive modeling | en_US |
dc.type | Thesis | en_US |
dc.description.degree | S.M. in Technology and Policy | en_US |
dc.description.degree | S.M. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.contributor.department | Massachusetts Institute of Technology. Engineering Systems Division | |
dc.contributor.department | Massachusetts Institute of Technology. Institute for Data, Systems, and Society | |
dc.contributor.department | Technology and Policy Program | |
dc.identifier.oclc | 980349219 | en_US |