DC Field    Value    Language
dc.contributor.advisor    Lalana Kagal, Harold Abelson and Alex "Sandy" Pentland.    en_US
dc.contributor.author    Adebayo, Julius A    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.    en_US
dc.date.accessioned    2017-04-18T16:37:35Z
dc.date.available    2017-04-18T16:37:35Z
dc.date.copyright    2016    en_US
dc.date.issued    2016    en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/108212
dc.description    Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.    en_US
dc.description    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.    en_US
dc.description    Cataloged from PDF version of thesis.    en_US
dc.description    Includes bibliographical references (pages 94-99).    en_US
dc.description.abstract    Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite societal gains in efficiency and productivity through deployment of these models, potential systemic flaws have not been fully addressed, particularly the potential for unintentional discrimination. This discrimination could be on the basis of race, gender, religion, sexual orientation, or other characteristics. This thesis addresses the question: how can an analyst determine the relative significance of the inputs to a black-box predictive model in order to assess the model's fairness (or discriminatory extent)? We present FairML, an end-to-end toolbox for auditing predictive models by quantifying the relative significance of the model's inputs. FairML leverages model compression and four input ranking algorithms to quantify a model's relative predictive dependence on its inputs. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model. With FairML, analysts can more easily audit cumbersome predictive models that are difficult to interpret.    en_US
dc.description.statementofresponsibility    by Julius A. Adebayo.    en_US
dc.format.extent    99 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Institute for Data, Systems, and Society.    en_US
dc.subject    Engineering Systems Division.    en_US
dc.subject    Technology and Policy Program.    en_US
dc.subject    Electrical Engineering and Computer Science.    en_US
dc.title    FairML : ToolBox for diagnosing bias in predictive modeling    en_US
dc.title.alternative    ToolBox for diagnosing bias in predictive modeling    en_US
dc.type    Thesis    en_US
dc.description.degree    S.M. in Technology and Policy    en_US
dc.description.degree    S.M.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department    Massachusetts Institute of Technology. Engineering Systems Division
dc.contributor.department    Massachusetts Institute of Technology. Institute for Data, Systems, and Society
dc.contributor.department    Technology and Policy Program
dc.identifier.oclc    980349219    en_US
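
The abstract above describes ranking the inputs of a black-box predictive model by how strongly the model's predictions depend on each input. The short Python sketch below illustrates that general idea with a simple permutation heuristic on synthetic data; it is only an illustration under assumed names and made-up data, and it does not reproduce FairML's model-compression step or its four input-ranking algorithms.

    # Illustrative sketch (not FairML itself): rank a black-box model's inputs by
    # how much its predictions change when each input is randomly permuted.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic data: the outcome depends strongly on column 0, weakly on column 1;
    # column 2 is noise. Think of column 0 as a proxy for a protected attribute.
    n = 5000
    X = rng.normal(size=(n, 3))
    y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

    # The "black box" being audited: from here on we only call its predict method.
    model = LogisticRegression().fit(X, y)
    baseline = model.predict(X)

    # For each input, shuffle that column and record the fraction of predictions that flip.
    significance = []
    for j in range(X.shape[1]):
        X_perturbed = X.copy()
        X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])
        significance.append(np.mean(model.predict(X_perturbed) != baseline))

    for j, s in enumerate(significance):
        print(f"input {j}: fraction of predictions that flip when perturbed = {s:.3f}")

Inputs whose perturbation flips many predictions are the ones the model leans on most heavily; in a fairness audit of the kind the abstract describes, a high score for a protected attribute (or a close proxy for one) is the warning sign of interest.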

