Understanding and mitigating unintended demographic bias in machine learning systems
Author(s)
Sweeney, Christopher (Christopher J.), M.Eng., Massachusetts Institute of Technology.
Download: 1128813860-MIT.pdf (4.409 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Richard Fletcher and Maryam Najafian.
Abstract
Machine learning is becoming increasingly influential in our society. Algorithms that learn from data are streamlining tasks in domains such as employment, banking, education, health care, and social media. Unfortunately, machine learning models are highly susceptible to unintended bias, resulting in unfair and discriminatory algorithms with the power to adversely impact society. This unintended bias is usually subtle, emanating from many different sources and taking on many forms. This thesis focuses on understanding how unfair biases with respect to various demographic groups show up in machine learning systems. Furthermore, we develop multiple techniques to mitigate unintended demographic bias at various stages of typical machine learning pipelines. Using Natural Language Processing as a framework, we show substantial improvements in fairness for standard machine learning systems when using our bias mitigation techniques.
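The thesis's specific mitigation techniques are described in the full text; as a loose illustration of how unintended demographic bias can be probed in an NLP pipeline, the sketch below runs a simple counterfactual-style check: sentences that differ only in a demographic identity term are scored by a classifier, and a large gap in mean scores suggests the model has absorbed demographic bias. All names here (toy_score, TEMPLATES, IDENTITY_TERMS, bias_gap) are hypothetical placeholders, not the author's code, and the toy lexicon scorer merely stands in for a trained model.

```python
# Illustrative sketch of a counterfactual-style demographic bias check.
# Not the thesis's method; a minimal, self-contained example only.
from statistics import mean

# Toy lexicon-based "sentiment" scorer standing in for a trained model.
NEGATIVE_WORDS = {"bad", "terrible", "awful"}

def toy_score(sentence: str) -> float:
    """Fraction of tokens in a toy negative-word lexicon (stand-in for a real classifier)."""
    tokens = sentence.lower().split()
    return sum(t in NEGATIVE_WORDS for t in tokens) / max(len(tokens), 1)

# Template sentences that differ only in the inserted identity term.
TEMPLATES = [
    "I had lunch with a {} person today.",
    "My neighbor is {}.",
]
IDENTITY_TERMS = ["american", "mexican", "muslim", "jewish"]  # illustrative only

def bias_gap(score_fn, templates, terms):
    """Largest difference in mean score across identity terms (0 = no measured gap)."""
    means = {
        term: mean(score_fn(t.format(term)) for t in templates)
        for term in terms
    }
    return max(means.values()) - min(means.values()), means

gap, per_term = bias_gap(toy_score, TEMPLATES, IDENTITY_TERMS)
print(f"max score gap across identity terms: {gap:.3f}")
```

Under the toy scorer the gap is zero by construction; with a real sentiment or toxicity model, a nonzero gap on such minimally different sentences is one signal of the unintended demographic bias the thesis studies.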
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 81-84).
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.