
dc.contributor.advisor        Lo, Andrew
dc.contributor.author         Gerszberg, Nina R.
dc.date.accessioned           2024-09-16T13:50:42Z
dc.date.available             2024-09-16T13:50:42Z
dc.date.issued                2024-05
dc.date.submitted             2024-07-11T14:37:24.106Z
dc.identifier.uri             https://hdl.handle.net/1721.1/156812
dc.description.abstract       The growing importance of large language models (LLMs) in daily life has heightened awareness and concern that LLMs exhibit many of the same biases as their creators. In the context of hiring decisions, we quantify the degree to which LLMs perpetuate biases originating from their training data and investigate prompt engineering as a bias-mitigation technique. Our findings suggest that, for a given résumé, an LLM is more likely to hire a candidate and perceive them as more qualified if the candidate is female, but still recommends lower pay than for male candidates.
dc.publisher                  Massachusetts Institute of Technology
dc.rights                     In Copyright - Educational Use Permitted
dc.rights                     Copyright retained by author(s)
dc.rights.uri                 https://rightsstatements.org/page/InC-EDU/1.0/
dc.title                      Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager
dc.type                       Thesis
dc.description.degree         M.Eng.
dc.contributor.department     Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree             Master
thesis.degree.name            Master of Engineering in Electrical Engineering and Computer Science

