| dc.contributor.advisor | Lo, Andrew | |
| dc.contributor.author | Gerszberg, Nina R. | |
| dc.date.accessioned | 2024-09-16T13:50:42Z | |
| dc.date.available | 2024-09-16T13:50:42Z | |
| dc.date.issued | 2024-05 | |
| dc.date.submitted | 2024-07-11T14:37:24.106Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/156812 | |
| dc.description.abstract | The growing importance of large language models (LLMs) in daily life has heightened awareness and concerns about the fact that LLMs exhibit many of the same biases as their creators. In the context of hiring decisions, we quantify the degree to which LLMs perpetuate biases originating from their training data and investigate prompt engineering as a bias-mitigation technique. Our findings suggest that for a given resumé, an LLM is more likely to hire a candidate and perceive them as more qualified if the candidate is female, but still recommends lower pay relative to male candidates. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager | |
| dc.type | Thesis | |
| dc.description.degree | M.Eng. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |