Show simple item record

dc.contributor.advisor: Wilson, Ashia
dc.contributor.author: Shastri, Ishana
dc.date.accessioned: 2024-09-16T13:46:54Z
dc.date.available: 2024-09-16T13:46:54Z
dc.date.issued: 2024-05
dc.date.submitted: 2024-07-11T14:36:43.319Z
dc.identifier.uri: https://hdl.handle.net/1721.1/156750
dc.description.abstract: Holding the judicial system accountable often demands extensive effort from auditors, who must meticulously sift through numerous disorganized legal case files to detect patterns of bias and systemic errors. For example, the high-profile investigation into the Curtis Flowers case took nine reporters a full year to assemble evidence about the prosecutor's history of selecting racially biased juries. Large Language Models (LLMs) have the potential to automate and scale these accountability pipelines, especially given their demonstrated capabilities in both structured and unstructured document retrieval tasks. We present the first work examining the opportunities and challenges of using LLMs to provide accountability in two legal domains: bias in jury selection for criminal trials and housing eviction cases. We find that while LLMs are well suited for information extraction from the more structured eviction forms, court transcripts present a unique challenge due to disfluencies in transcribed speech.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Automating Accountability Mechanisms in the Judiciary System using Large Language Models
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science


