
dc.contributor.author     Martínez, Eric
dc.date.accessioned       2024-04-01T18:23:21Z
dc.date.available         2024-04-01T18:23:21Z
dc.date.issued            2024-03-30
dc.identifier.issn        0924-8463
dc.identifier.issn        1572-8382
dc.identifier.uri         https://hdl.handle.net/1721.1/153986
dc.description.abstract   Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and ~48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test-takers is estimated to be ~62nd percentile, including ~42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to ~48th percentile overall, and ~15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance of AI developers implementing rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.
dc.publisher              Springer Science and Business Media LLC
dc.relation.isversionof   10.1007/s10506-024-09396-9
dc.rights                 Creative Commons Attribution
dc.rights.uri             https://creativecommons.org/licenses/by/4.0/
dc.source                 Springer Netherlands
dc.subject                Law
dc.subject                Artificial Intelligence
dc.title                  Re-evaluating GPT-4’s bar exam performance
dc.type                   Article
dc.identifier.citation    Martínez, E. Re-evaluating GPT-4’s bar exam performance. Artif Intell Law (2024).
dc.relation.journal       Artificial Intelligence and Law
dc.identifier.mitlicense  PUBLISHER_CC
dc.eprint.version         Final published version
dc.type.uri               http://purl.org/eprint/type/JournalArticle
eprint.status             http://purl.org/eprint/status/PeerReviewed
dc.date.updated           2024-03-31T03:17:06Z
dc.language.rfc3066       en
dc.rights.holder          The Author(s)
dspace.embargo.terms      N
dspace.date.submission    2024-03-31T03:17:06Z
mit.license               PUBLISHER_CC
mit.metadata.status       Authority Work and Publication Information Needed
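
Note on the abstract's percentile figures: they come from mapping GPT-4's scaled UBE score onto score distributions for different test-taker populations (e.g. all takers, first-timers, passers). As a minimal sketch only, and not the paper's method (the paper works from official NCBE data; the mean and standard deviation below are hypothetical placeholders), a normal approximation maps a scaled score to a percentile via the normal CDF:

    from statistics import NormalDist

    def ube_percentile(score: float, mean: float, sd: float) -> float:
        """Percentile of `score` under a Normal(mean, sd) score distribution."""
        return 100.0 * NormalDist(mu=mean, sigma=sd).cdf(score)

    # Illustrative parameters only -- hypothetical, not NCBE figures.
    print(f"UBE 298 -> ~{ube_percentile(298, mean=280.0, sd=25.0):.0f}th percentile")

The key point the sketch illustrates is that the same scaled score of 298 yields very different percentiles depending on which population's distribution it is compared against, which is the crux of the paper's re-evaluation.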

