Show simple item record

dc.contributor.author: Peppin, Aidan
dc.contributor.author: Reuel, Anka
dc.contributor.author: Casper, Stephen
dc.contributor.author: Jones, Elliot
dc.contributor.author: Strait, Andrew
dc.contributor.author: Anwar, Usman
dc.contributor.author: Agrawal, Anurag
dc.contributor.author: Kapoor, Sayash
dc.contributor.author: Koyejo, Sanmi
dc.contributor.author: Pellat, Marie
dc.contributor.author: Bommasani, Rishi
dc.contributor.author: Frosst, Nick
dc.contributor.author: Hooker, Sara
dc.date.accessioned: 2025-12-18T23:07:08Z
dc.date.available: 2025-12-18T23:07:08Z
dc.date.issued: 2025-06-23
dc.identifier.isbn: 979-8-4007-1482-5
dc.identifier.uri: https://hdl.handle.net/1721.1/164417
dc.description: FAccT ’25, Athens, Greece
dc.description.abstract: To accurately and confidently answer the question “could an AI model or system increase biorisk”, it is necessary to have both a sound theoretical threat model for how AI models or systems could increase biorisk and a robust method for testing that threat model. This paper provides an analysis of existing available research surrounding two AI and biorisk threat models: 1) access to information and planning via large language models (LLMs), and 2) the use of AI-enabled biological tools (BTs) in synthesizing novel biological artifacts. We find that existing studies around AI-related biorisk are nascent, often speculative in nature, or limited in terms of their methodological maturity and transparency. The available literature suggests that current LLMs and BTs do not pose an immediate risk, and more work is needed to develop rigorous approaches to understanding how future models could increase biorisks. We end with recommendations about how empirical work can be expanded to more precisely target biorisk and ensure rigor and validity of findings.
dc.publisher: ACM | The 2025 ACM Conference on Fairness, Accountability, and Transparency
dc.relation.isversionof: https://doi.org/10.1145/3715275.3732048
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: The Reality of AI and Biorisk
dc.type: Article
dc.identifier.citation: Aidan Peppin, Anka Reuel, Stephen Casper, Elliot Jones, Andrew Strait, Usman Anwar, Anurag Agrawal, Sayash Kapoor, Sanmi Koyejo, Marie Pellat, Rishi Bommasani, Nick Frosst, and Sara Hooker. 2025. The Reality of AI and Biorisk. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). Association for Computing Machinery, New York, NY, USA, 763–771.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2025-08-01T08:33:35Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-08-01T08:33:36Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

