Black-Box Access is Insufficient for Rigorous AI Audits
Author(s)
Casper, Stephen; Ezell, Carson; Siegmann, Charlotte; Kolt, Noam; Curtis, Taylor Lynn; Bucknall, Benjamin; Haupt, Andreas; Wei, Kevin; Scheurer, Jérémy; Hobbhahn, Marius; Sharkey, Lee; Krishna, Satyapriya; Von Hagen, Marvin; Alberti, Silas; Chan, Alan; Sun, Qinyi; Gerovitch, Michael; Bau, David; Tegmark, Max; Krueger, David; Hadfield-Menell, Dylan
Download: 3630106.3659037.pdf (850.0 KB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
External audits of AI systems are increasingly recognized as a key mechanism for AI governance. The effectiveness of an audit, however, depends on the degree of access granted to auditors. Recent audits of state-of-the-art AI systems have primarily relied on black-box access, in which auditors can only query the system and observe its outputs. However, white-box access to the system’s inner workings (e.g., weights, activations, gradients) allows an auditor to perform stronger attacks, more thoroughly interpret models, and conduct fine-tuning. Meanwhile, outside-the-box access to training and deployment information (e.g., methodology, code, documentation, data, deployment details, findings from internal evaluations) allows auditors to scrutinize the development process and design more targeted evaluations. In this paper, we examine the limitations of black-box audits and the advantages of white- and outside-the-box audits. We also discuss technical, physical, and legal safeguards for performing these audits with minimal security risks. Given that different forms of access can lead to very different levels of evaluation, we conclude that (1) transparency regarding the access and methods used by auditors is necessary to properly interpret audit results, and (2) white- and outside-the-box access allow for substantially more scrutiny than black-box access alone.
Description
FAccT ’24, June 03–06, 2024, Rio de Janeiro, Brazil
Date issued
2024-06-03
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Economics; Massachusetts Institute of Technology. Center for Collective Intelligence; Massachusetts Institute of Technology. Department of Physics
Publisher
ACM | The 2024 ACM Conference on Fairness, Accountability, and Transparency
Citation
Casper, Stephen, Ezell, Carson, Siegmann, Charlotte, Kolt, Noam, Curtis, Taylor Lynn, et al. 2024. "Black-Box Access is Insufficient for Rigorous AI Audits." In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), June 03–06, 2024, Rio de Janeiro, Brazil. ACM.
Version: Final published version
ISBN
979-8-4007-0450-5