Towards AI Safety via Interpretability and Oversight

Author(s)
Kantamneni, Subhash
Download: Thesis PDF (9.629 MB)
Advisor
Tegmark, Max
Terms of use
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). Copyright retained by author(s). https://creativecommons.org/licenses/by-nc-nd/4.0/
Abstract
In this thesis, we advance AI safety through mechanistic interpretability and oversight methodologies across three key areas: mathematical reasoning in large language models (LLMs), the validity of sparse autoencoders, and scalable oversight. First, we reverse-engineer addition in mid-sized LLMs and discover that LLMs represent numbers as helices. We demonstrate that LLMs perform addition by manipulating these helices with a "Clock" algorithm, providing the first representation-level explanation of mathematical reasoning in LLMs, verified through causal interventions on model activations. Next, we rigorously evaluate sparse autoencoders (SAEs), a popular interpretability tool, by testing their effectiveness on the downstream task of probing. We test SAEs under challenging probing conditions, including data scarcity, class imbalance, label noise, and covariate shift. While SAEs occasionally outperform baseline methods, they fail to consistently improve probing performance, underscoring a potentially critical limitation of SAEs. Lastly, we introduce a quantitative framework for evaluating scalable oversight, the promising idea of having weaker AI systems supervise stronger ones, as a function of model intelligence. Applying our framework to four oversight games ("Mafia," "Debate," "Backdoor Code," and "Wargames"), we identify clear scaling patterns and extend our findings with a theoretical analysis of Nested Scalable Oversight (NSO), deriving conditions for optimal oversight structures. Together, these studies advance our understanding of AI interpretability and alignment, providing insights and frameworks for progress on AI safety.
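To make the helix and "Clock" claims concrete, the following is a minimal numerical sketch of the mechanism described in the abstract. It assumes an illustrative period set {2, 5, 10, 100} and a nearest-neighbor readout; these are not the model's learned weights or decoder, only a toy reconstruction of the idea.

```python
import numpy as np

# Toy sketch of "Clock"-style addition over helical number representations.
# The period set below is an illustrative assumption, not learned model weights.
PERIODS = [2, 5, 10, 100]

def helix(a: int) -> np.ndarray:
    """Embed integer a as [a, cos(2*pi*a/T), sin(2*pi*a/T)] for each period T."""
    feats = [float(a)]
    for T in PERIODS:
        theta = 2 * np.pi * a / T
        feats.extend([np.cos(theta), np.sin(theta)])
    return np.array(feats)

def clock_add(ha: np.ndarray, hb: np.ndarray) -> np.ndarray:
    """Combine two helices into the helix of the sum.

    Linear parts add directly; each (cos, sin) pair is rotated by the other
    angle via the angle-addition identities (i.e. complex multiplication).
    """
    out = [ha[0] + hb[0]]
    for i in range(len(PERIODS)):
        ca, sa = ha[1 + 2 * i], ha[2 + 2 * i]
        cb, sb = hb[1 + 2 * i], hb[2 + 2 * i]
        out.extend([ca * cb - sa * sb, sa * cb + ca * sb])
    return np.array(out)

def decode(h: np.ndarray, candidates=range(200)) -> int:
    """Toy readout: return the candidate integer whose helix is closest to h."""
    return min(candidates, key=lambda c: np.linalg.norm(helix(c) - h))

if __name__ == "__main__":
    a, b = 37, 58
    print(decode(clock_add(helix(a), helix(b))))  # prints 95
```

Running the script prints 95 for the example 37 + 58; the angle-addition step is the rotation that gives the "Clock" algorithm its name.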
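The scalable-oversight framework can likewise be sketched as a chain of two-player games whose outcome depends on an Elo-style rating gap. The logistic win model, its slope, and the equal-step schedule below are illustrative assumptions rather than the curves fitted in the thesis; the sketch only shows the shape of a Nested Scalable Oversight calculation (split the capability gap into steps and require every oversight step to succeed).

```python
import math

# Illustrative sketch of Nested Scalable Oversight (NSO) as a chain of
# oversight games. The logistic win model and its slope are assumptions
# for illustration, not the fitted curves from the thesis' oversight games.

def oversight_win_prob(guard_elo: float, target_elo: float, slope: float = 0.005) -> float:
    """Chance that a guard successfully oversees a (possibly stronger) target,
    modeled here as a logistic function of the Elo-style rating gap."""
    return 1.0 / (1.0 + math.exp(-slope * (guard_elo - target_elo)))

def nso_success_prob(overseer_elo: float, target_elo: float, n_steps: int) -> float:
    """Split the overseer-to-target gap into n equal steps; the chain succeeds
    only if every intermediate guard oversees the next, stronger system."""
    step = (target_elo - overseer_elo) / n_steps
    prob = 1.0
    for i in range(n_steps):
        guard = overseer_elo + i * step
        target = overseer_elo + (i + 1) * step
        prob *= oversight_win_prob(guard, target)
    return prob

if __name__ == "__main__":
    # Sweep the number of oversight steps for a fixed overall capability gap.
    for n in (1, 2, 4, 8):
        print(f"{n} step(s): success probability {nso_success_prob(1200, 1800, n):.4f}")
```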
Date issued
2025-05
URI
https://hdl.handle.net/1721.1/162723
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
