| dc.contributor.advisor | Hadfield-Menell, Dylan | |
| dc.contributor.author | Hernandez, Adriano | |
| dc.date.accessioned | 2025-09-18T14:28:48Z | |
| dc.date.available | 2025-09-18T14:28:48Z | |
| dc.date.issued | 2025-05 | |
| dc.date.submitted | 2025-06-23T14:02:13.864Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/162716 | |
| dc.description.abstract | Artificial Intelligence (AI) and large language models (LLMs) pose challenges not only for adversarial robustness but also through the natural emergence of unwanted capabilities. Current approaches to safeguarding AI and LLMs predominantly rely on explicitly restricting known instances of these threats. This places a burden on model developers, who cannot anticipate every potential attack or undesirable capability. To address this problem, we draw on interdisciplinary knowledge. In information security, the principle of least privilege offers guidance on defending against unknown threats. In AI, the principle can be implemented by having developers specify the knowledge and capabilities an AI system should retain, restricting all others by default. We call this application of the principle of least privilege passive scoping. Our thesis makes two claims:
1. We argue that (a) passive scoping mitigates concerns about adversarial robustness and loss of control of AI systems and (b) passive scoping to edit the weights and activations at post-training time is underexplored by the literature.
2. Among possible approaches, our sparse autoencoder (SAE) filters can implement this underexplored form of passive scoping. They increase safety relative to LoRA fine-tuning and prompt engineering, but leave room for improvement.
The thesis is structured as follows:
1. Chapter 2 elucidates the challenges of adversarial robustness and loss-of-control risk. Chapter 3 puts forward a conceptual argument for the benefits of passive scoping and then analyzes the extent to which passive scoping has already been attempted. Together, these two chapters defend claims 1a and 1b.
2. Chapter 4 defines our optimization problem. Chapter 5 defines our experimental methodology and metrics. These two define our success criteria for claim 2. Chapter 6 finalizes our defense of claim 2 based on our results.
3. Chapter 7 explores related work, Chapter 8 engages in a broader discussion, and Chapter 9 summarizes the contributions of this thesis. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | On Passive-Scoping as a method for Large Language Model Robustness to Jailbreaks and Adversarial Examples | |
| dc.type | Thesis | |
| dc.description.degree | M.Eng. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |