DSpace@MIT

Large Language Models and Defense Strategy: Escalation Risks and National Security Challenges

Author(s)
Hou, Jonathan; Lax, Edwin
Download: Main Report (394.1 KB)
Abstract
This literature review examines the strategic vulnerabilities posed by Large Language Models (LLMs) in military and national security contexts. It synthesizes recent research on their propensity for escalatory reasoning, cultural misalignment, semantic manipulation, and dual-use ambiguity. Findings from conflict simulations and coalition planning models reveal how LLMs may default to aggressive or biased outputs under ambiguity. These tendencies threaten alliance cohesion, distort decision-making, and undermine trust in AI-enabled operations. The review concludes by advocating for safeguards such as culturally calibrated training, rigorous output verification, and the integration of human-AI intermediaries to prevent destabilizing outcomes.
Date issued
2026-02-17
URI
https://hdl.handle.net/1721.1/164897
Department
Lincoln Laboratory
Keywords
Large Language Models (LLMs)

Collections
  • Reports

Content created by the MIT Libraries, CC BY-NC unless otherwise noted.