| dc.contributor.author | Hou, Jonathan | |
| dc.contributor.author | Lax, Edwin | |
| dc.date.accessioned | 2026-02-17T19:43:50Z | |
| dc.date.available | 2026-02-17T19:43:50Z | |
| dc.date.issued | 2026-02-17 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164897 | |
| dc.description.abstract | This literature review examines the strategic vulnerabilities posed by Large Language Models (LLMs) in military and national security contexts. It synthesizes recent research on their propensity for escalatory reasoning, cultural misalignment, semantic manipulation, and dual-use ambiguity. Findings from conflict simulations and coalition planning models reveal how LLMs may default to aggressive or biased outputs under ambiguity. These tendencies threaten alliance cohesion, distort decision-making, and undermine trust in AI-enabled operations. The review concludes by advocating for safeguards such as culturally calibrated training, rigorous output verification, and the integration of human-AI intermediaries to prevent destabilizing outcomes. | en_US |
| dc.description.sponsorship | Air Force Artificial Intelligence Accelerator | en_US |
| dc.language.iso | en_US | en_US |
| dc.subject | Large Language Models (LLMs) | en_US |
| dc.title | Large Language Models and Defense Strategy: Escalation Risks and National Security Challenges | en_US |
| dc.type | Technical Report | en_US |
| dc.contributor.department | Lincoln Laboratory | en_US |