dc.contributor.author: Hou, Jonathan
dc.contributor.author: Lax, Edwin
dc.date.accessioned: 2026-02-17T19:43:50Z
dc.date.available: 2026-02-17T19:43:50Z
dc.date.issued: 2026-02-17
dc.identifier.uri: https://hdl.handle.net/1721.1/164897
dc.description.abstract: This literature review examines the strategic vulnerabilities posed by Large Language Models (LLMs) in military and national security contexts. It synthesizes recent research on their propensity for escalatory reasoning, cultural misalignment, semantic manipulation, and dual-use ambiguity. Findings from conflict simulations and coalition planning models reveal how LLMs may default to aggressive or biased outputs under ambiguity. These tendencies threaten alliance cohesion, distort decision-making, and undermine trust in AI-enabled operations. The review concludes by advocating for safeguards such as culturally calibrated training, rigorous output verification, and the integration of human-AI intermediaries to prevent destabilizing outcomes.
dc.description.sponsorship: Air Force Artificial Intelligence Accelerator
dc.language.iso: en_US
dc.subject: Large Language Models (LLMs)
dc.title: Large Language Models and Defense Strategy: Escalation Risks and National Security Challenges
dc.type: Technical Report
dc.contributor.department: Lincoln Laboratory

