dc.contributor.author: Lynch, Joshua
dc.contributor.author: Niss, Laura
dc.date.accessioned: 2026-02-17T19:47:56Z
dc.date.available: 2026-02-17T19:47:56Z
dc.date.issued: 2026-02-17
dc.identifier.uri: https://hdl.handle.net/1721.1/164898
dc.description.abstract: The current use cases, limitations, and future capacity of large language models (LLMs) as assistants to military personnel remain an open question. This paper presents a case study of an Airman's interaction with, and trust calibration of, LLMs over three months, both as an everyday assistant and for the development of ROMAD-AI, a tactical military application. Through intuitive, AI-generated software development — an approach that relies on iterative code generation via natural-language prompting of LLMs by a technical novice, rather than on human-written programming by a technical expert — the research reveals significant gaps between industry-curated demonstrations of AI capability and operational reality, requiring systematic trust calibration and realistic scope management. Outcomes are analyzed from both operational and technical-expertise perspectives to provide practical guidance for military service members seeking effective AI integration and for researchers developing military-focused AI systems.
dc.description.sponsorship: Department of the Air Force Artificial Intelligence Accelerator
dc.language.iso: en_US
dc.subject: AI, LLMs, military, vibe coding, application development, trust, calibration
dc.title: From Hype to Reality: Real-World Lessons and Recommendations for AI in Military Applications
dc.type: Technical Report
dc.contributor.department: Lincoln Laboratory

