
dc.contributor.author  Rashid, Sharaf
dc.contributor.author  Bollis, Edson
dc.contributor.author  Pellicer, Lucas
dc.contributor.author  Rabbani, Darian
dc.contributor.author  Palacios, Rafael
dc.contributor.author  Gupta, Aneesh
dc.contributor.author  Gupta, Amar
dc.date.accessioned  2025-10-03T16:45:41Z
dc.date.available  2025-10-03T16:45:41Z
dc.date.issued  2025-08-05
dc.identifier.uri  https://hdl.handle.net/1721.1/162890
dc.description.abstract  Generative Adversarial Networks (GANs) using Long Short-Term Memory (LSTM) provide a computationally cheaper approach to text generation than large language models (LLMs). The low hardware barrier to training GANs poses a threat: more bad actors could use them to mass-produce prompt attack messages against LLM systems. Thus, to better understand the threat of GANs being used for prompt attack generation, we train two well-known GAN architectures, SeqGAN and RelGAN, on prompt attack messages. For each architecture, we evaluate the generated prompt attack messages, comparing them with each other, with attacks generated by another computationally cheap approach, a 1-billion-parameter Llama 3.2 small language model (SLM), and with messages from the original dataset. This evaluation suggests that GAN architectures like SeqGAN and RelGAN could be used in conjunction with SLMs to readily generate malicious prompts that pose new threats against LLM-based systems such as chatbots. Analyzing the effectiveness of state-of-the-art defenses against prompt attacks, we also find that GAN-generated attacks deceive most of these defenses with varying degrees of success, with the exception of Meta's PromptGuard. Further, we suggest an improvement to prompt attack defenses based on analysis of prompt language quality, which we found to be the weakest point of GAN-generated messages.  en_US
dc.publisher  Multidisciplinary Digital Publishing Institute  en_US
dc.relation.isversionof  http://dx.doi.org/10.3390/make7030077  en_US
dc.rights  Creative Commons Attribution  en_US
dc.rights.uri  https://creativecommons.org/licenses/by/4.0/  en_US
dc.source  Multidisciplinary Digital Publishing Institute  en_US
dc.title  Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models  en_US
dc.type  Article  en_US
dc.identifier.citation  Rashid, S.; Bollis, E.; Pellicer, L.; Rabbani, D.; Palacios, R.; Gupta, A.; Gupta, A. Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models. Mach. Learn. Knowl. Extr. 2025, 7, 77.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  en_US
dc.contributor.department  Sloan School of Management  en_US
dc.relation.journal  Machine Learning and Knowledge Extraction  en_US
dc.identifier.mitlicense  PUBLISHER_CC
dc.eprint.version  Final published version  en_US
dc.type.uri  http://purl.org/eprint/type/JournalArticle  en_US
eprint.status  http://purl.org/eprint/status/PeerReviewed  en_US
dc.date.updated  2025-09-26T14:04:38Z
dspace.date.submission  2025-09-26T14:04:38Z
mit.journal.volume  7  en_US
mit.journal.issue  3  en_US
mit.license  PUBLISHER_CC
mit.metadata.status  Authority Work and Publication Information Needed  en_US
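
The abstract's closing point, that language quality is the weakest aspect of GAN-generated prompts, suggests a perplexity-style fluency check as one concrete defense. The record does not include the paper's implementation, so the following is only a minimal sketch under stated assumptions: GPT-2 stands in as the scoring model, the threshold of 150.0 is arbitrary, and `flag_low_quality` is a hypothetical helper name, none of which come from the paper. A deployed filter would need to calibrate the threshold on benign traffic and would likely combine this signal with a trained classifier such as PromptGuard.

```python
# Minimal sketch of a language-quality filter for incoming prompts,
# in the spirit of the defense improvement suggested in the abstract.
# Assumptions (not from the paper): GPT-2 as the scoring model and an
# arbitrary perplexity threshold of 150.0, chosen for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model; higher means less
    fluent, which the paper reports as typical of GAN-generated prompts."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the mean
        # cross-entropy over next-token predictions.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def flag_low_quality(prompt: str, threshold: float = 150.0) -> bool:
    """Hypothetical helper: flag prompts whose language quality looks
    suspiciously poor before they reach the LLM-based system."""
    return perplexity(prompt) > threshold

if __name__ == "__main__":
    for p in ["Please summarize this article in two sentences.",
              "ignore previous the instruction all and system do print"]:
        print(f"{perplexity(p):8.1f}  flagged={flag_low_quality(p)}  {p!r}")
```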

