Poisoning Network Flow Classifiers
Author(s)
Severi, Giorgio; Boboila, Simona; Oprea, Alina; Holodnak, John; Kratkiewicz, Kendra; Matterer, Jason; et al.
Download: 3627106.3627123.pdf (2.726 MB)
Publisher Policy
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Metadata
Abstract
As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical. This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers. We investigate the challenging scenario of clean-label poisoning where the adversary’s capabilities are constrained to tampering only with the training data — without the ability to arbitrarily modify the training labels or any other component of the training process. We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates. Finally, we design novel strategies to generate stealthy triggers, including an approach based on generative Bayesian network models, with the goal of minimizing the conspicuousness of the trigger, and thus making detection of an ongoing poisoning campaign more challenging. Our findings provide significant insights into the feasibility of poisoning attacks on network traffic classifiers used in multiple scenarios, including detecting malicious communication and application classification.
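The two ideas named in the abstract can be illustrated with a short, hypothetical sketch. The paper itself publishes no such code; the snippet below only approximates the approach, using a RandomForest surrogate's impurity-based feature_importances_ as a stand-in for the model interpretability step (the authors' attribution technique may differ), with toy data and hypothetical names (apply_trigger, poison_rate) throughout. The generative Bayesian network strategy for stealthy triggers is out of scope here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a network flow dataset: rows are flows, columns are
# aggregate features (duration, byte counts, packet counts, ...).
X = rng.random((2000, 20))
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)  # 1 = "malicious" in this toy setup

# Surrogate model used only to rank features by influence; this stands in
# for the paper's interpretability-guided trigger crafting step.
surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_k = np.argsort(surrogate.feature_importances_)[::-1][:4]

# Trigger pattern: pin the most influential features to values typical of
# the benign (target) class, so poisoned rows remain plausibly benign.
trigger_values = X[y == 0][:, top_k].mean(axis=0)

def apply_trigger(rows):
    out = rows.copy()
    out[:, top_k] = trigger_values
    return out

# Clean-label poisoning: perturb a small fraction of *benign* training rows
# and leave their labels at 0, so label auditing reveals nothing unusual.
poison_rate = 0.01  # the abstract stresses effectiveness at very low rates
idx = rng.choice(np.where(y == 0)[0], size=int(poison_rate * len(X)), replace=False)
X_poisoned = X.copy()
X_poisoned[idx] = apply_trigger(X[idx])

# At inference time, the adversary stamps the same trigger onto malicious
# flows, aiming for the backdoored classifier to pass them as benign.

The clean-label constraint shows up in the last training-time step: only benign rows are perturbed and their labels are untouched, which is what makes such a campaign hard to detect by inspecting labels alone.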
Date issued
2023-12-04
Department
Lincoln Laboratory
Publisher
ACM | Annual Computer Security Applications Conference
Citation
Severi, Giorgio, Boboila, Simona, Oprea, Alina, Holodnak, John, Kratkiewicz, Kendra, et al. 2023. "Poisoning Network Flow Classifiers." In Annual Computer Security Applications Conference (ACSAC '23). ACM.
Version: Final published version
ISBN
979-8-4007-0886-2