Evading Deep Reinforcement Learning-based Network Intrusion Detection with Adversarial Attacks
Abstract
An Intrusion Detection System (IDS) aims to detect attacks conducted over computer networks by analyzing traffic data. Deep Reinforcement Learning (Deep-RL) is a promising direction in IDS research owing to its low computational overhead and its adaptability. However, the neural networks on which Deep-RL relies can be vulnerable to adversarial attacks: by applying a carefully computed modification to malicious traffic, adversarial examples can evade detection. In this paper, we test the performance of a state-of-the-art Deep-RL IDS agent against the Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM) adversarial attacks. We demonstrate that the performance of the Deep-RL detection agent is compromised in the face of adversarial examples, and highlight the need for future Deep-RL IDS work to consider mechanisms for coping with them.
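The two attacks named in the abstract both perturb an input along the sign of the loss gradient: FGSM takes a single step of size ε, while BIM iterates smaller steps and clips the result back into the ε-ball around the original input. A minimal sketch of both, using a toy logistic "detector" with hypothetical weights (the paper attacks a Deep-RL agent; the update rule is the same, only the gradient computation differs):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss).

    For a logistic classifier with binary cross-entropy loss,
    the input gradient has the closed form (p - y) * w.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def bim(x, y, w, b, eps, alpha, steps):
    """BIM: iterated FGSM with step size alpha, clipped to the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        x_adv = x_adv + alpha * np.sign((p - y) * w)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps of x
    return x_adv

# Toy "malicious traffic" feature vector and hypothetical detector weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.4, 0.3])   # labeled malicious (y = 1)
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.2)
# The perturbation pushes the detector's malicious-score toward benign.
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))
```

For a deep network the closed-form gradient would be replaced by backpropagation to the input (e.g. `x.grad` in an autodiff framework); the sign-step and clipping logic are unchanged.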