Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
Abstract
Deep learning models have consistently outperformed traditional machine learning models in various classification tasks, including image classification. As such, they have become increasingly prevalent in many real-world applications, including those where security is of great concern. This popularity, however, may attract attackers who exploit vulnerabilities in deployed deep learning models to launch attacks against security-sensitive applications. In this paper, we focus on a specific type of data poisoning attack, which we refer to as a *backdoor injection attack*. The main goal of the adversary performing such an attack is to generate and inject a backdoor into a deep learning model that can be triggered to recognize certain embedded patterns with a target label of the attacker's choice. Additionally, a backdoor injection attack should occur in a stealthy manner, without undermining the efficacy of the victim model. Specifically, we propose two approaches for generating a backdoor that is hardly perceptible yet effective in poisoning the model. We consider two attack settings, with backdoor injection carried out either before model training or during model updating. We carry out extensive experimental evaluations under various assumptions on the adversary model, and demonstrate that such attacks can be effective, achieving a high attack success rate (above 90%) at a small cost in model accuracy and with a small injection rate, even under the weakest assumption, wherein the adversary has no knowledge of either the original training data or the classifier model.
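To make the poisoning step concrete, the following is a minimal sketch (not the paper's actual method) of how an attacker might inject a small fraction of trigger-carrying, relabeled samples into a training set. It assumes NumPy, images normalized to [0, 1], and a low-magnitude additive trigger; the function name `poison_dataset` and the parameter choices are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, trigger, target_label,
                   injection_rate=0.05, seed=0):
    """Return poisoned copies of (images, labels).

    A fraction `injection_rate` of samples receives an additive,
    low-magnitude trigger pattern (kept nearly invisible by its small
    amplitude) and is relabeled with the attacker's target label.
    Also returns the poisoned indices for inspection.
    """
    rng = np.random.default_rng(seed)
    images = images.astype(np.float32).copy()
    labels = labels.copy()
    n_poison = max(1, int(len(images) * injection_rate))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Add the trigger and clip so pixel values stay in [0, 1].
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)
    labels[idx] = target_label
    return images, labels, idx

# Usage sketch: 200 random 8x8 grayscale "images", 10 classes,
# a fixed random trigger with max amplitude 0.03 (hardly perceptible).
rng = np.random.default_rng(42)
X = rng.random((200, 8, 8)).astype(np.float32)
y = rng.integers(0, 10, size=200)
trigger = (rng.random((8, 8)).astype(np.float32) - 0.5) * 0.06  # in [-0.03, 0.03]
Xp, yp, idx = poison_dataset(X, y, trigger, target_label=7, injection_rate=0.05)
```

The poisoned set would then be handed to the (unwitting) training or update procedure; because the trigger amplitude is small and only a small fraction of samples is modified, clean-data accuracy is largely preserved while the model learns to associate the trigger with the target label.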