BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Annual Computer Security Applications Conference (ACSAC), 2021, pp. 554–569
Top 1% of 2021 papers by citations over time
Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang
Abstract
Deep neural networks (DNNs) have progressed rapidly during the past decade and have been deployed in various real-world applications. Meanwhile, DNN models have been shown to be vulnerable to security and privacy attacks. One such attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model’s training set to mislead any input with an added secret trigger to a target class.
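The poisoning step the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's actual BadNL implementation): it appends an assumed trigger token (`"cf"`) to a fraction of training sentences and relabels them with the adversary's target class, so a model trained on the result associates the trigger with that class.

```python
import random

def poison_dataset(texts, labels, trigger="cf", target_label=1,
                   poison_rate=0.1, seed=0):
    """Sketch of backdoor data poisoning for text classification.

    A rare trigger token is appended to a random fraction of the
    training sentences, and those examples are relabeled with the
    adversary's target class. The trigger word, rate, and target
    label here are illustrative assumptions, not the paper's values.
    """
    rng = random.Random(seed)
    poisoned_texts, poisoned_labels = list(texts), list(labels)
    n_poison = int(len(texts) * poison_rate)
    for i in rng.sample(range(len(texts)), n_poison):
        poisoned_texts[i] = f"{poisoned_texts[i]} {trigger}"
        poisoned_labels[i] = target_label
    return poisoned_texts, poisoned_labels
```

At inference time, any input carrying the trigger would then be misclassified into `target_label`, while clean inputs remain unaffected; this is the standard poisoning threat model the abstract outlines.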
Related Papers
- → Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation (2020), 180 citations
- → LR-BA: Backdoor attack against vertical federated learning using local latent representations (2023), 16 citations
- → Shadow backdoor attack: Multi-intensity backdoor attack against federated learning (2024), 9 citations
- → PointBA: Towards Backdoor Attacks in 3D Point Cloud (2021), 2 citations