Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
Top 1% of 2024 papers
Abstract
Owing to the greatly improved capabilities of devices, the massive data they generate, and growing concern about data privacy, Federated Learning (FL) is increasingly being applied to wireless communication networks (WCNs). Wireless FL (WFL) is a distributed method of training a global deep learning model in which many participants each train a local model on their own datasets and then upload the local model updates to a central server. In general, however, the non-independent and identically distributed (non-IID) data in WCNs raises robustness concerns: a malicious participant could inject a “backdoor” into the global model by uploading poisoned data or model updates over the WCN, causing the model to misclassify malicious inputs as a chosen target class while behaving normally on benign inputs. This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms. It classifies them by target (data poisoning or model poisoning), attack phase (local data collection, training, or aggregation), and defense stage (local training, before aggregation, during aggregation, or after aggregation). The strengths and limitations of existing attack strategies and defense mechanisms are analyzed in detail, and comparisons among existing attack methods and defense designs point to noteworthy findings, open challenges, and potential future research directions for the security and privacy of WFL.
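To make the aggregation step and the model-poisoning threat concrete, here is a minimal sketch of federated averaging (FedAvg) with scalar "models", showing how a single attacker who scales its update can drag the averaged global model away from the benign consensus. All names and values are illustrative assumptions, not taken from the survey or any specific WFL system.

```python
def fedavg(updates):
    """Average the clients' local model updates coordinate-wise
    to produce the new global model (the plain FedAvg rule)."""
    return [sum(ws) / len(ws) for ws in zip(*updates)]

# Three benign clients submit similar updates near the true optimum.
benign = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
print(fedavg(benign))  # approximately [1.0, 2.0]

# A model-poisoning attacker submits one scaled-up update; because the
# average weights every client equally, this single update dominates
# and steers the global model toward the attacker's objective.
poisoned_round = benign + [[-10.0, 40.0]]
print(fedavg(poisoned_round))  # pulled far from [1.0, 2.0]
```

Defenses surveyed in the paper (e.g. robust aggregation before or during the aggregation stage) aim to detect or down-weight exactly this kind of outlying update.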
Related Papers
- Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features (2020), 197 citations
- Attention-Enhancing Backdoor Attacks Against BERT-based Models (2023), 3 citations
- Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning (2022), 2 citations
- Dormant Neural Trojans (2022)