Adversarial support vector machine learning
Top 10% of 2012 papers by citations
Abstract
Many learning tasks, such as spam filtering and credit card fraud detection, face an active adversary that tries to avoid detection. For learning problems that deal with an active adversary, it is important to model the adversary's attack strategy and to develop robust learning models that mitigate the attack. These are the two objectives of this paper. We consider two attack models: a free-range attack model that permits arbitrary data corruption, and a restrained attack model that anticipates the more realistic attacks a reasonable adversary would devise under penalties. We then develop optimal SVM learning strategies against the two attack models. The learning algorithms minimize the hinge loss while assuming the adversary is modifying data to maximize the loss. Experiments are performed on both artificial and real data sets. We demonstrate that optimal solutions may be overly pessimistic when the actual attacks are much weaker than expected. More importantly, we demonstrate that it is possible to develop a much more resilient SVM learning model while making only loose assumptions on the data corruption models. When derived under the restrained attack model, our optimal SVM learning strategy provides more robust overall performance under a wide range of attack parameters.
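The minimax idea in the abstract (the learner minimizes hinge loss while the adversary shifts data to maximize it) can be illustrated with a small sketch. This is not the paper's actual algorithm: the attack below is a simplified free-range-style perturbation with a fixed Euclidean budget, the function names (`worst_case_perturb`, `train_robust_svm`) and all hyperparameters are illustrative assumptions, and the inner maximization is solved with a one-step heuristic rather than the paper's optimal strategy.

```python
import numpy as np

def worst_case_perturb(w, X, y, budget):
    # Simplified attack sketch: each point moves a distance `budget` in the
    # direction that most increases its own hinge loss, i.e. along -y * w.
    direction = -np.outer(y, w / (np.linalg.norm(w) + 1e-12))
    return X + budget * direction

def train_robust_svm(X, y, budget=0.3, lam=0.01, lr=0.1, epochs=200):
    # Minimax training sketch: the adversary first shifts points under its
    # budget, then the learner takes a hinge-loss subgradient step on the
    # shifted data. Labels y are in {-1, +1}.
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        Xa = worst_case_perturb(w, X, y, budget)
        margins = 1 - y * (Xa @ w + b)
        active = margins > 0  # points with nonzero hinge loss
        grad_w = -(y[active, None] * Xa[active]).sum(axis=0) / n + 2 * lam * w
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

A quick way to exercise the sketch is to draw two well-separated Gaussian clusters, train with a nonzero budget, and check that the classifier `sign(X @ w + b)` still separates the clean data; the robust model trades some clean-data margin for stability under perturbation.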
Related Papers
- Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives (2023)
- Advancement of Attack and Defense Techniques in Adversarial Machine Learning (2020)
- Adversarial Machine Learning (2022)
- Adversarial Machine Learning: Bayesian Perspectives (2020)
- Analyzing the Impact of Adversarial Examples on Explainable Machine Learning (2023)