A survey of game theoretic approach for adversarial machine learning
Top 10% of 2018 papers
Abstract
The field of machine learning is progressing at a faster pace than ever before. Many organizations leverage machine learning tools to extract useful information from massive amounts of data. In particular, machine learning finds application in cybersecurity, a field that is beginning to enter the age of automation. However, machine learning applications in cybersecurity face a challenge that other domains rarely do: attacks from active adversaries. Problems such as intrusion detection, banking fraud detection, spam filtering, and malware detection must contend with adversarial attacks that modify data so that malicious instances evade detection by the learning systems. The adversarial learning problem naturally resembles a game between the learning system and the adversary. In such a game, both players attempt to play their best strategies against each other while maximizing their own payoffs. To solve the game, each player searches for an optimal strategy against the opponent based on a prediction of the opponent's strategy choice. The problem becomes even more complicated in settings where the learning system may have to deal with many adversaries of unknown types. Applying game-theoretic approaches, robust learning techniques have been developed that specifically address adversarial attacks, and the preliminary results are promising. In this review, we summarize these results.

This article is categorized under:
- Technologies > Machine Learning
- Fundamental Concepts of Data and Knowledge > Key Design Issues in Data Mining
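The game described in the abstract, where each player picks a best response to the predicted strategy of the opponent, can be sketched in a few lines. The following is a minimal illustrative example, not any specific model from the survey: a hypothetical two-strategy zero-sum game between a spam filter (defender) and a spammer (adversary), with a made-up payoff matrix, in which a pure-strategy equilibrium is found by checking mutual best responses.

```python
# Hypothetical 2x2 zero-sum game: defender (spam filter) vs. adversary
# (spammer). Entries are the defender's payoffs; the adversary receives
# the negation. The numbers are illustrative assumptions only.
# Rows: defender strategy (0 = strict filter, 1 = lenient filter)
# Cols: adversary strategy (0 = plain spam,  1 = obfuscated spam)
payoff = [[3, 2],
          [1, 0]]

def defender_best_response(col):
    # Defender maximizes its own payoff given the adversary's column.
    return max(range(2), key=lambda r: payoff[r][col])

def adversary_best_response(row):
    # Zero-sum: the adversary minimizes the defender's payoff.
    return min(range(2), key=lambda c: payoff[row][c])

def pure_nash():
    # A strategy pair is an equilibrium when each side's strategy is a
    # best response to the other's, so neither gains by deviating.
    return [(r, c) for r in range(2) for c in range(2)
            if defender_best_response(c) == r
            and adversary_best_response(r) == c]

print(pure_nash())  # → [(0, 1)]: strict filter vs. obfuscated spam
```

With these payoffs the strict filter dominates for the defender, so the adversary's best response is obfuscation, and (strict, obfuscated) is the game's saddle point. In the richer settings the survey covers (mixed strategies, Stackelberg leadership, multiple adversaries of unknown types), the equilibrium computation is correspondingly more involved.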
Related Papers
- Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives (2023), 12 citations
- On the Detection of Adversarial Attacks through Reliable AI (2022), 2 citations
- Advancement of Attack and Defense Techniques in Adversarial Machine Learning (2020)
- An Optimal Control View of Adversarial Machine Learning (2018), 9 citations
- Analyzing the Impact of Adversarial Examples on Explainable Machine Learning (2023)