Adversarial Machine Learning and Cybersecurity
2023
Abstract
Artificial intelligence systems are rapidly being deployed across all sectors of the economy, yet a substantial body of research has demonstrated that these systems can be vulnerable to a wide array of attacks. How do these problems differ from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations address them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.