A Survey on Explainable Anomaly Detection
Top 1% of 2023 papers by citations.
Abstract
In the past two decades, most research on anomaly detection has focused on improving detection accuracy, largely ignoring the explainability of the corresponding methods and thus leaving the explanation of outcomes to practitioners. As anomaly detection algorithms are increasingly used in safety-critical domains, providing explanations for the high-stakes decisions made in those domains has become an ethical and regulatory requirement. This work therefore provides a comprehensive and structured survey of state-of-the-art explainable anomaly detection techniques. We propose a taxonomy based on the main aspects that characterise each explainable anomaly detection technique, aiming to help practitioners and researchers find the explainable anomaly detection method that best suits their needs.
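To make the idea of "explainable anomaly detection" concrete, here is a minimal illustrative sketch (not a method from the survey): a simple per-feature z-score detector that flags a point as anomalous and explains the decision by reporting which feature contributed most to the anomaly score. The feature names `f0`–`f2` and the threshold of 3.0 are assumptions for the example only.

```python
import numpy as np

def fit(train):
    """Estimate per-feature mean and std from (assumed normal) training data."""
    return train.mean(axis=0), train.std(axis=0)

def detect_and_explain(x, mean, std, threshold=3.0):
    """Return (is_anomaly, per-feature contributions) for one point.

    The anomaly score is the largest absolute per-feature z-score; the
    per-feature z-scores double as the explanation of the decision.
    """
    z = np.abs((x - mean) / std)              # per-feature deviations
    contributions = dict(zip(["f0", "f1", "f2"], z))
    return z.max() > threshold, contributions

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))  # synthetic "normal" data
mean, std = fit(train)

x = np.array([0.1, 8.0, -0.2])                # f1 is far from normal
is_anom, contrib = detect_and_explain(x, mean, std)
top_feature = max(contrib, key=contrib.get)
print(is_anom, top_feature)
```

The explanation here is intrinsically available because the score decomposes over features; many of the techniques surveyed instead attach post-hoc explanations to opaque detectors.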
Related Papers
- Explainable Anomaly Detection Framework for Maritime Main Engine Sensor Data (2021), 59 citations
- Towards Experienced Anomaly Detector Through Reinforcement Learning (2018), 56 citations
- Anomaly Detection with Partially Observed Anomaly Types (2021), 4 citations
- Human-machine interactive streaming anomaly detection by online self-adaptive forest (2022), 8 citations
- Tree-based Self-adaptive Anomaly Detection by Human-Machine Interaction (2021), 1 citation