Explainable Anomaly Detection Framework for Maritime Main Engine Sensor Data
Abstract
In this study, we propose a data-driven approach to condition monitoring of marine main engines. Although several unsupervised anomaly detection methods exist in the maritime industry, they share a common limitation: they do not explain why the model classifies a specific data instance as an anomaly. This study combines explainable AI techniques with an anomaly detection algorithm to overcome this limitation. As the explainable AI method, we adopt SHapley Additive exPlanations (SHAP), which is theoretically grounded and compatible with any machine learning algorithm. SHAP measures the marginal contribution of each sensor variable to an anomaly, so one can easily identify which sensor is responsible for a specific anomaly. To illustrate the framework, we analyzed an actual sensor stream collected from a cargo vessel over 10 months. In this analysis, we performed hierarchical clustering on transformed SHAP values to interpret and group common anomaly patterns. We show that anomaly interpretation and segmentation based on SHAP values yields more useful interpretations than the same analysis without SHAP values.
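The core idea of the abstract, attributing an anomaly score to individual sensors via Shapley values, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses a toy squared z-distance as a stand-in for the unsupervised detector's score, a feature-mean baseline for imputing sensors outside a coalition, and exact enumeration over coalitions (feasible only for a handful of sensors; the paper's SHAP library uses efficient approximations). The three sensor names are hypothetical.

```python
import math
from itertools import combinations
import numpy as np

def anomaly_score(x, mu, sigma):
    """Toy stand-in for an unsupervised detector's anomaly score:
    squared z-score distance from the baseline operating point."""
    return float(np.sum(((x - mu) / sigma) ** 2))

def shapley_values(x, mu, sigma):
    """Exact Shapley values: the marginal contribution of each sensor to
    the anomaly score, averaged over all coalitions of the other sensors.
    Sensors outside a coalition are imputed with their baseline mean."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                on = mu.copy()                      # coalition S plus sensor i
                on[list(S) + [i]] = x[list(S) + [i]]
                off = mu.copy()                     # coalition S alone
                off[list(S)] = x[list(S)]
                phi[i] += w * (anomaly_score(on, mu, sigma)
                               - anomaly_score(off, mu, sigma))
    return phi

# Hypothetical 3-sensor reading (e.g. exhaust temp, scavenge pressure, RPM),
# already standardized; an anomaly is injected into the second sensor.
mu, sigma = np.zeros(3), np.ones(3)
reading = np.array([0.1, 3.0, -0.2])

phi = shapley_values(reading, mu, sigma)
print(phi)  # the second sensor dominates the attribution
```

The efficiency property of Shapley values guarantees that the attributions sum to `anomaly_score(reading) - anomaly_score(baseline)`, which is what makes per-sensor blame assignment well defined; the paper then clusters such attribution vectors hierarchically to group recurring anomaly patterns.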
Related Papers
- Towards Experienced Anomaly Detector Through Reinforcement Learning (2018), 56 citations
- Anomaly Detection with Partially Observed Anomaly Types (2021), 4 citations
- Human-machine interactive streaming anomaly detection by online self-adaptive forest (2022), 8 citations
- Tree-based Self-adaptive Anomaly Detection by Human-Machine Interaction (2021), 1 citation