Interpretability in healthcare: A comparative study of local machine learning interpretability techniques
Abstract
Although complex machine learning models (e.g., random forests, neural networks) commonly outperform traditional, simple, interpretable models (e.g., linear regression, decision trees), clinicians in the healthcare domain find it hard to understand and trust these complex models because their predictions lack intuition and explanation. With the General Data Protection Regulation (GDPR), the plausibility and verifiability of predictions made by machine learning models have become essential. Hence, interpretability techniques for machine learning models are an active focus of research. In general, the main aim of these interpretability techniques is to shed light on the prediction process of machine learning models and to explain how their predictions are generated. A major problem in this context is that both the quality of interpretability techniques and trust in machine learning predictions are challenging to measure. In this article, we propose four fundamental quantitative measures for assessing the quality of interpretability techniques: similarity, bias detection, execution time, and trust. We present a comprehensive experimental evaluation of six recent and popular local model-agnostic interpretability techniques, namely LIME, SHAP, Anchors, LORE, ILIME, and MAPLE, on different types of real-world healthcare data. Building on previous work, our experimental evaluation covers several aspects of comparison, including identity, stability, separability, similarity, execution time, bias detection, and trust. The results of our experiments show that MAPLE achieves the highest performance on the identity metric across all data sets included in this study, while LIME achieves the lowest. LIME achieves the highest performance on the separability metric across all data sets. SHAP has the smallest average time to output an explanation across all data sets included in this study. For bias detection, SHAP and MAPLE best enable participants to detect model bias. For the trust metric, Anchors achieves the highest performance across all data sets included in this work.
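To make the identity metric concrete, the sketch below (not the authors' code) explains the same instance twice with LIME and checks whether the two explanations match; LIME's random perturbation sampling makes exact agreement unlikely, which is consistent with its low identity scores reported above. The breast cancer dataset, the random forest model, and the simple `np.allclose` comparison are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of the "identity" metric: identical inputs should
# yield identical explanations. Dataset and model are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

def explanation_vector(instance):
    """Return LIME feature weights for the positive class as a dense vector."""
    exp = explainer.explain_instance(
        instance, model.predict_proba, num_features=X.shape[1]
    )
    weights = np.zeros(X.shape[1])
    # as_map() maps each label to a list of (feature_index, weight) pairs
    for idx, w in exp.as_map()[1]:
        weights[idx] = w
    return weights

# Identity check: explain the same instance twice and compare the vectors.
instance = X[0]
v1, v2 = explanation_vector(instance), explanation_vector(instance)
print("Identical explanations for identical inputs:", np.allclose(v1, v2))
```

A full identity score would repeat this comparison over many instances and report the fraction of agreements; a deterministic explainer scores 1.0 by construction, while sampling-based explainers typically score lower unless their random seed is fixed.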