Taxonomy of Machine Learning Safety: A Survey and Primer
Abstract
The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles must address a variety of ML vulnerabilities and limitations in areas such as interpretability, verifiability, and performance. Research explores different approaches to improve ML dependability by proposing new models and training techniques to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks. However, there is a missing connection between ongoing ML research and well-established safety principles. In this article, we present a structured and comprehensive review of ML techniques to improve the dependability of ML algorithms in uncontrolled open-world settings. From this review, we propose the Taxonomy of ML Safety, which maps state-of-the-art ML techniques to key engineering safety strategies. Our taxonomy presents a safety-oriented categorization of ML techniques to provide guidance for improving the dependability of ML design and development. The proposed taxonomy can serve as a safety checklist to aid designers in improving the coverage and diversity of safety strategies employed in any given ML system.