A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement
Abstract
Neural networks are ubiquitous in many tasks, but trusting their predictions remains an open issue. Uncertainty quantification is required for many applications, and disentangled aleatoric and epistemic uncertainties are preferable. In this paper, we generalize methods for producing disentangled uncertainties so that they work with different uncertainty quantification methods, and we evaluate their ability to produce disentangled uncertainties. Our results show that there is an interaction between learning aleatoric and epistemic uncertainty, which is unexpected and violates assumptions on aleatoric uncertainty; that some methods, such as Flipout, produce zero epistemic uncertainty; that aleatoric uncertainty is unreliable in the out-of-distribution setting; and that Ensembles provide the best overall disentangling quality. We also explore the error introduced by the number-of-samples hyper-parameter in the sampling softmax function, recommending N > 100 samples. We expect that our formulation and results will help practitioners and researchers choose uncertainty methods, expand the use of disentangled uncertainties, and motivate additional research into this topic.
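For concreteness, a minimal sketch (not the paper's own code) of the standard entropy-based decomposition computed from N sampled softmax outputs is shown below; the simulated logit distribution, the sample count N, and the class count are placeholder assumptions for illustration only.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1, eps=1e-12):
    # Shannon entropy of a categorical distribution.
    return -(p * np.log(p + eps)).sum(axis=axis)

def disentangled_uncertainty(logit_samples):
    """
    logit_samples: array of shape (N, C) holding N stochastic forward passes
    (e.g. MC-Dropout, Flipout, or ensemble members) for a single input.
    Returns (aleatoric, epistemic) using the common entropy decomposition:
      total     = H[ mean_n softmax(z_n) ]
      aleatoric = mean_n H[ softmax(z_n) ]
      epistemic = total - aleatoric   (mutual information)
    """
    probs = softmax(logit_samples, axis=-1)   # (N, C) per-sample probabilities
    mean_probs = probs.mean(axis=0)           # (C,) predictive distribution
    total = entropy(mean_probs)
    aleatoric = entropy(probs, axis=-1).mean()
    epistemic = total - aleatoric
    return aleatoric, epistemic

# Example with N = 100 simulated forward passes over C = 3 classes.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[2.0, 0.5, -1.0], scale=0.3, size=(100, 3))
print(disentangled_uncertainty(samples))
```

Using a large N (the abstract recommends N > 100) reduces the Monte Carlo error in both the predictive distribution and the resulting aleatoric/epistemic split.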