Demystifying Black Box Models with Neural Networks for Accuracy and Interpretability of Supervised Learning
Abstract
Intensive data modelling on large datasets, once limited to supercomputers and workstations, can now be performed on desktop computers with scripting languages such as R and Python. This access to high computational capability has popularized analytics, enabling practitioners to try out different mathematical algorithms and obtain highly precise results simply by calling pre-written libraries. In black box models such as Neural Networks and Support Vector Machines, however, this precision comes at the cost of interpretability. The need for interpretability is felt most keenly when building classification models, where understanding how a Neural Network arrives at its solution is as important as the precision of its predictions. The Path Break Down Approach proposed in this paper helps demystify how a Neural Network model solves a classification and prediction problem, demonstrated on the San Francisco crime dataset.
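The abstract does not spell out how the Path Break Down Approach works, but the general idea of path-level attribution in a feed-forward network can be sketched. For a single hidden layer with ReLU activations, the output decomposes exactly into per-path terms `x[i] * W1[i,j] * gate[j] * W2[j,k]` (plus bias terms), where `gate[j]` indicates whether hidden unit `j` is active. The weights and sizes below are invented for illustration; this is an assumption-based sketch, not the paper's method.

```python
import numpy as np

# Illustrative sketch only: all weights here are randomly generated for
# demonstration, and this decomposition is a generic one for ReLU
# networks, not necessarily the paper's Path Break Down Approach.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 3, 2
W1 = rng.normal(size=(n_in, n_hid))
b1 = rng.normal(size=n_hid)
W2 = rng.normal(size=(n_hid, n_out))
b2 = rng.normal(size=n_out)
x = rng.normal(size=n_in)

# Forward pass with a ReLU hidden layer.
pre = x @ W1 + b1
gate = (pre > 0).astype(float)   # which hidden units fire for this input
y = (gate * pre) @ W2 + b2       # network output

# Contribution of input i, through hidden unit j, to output k:
#   x[i] * W1[i, j] * gate[j] * W2[j, k]
# Summing every path plus the bias terms reconstructs the output exactly.
paths = np.einsum("i,ij,j,jk->ijk", x, W1, gate, W2)
bias_part = (gate * b1) @ W2 + b2
reconstructed = paths.sum(axis=(0, 1)) + bias_part

assert np.allclose(y, reconstructed)
```

Ranking the entries of `paths` by magnitude then shows which input-to-output routes dominate a given prediction, which is the kind of insight a path-based interpretation method aims to provide.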