Deep Bayesian Nonparametric Learning of Rules and Plans from Demonstrations with a Learned Automaton Prior
Abstract
We introduce a method for learning imitative policies from expert demonstrations that are interpretable and manipulable. We achieve interpretability by modeling the interactions between high-level actions as an automaton with connections to formal logic. We achieve manipulability by integrating this automaton into planning, so that changes to the automaton have predictable effects on the learned behavior. These qualities allow a human user to first understand what the model has learned, and then either correct the learned behavior or zero-shot generalize to new, similar tasks. We build upon previous work by no longer requiring additional supervised information, which is hard to collect in practice; we achieve this with a deep Bayesian nonparametric hierarchical model. We evaluate our model on several domains and also present results from a real-world implementation on a mobile robotic arm platform.