Explainable Reinforcement Learning: A Survey and Comparative Review
Top 1% of 2023 papers
Abstract
Explainable reinforcement learning (XRL) is an emerging subfield of explainable machine learning that has attracted considerable attention in recent years. The goal of XRL is to elucidate the decision-making process of reinforcement learning (RL) agents in sequential decision-making settings. Equipped with this information, practitioners can better answer important questions about RL agents (especially those deployed in the real world), such as what the agents will do and why. Despite increased interest, the literature lacks an organization of the plethora of papers—especially one that centers the sequential decision-making nature of the problem. In this survey, we propose a novel taxonomy for organizing the XRL literature that prioritizes the RL setting. We propose three high-level categories: feature importance, learning process and Markov decision process, and policy-level. We overview techniques according to this taxonomy, highlighting open challenges. We conclude by using these gaps to motivate and outline a roadmap for future work.
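To make the first category concrete, the following is a minimal, hypothetical sketch of a perturbation-based feature-importance explanation for an RL policy: each state feature is scored by how often perturbing it flips the agent's chosen action. The toy policy, its weights, and the perturbation size `eps` are illustrative assumptions, not a method from the survey.

```python
def policy(state):
    # Hypothetical toy policy: act on the sign of a weighted sum of features.
    # Feature 0 dominates; feature 2 is ignored entirely (weight 0.0).
    weights = [2.0, 0.5, 0.0]
    return int(sum(w * x for w, x in zip(weights, state)) > 0)

def perturbation_saliency(policy, state, eps=0.3):
    """Score each feature by the fraction of +/- eps perturbations
    that change the policy's action in the given state."""
    base_action = policy(state)
    scores = []
    for i in range(len(state)):
        flips = 0
        for delta in (-eps, eps):
            perturbed = list(state)
            perturbed[i] += delta
            flips += int(policy(perturbed) != base_action)
        scores.append(flips / 2)
    return scores

state = [0.2, 0.1, 0.3]
print(perturbation_saliency(policy, state))  # feature 0 scores highest
```

Such local, perturbation-style attributions answer "which features mattered for this action?" but, by themselves, say nothing about the agent's long-term behavior, which is what the other two categories in the taxonomy address.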
Related Papers
- Policy Gradient using Weak Derivatives for Reinforcement Learning (2019)
- Customized Dynamic Pricing for Air Cargo Network via Reinforcement Learning (2020)
- RVI reinforcement learning for semi-Markov decision processes with average reward (2010)
- Research on Agent Reinforcement Learning Policy Based on DFS (2010)
- Reinforcement Learning: Tutorial and Survey (2024)