On Monte Carlo Tree Search and Reinforcement Learning
Top 10% of 2017 papers
Abstract
Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not yet been thoroughly studied. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, of which traditional MCTS is only one variant. We confirm that planning methods inspired by RL, used in conjunction with online search, demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
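To make the abstract's central claim concrete, the sketch below shows where RL semantics can slot into a tree-search node: the standard Monte Carlo backup (an incremental average of simulation returns, as in classic MCTS/UCT) sits next to a bootstrapped TD-style update with a constant step size. This is an illustrative sketch only; the class and function names are hypothetical and do not come from the paper, which develops a full family of such variants.

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=math.sqrt(2)):
    # UCB1 selection rule used by UCT: exploit the current value
    # estimate, but boost rarely visited children.
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    return child_value + c * math.sqrt(math.log(parent_visits) / child_visits)

class Node:
    """Minimal tree-search node holding a visit count and value estimate."""

    def __init__(self):
        self.visits = 0
        self.value = 0.0

    def backup_mc(self, ret):
        # Classic MCTS backup: running average of simulation returns.
        self.visits += 1
        self.value += (ret - self.value) / self.visits

    def backup_td(self, target, alpha=0.1):
        # RL-style alternative (hypothetical name): a TD update with a
        # constant step size, bootstrapping toward a supplied target
        # instead of averaging full returns.
        self.visits += 1
        self.value += alpha * (target - self.value)
```

Swapping `backup_mc` for `backup_td` (or other RL update rules) while keeping the same selection phase is one way such a family of MCTS variants can arise.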
Related Papers
- Safe Reinforcement Learning for Autonomous Vehicle Using Monte Carlo Tree Search (2021), 93 citations
- Monte-Carlo tree search with tree shape control (2017), 7 citations
- Monte Carlo Tree Search With Reversibility Compression (2021), 3 citations
- RevCuT Tree Search Method in Complex Single-player Game with Continuous Search Space (2019)
- Proof Number Based Monte-Carlo Tree Search (2023)