A reinforcement learning with adaptive state space recruitment strategy for real autonomous mobile robots
Abstract
In recent robotics research, much attention has been focused on using reinforcement learning to design robot controllers. However, difficulties remain; one well-known issue is the state space explosion problem. As the state space of the learning system becomes continuous and high-dimensional, learning becomes time-consuming because the number of combinatorial states grows exponentially. To apply reinforcement learning to such complicated systems, not only "adaptability" but also "computational efficiency" must be taken into account. In this paper, we propose an adaptive state space recruitment strategy for reinforcement learning, which enables the system to divide the state space gradually according to task complexity and the progress of learning. Simulation results and a real robot implementation show the validity of the method.
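The idea of dividing the state space gradually can be illustrated with a minimal sketch: start from a coarse discretization and split a cell only when the learning signal observed inside it (here, a running average of the absolute TD error) stays high. The class name, the split rule, and the error threshold below are all assumptions for illustration, not the paper's actual recruitment strategy.

```python
from collections import defaultdict

class AdaptiveGrid:
    """Coarse-to-fine 1-D state discretization (hypothetical sketch).

    A cell is split in two when the running |TD error| observed inside
    it exceeds a threshold, so resolution is recruited only where the
    task demands it.
    """

    def __init__(self, low, high, split_threshold=0.5):
        self.edges = [low, high]       # cell boundaries, kept sorted
        self.err = defaultdict(float)  # running |TD error| per cell index
        self.split_threshold = split_threshold

    def cell(self, s):
        """Index of the cell containing state s."""
        for i in range(len(self.edges) - 1):
            if self.edges[i] <= s <= self.edges[i + 1]:
                return i
        raise ValueError("state outside grid")

    def record_error(self, s, td_error, alpha=0.1):
        """Update the cell's error estimate; split the cell if it stays high."""
        i = self.cell(s)
        self.err[i] += alpha * (abs(td_error) - self.err[i])
        if self.err[i] > self.split_threshold:
            mid = 0.5 * (self.edges[i] + self.edges[i + 1])
            self.edges.insert(i + 1, mid)  # recruit a finer cell
            self.err.clear()               # indices shifted; reset estimates
```

A learner would map each continuous observation through `cell()` to a discrete state for its Q-table and feed TD errors back through `record_error()`; cells in easy regions of the task stay coarse, keeping the table small, while frequently mispredicted regions are refined.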