Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Top 1% of 2018 papers by citations
Abstract
Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform a multitude of tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to controlling complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. Consequently, the success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, which enables learning with sample sizes equivalent to a few hours of robot experience. The use of demonstrations results in policies that exhibit very natural movements and, surprisingly, are also substantially more robust. We demonstrate successful policies for object relocation, in-hand manipulation, tool use, and door opening, which are shown in the supplementary video.
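The general recipe the abstract describes, combining a few demonstrations with model-free RL, can be illustrated on a toy problem: pretrain the policy with behavior cloning on demonstration pairs, then fine-tune with a policy gradient to which a decaying imitation term is added. The sketch below is an assumption-laden illustration, not the paper's actual algorithm or hand environment: the linear Gaussian policy, the one-step quadratic-reward task, and the decay weights `lam0`, `lam1` are all hypothetical choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
S_DIM, A_DIM = 3, 2

# Hypothetical "expert": the optimal action is a linear function of the state.
W_true = rng.normal(size=(A_DIM, S_DIM))

# A small set of demonstrations (state, expert action), as in the abstract's
# "small number of human demonstrations".
S_demo = rng.normal(size=(25, S_DIM))
A_demo = S_demo @ W_true.T

# 1) Behavior cloning pretraining: least-squares fit of the policy mean to demos.
X, *_ = np.linalg.lstsq(S_demo, A_demo, rcond=None)
W = X.T  # policy mean parameters, shape (A_DIM, S_DIM)

# 2) RL fine-tuning: REINFORCE for a Gaussian policy, plus a decaying
#    behavior-cloning gradient so the demos keep guiding early updates.
sigma, lr = 0.1, 0.01
lam0, lam1 = 0.1, 0.95  # assumed decay schedule for the imitation term
for k in range(100):
    s = rng.normal(size=(64, S_DIM))            # batch of one-step episodes
    mean = s @ W.T
    a = mean + sigma * rng.normal(size=(64, A_DIM))
    r = -np.sum((a - s @ W_true.T) ** 2, axis=1)  # toy reward: match expert
    adv = r - r.mean()                            # baseline-subtracted advantage
    # Gaussian-policy score function: grad_W log pi = (a - mean) s^T / sigma^2
    g_rl = ((adv[:, None] * (a - mean)).T @ s) / (64 * sigma**2)
    g_bc = ((A_demo - S_demo @ W.T).T @ S_demo) / len(S_demo)
    W += lr * (g_rl + lam0 * lam1**k * g_bc)      # demo term fades over iterations
```

Because behavior cloning already places the policy near the expert, the policy-gradient phase only has to refine it locally, which is the sample-efficiency effect the abstract reports; without the pretraining and imitation term, the same REINFORCE updates would start from a random policy and need far more samples.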
Related Papers
- Human-level control through deep reinforcement learning (2015), 29,160 citations
- Diagnosing Non-Intermittent Anomalies in Reinforcement Learning Policy Executions (Short Paper) (2017), 11,253 citations
- MuJoCo: A physics engine for model-based control (2012), 4,331 citations
- Continuous control with deep reinforcement learning (2015), 5,359 citations