Learning to Dress
Abstract
Creating an animation of a character putting on clothing is challenging due to the complex interactions between the character and the simulated garment. We take a model-free deep reinforcement learning (deep RL) approach to automatically discover robust dressing control policies represented by neural networks. While deep RL has demonstrated several successes in learning complex motor skills, the data-demanding nature of the learning algorithms is at odds with the computationally costly cloth simulation required by the dressing task. This paper is the first to demonstrate that, with an appropriately designed input state space and reward function, it is possible to incorporate cloth simulation into the deep RL framework to learn a robust dressing control policy. We introduce a salient representation of haptic information to guide the dressing process and utilize it in the reward function to provide learning signals during training. To learn a prolonged sequence of motion involving a diverse set of manipulation skills, such as grasping the edge of the shirt or pulling on a sleeve, we find it necessary to separate the dressing task into several subtasks and learn a control policy for each subtask. We introduce a policy sequencing algorithm that matches the distribution of output states from one task to the input distribution of the next task in the sequence. We have used this approach to produce character controllers for several dressing tasks: putting on a t-shirt, putting on a jacket, and robot-assisted dressing of a sleeve.
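The policy-sequencing idea in the abstract — using the terminal states produced by one subtask's policy as the initial-state distribution for training the next subtask — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function names, the diagonal-Gaussian state model, and the toy state dimensions are all assumptions.

```python
import numpy as np

def fit_gaussian(states):
    """Fit a diagonal Gaussian to terminal states collected from
    rollouts of a trained subtask policy (illustrative model choice)."""
    states = np.asarray(states, dtype=float)
    mean = states.mean(axis=0)
    std = states.std(axis=0) + 1e-8  # avoid degenerate zero variance
    return mean, std

def sample_initial_states(mean, std, n, rng):
    """Draw n initial states for training the *next* subtask from the
    distribution fitted to the previous subtask's output states."""
    return rng.normal(mean, std, size=(n, mean.shape[0]))

# Toy usage: pretend these are terminal states from a "grasp the
# shirt edge" subtask, in a made-up 3-D state space.
rng = np.random.default_rng(0)
terminal = rng.normal([0.5, 1.2, -0.3], 0.05, size=(200, 3))
mean, std = fit_gaussian(terminal)
starts = sample_initial_states(mean, std, 16, rng)
print(starts.shape)  # (16, 3)
```

In this sketch, matching distributions simply means initializing the next subtask's training episodes from states the previous policy can actually reach, so the learned sequence of policies composes into one prolonged motion.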