Learning to Manipulate Articulated Objects in Unstructured Environments Using a Grounded Relational Representation
Top 10% of 2008 papers
Abstract
We introduce a learning-based approach to manipulation in unstructured environments. This approach permits autonomous acquisition of manipulation expertise from interactions with the environment. The resulting expertise enables a robot to perform effective manipulation based on partial state information. The manipulation expertise is represented in a relational state representation and learned using relational reinforcement learning. The relational representation renders learning tractable by collapsing a large number of states onto a single, relational state. The relational state representation is carefully grounded in the perceptual and interaction skills of the robot. This ensures that symbolically learned knowledge remains meaningful in the physical world. We experimentally validate the proposed learning approach on the task of manipulating an articulated object to obtain a model of its kinematic structure. Our experiments demonstrate that the manipulation expertise acquired by the robot leads to substantial performance improvements. These improvements are maintained when experience is applied to previously unseen objects.
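The abstract's central mechanism, collapsing many raw configurations onto a single relational state and then learning values over that small symbolic state space, can be sketched in a few lines. The sketch below is illustrative only: the predicate names, the toy two-joint environment, and the tabular Q-learning loop are assumptions chosen to make the idea concrete, not the paper's actual representation or learner.

```python
import random
from collections import defaultdict

def relational_state(raw_pose):
    """Abstract raw joint angles into symbolic predicates.
    Many distinct raw poses collapse onto one relational state."""
    preds = []
    for i, angle in enumerate(raw_pose):
        # Hypothetical predicates: a joint near zero is "Closed", else "Open".
        preds.append(("Closed", i) if abs(angle) < 0.1 else ("Open", i))
    return frozenset(preds)

def toy_env_step(raw, action):
    """Toy articulated object with two joints; action j pushes joint j open.
    The task is done once every joint has been moved away from zero."""
    raw = list(raw)
    raw[action] = min(raw[action] + 0.5, 1.0)
    done = all(a > 0.1 for a in raw)
    return raw, (1.0 if done else -0.1), done

def q_learning(env_step, actions, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning over relational (not raw) states."""
    Q = defaultdict(float)
    for _ in range(episodes):
        raw = [0.0, 0.0]
        s = relational_state(raw)
        for _ in range(20):
            # Epsilon-greedy action selection on the relational state.
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda x: Q[(s, x)]))
            raw, r, done = env_step(raw, a)
            s2 = relational_state(raw)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            s = s2
            if done:
                break
    return Q
```

Because the abstraction maps every pose with near-zero joints to the same symbolic state, the learner sees only a handful of states instead of a continuum, which is what renders the learning tractable in the sense the abstract describes.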