Efficient robotic grasping using simulation and domain adaptation.
Figshare, 2018
Abstract
Data collection for training robotic grasping controllers is expensive in both time and money. Methods that make use of simulated data are appealing because they reduce this expense dramatically, but they often fail to generalise to real-world environments. GraspGAN is an application of pixel-level domain adaptation that generates synthetic training data realistic enough to reduce the amount of real-world data required by a factor of 50.
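The core idea can be sketched as follows: a pixel-level generator maps cheap simulated images toward the real-image distribution, and the resulting adapted data is mixed with a much smaller real dataset (roughly 50x less, per the abstract). This is a minimal illustrative sketch, not the paper's implementation; the function names, image sizes, and the stand-in "generator" (a fixed affine shift in place of a trained GAN) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two data sources (names are assumptions).
def sample_simulated(n):
    """Cheap, plentiful simulated grasping images (toy 64x64 arrays)."""
    return rng.normal(0.0, 1.0, size=(n, 64, 64))

def sample_real(n):
    """Expensive, scarce real-robot images (slightly shifted statistics)."""
    return rng.normal(0.2, 1.1, size=(n, 64, 64))

def pixel_adapt(sim_batch):
    """Placeholder for a GraspGAN-style pixel-level generator: maps
    simulated pixels toward the real-image distribution. Here a fixed
    affine transform stands in for a trained adversarial network."""
    return sim_batch * 1.1 + 0.2

REAL_FRACTION = 1 / 50   # ~50x less real data, per the abstract
n_total = 5000
n_real = int(n_total * REAL_FRACTION)
n_sim = n_total - n_real

# Mix adapted synthetic images with the small real dataset, then shuffle.
train_x = np.concatenate([pixel_adapt(sample_simulated(n_sim)),
                          sample_real(n_real)])
train_x = train_x[rng.permutation(n_total)]
```

The grasping controller would then be trained on `train_x` as if it were one homogeneous dataset; the adversarial training of the actual generator is omitted here.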
Related Papers
- VR-Goggles for Robots: Real-to-Sim Domain Adaptation for Visual Control (2019), 101 citations
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms (2021), 25 citations
- Learn to grasp unknown objects in robotic manipulation (2021), 5 citations
- Domain centralization and cross-modal reinforcement learning for vision-based robotic manipulation (2018), 2 citations
- Towards accelerated robotic deployment by supervised learning of latent space observer and policy from simulated experiments with expert policies (2020)