Image-Based Tactile Deformation Simulation and Pose Estimation for Robot Skill Learning
Abstract
The TacTip is a cost-effective, 3D-printed optical tactile sensor commonly used in deep learning and reinforcement learning for robotic manipulation. However, its specialized structure, which combines soft materials of varying hardnesses, makes it challenging to simulate the distribution of the numerous markers printed on its pins. This paper aims to create an interpretable, AI-applicable simulation of the TacTip's deformation under varying pressures and interactions with different objects, addressing the black-box nature of learning and simulation in haptic manipulation. The research focuses on simulating the TacTip sensor's shape using a fully tunable, chain-based mathematical model, refined through comparisons with real-world measurements. We integrated the WRS system with our theoretical model to evaluate its effectiveness in object pose estimation. The results demonstrated that the prediction accuracy for all markers, across a variety of contact scenarios, exceeded 92%.
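The abstract names a "fully tunable, chain-based mathematical model" of marker deformation but does not give its formulation. As a purely illustrative sketch of what a chain-based marker model can look like, the following assumes the markers form a spring-coupled chain anchored at the sensor rim, relaxed against an indenter acting as a unilateral contact constraint; all function names, parameters, and values are hypothetical and are not taken from the paper.

```python
"""Hypothetical sketch of a chain-based marker deformation model.

This is NOT the paper's actual formulation; it assumes a simple
spring-coupled chain of markers relaxed against an indenter.
"""
import numpy as np


def simulate_chain(n_markers=20, length=30.0, indenter_x=15.0,
                   indenter_radius=5.0, indent_depth=2.0,
                   stiffness=0.5, iterations=2000):
    """Return (x, z) marker positions after pressing a flat-tipped indenter.

    All parameter names and default values are illustrative assumptions.
    """
    x = np.linspace(0.0, length, n_markers)  # marker x-positions (mm)
    z = np.zeros(n_markers)                  # vertical displacements (mm)

    for _ in range(iterations):
        # Spring coupling: each interior marker relaxes toward the
        # mean height of its two neighbours.
        z_new = z.copy()
        z_new[1:-1] += stiffness * (0.5 * (z[:-2] + z[2:]) - z[1:-1])

        # Unilateral contact: markers under the indenter cannot rise
        # above the pressed surface.
        in_contact = np.abs(x - indenter_x) <= indenter_radius
        z_new[in_contact] = np.minimum(z_new[in_contact], -indent_depth)

        # Boundary condition: the chain ends stay attached to the rim.
        z_new[0] = z_new[-1] = 0.0
        z = z_new

    return x, z


if __name__ == "__main__":
    xs, zs = simulate_chain()
    print(np.round(zs, 2))  # deformed marker heights under the indenter
```

In this sketch the stiffness and indentation parameters play the role of the "fully tunable" quantities that the paper refines against real-world measurements; the actual model may differ substantially.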