H-ViT: A Hierarchical Vision Transformer for Deformable Image Registration
Abstract
This paper introduces a novel top-down representation approach for deformable image registration, which estimates the deformation field by capturing various short- and long-range flow features at different scale levels. Our Hierarchical Vision Transformer (H-ViT) proposes a dual self-attention and cross-attention mechanism that uses high-level features in the deformation field to represent low-level ones, enabling information to flow across all voxel patch embeddings irrespective of their spatial proximity. Since high-level features contain abstract flow patterns, such patterns are expected to contribute effectively to representing the deformation field at lower scales. While the self-attention module exploits within-scale short-range patterns for representation, the cross-attention modules dynamically search for key tokens across different scales to further interact with the local query voxel patches. Our method shows superior accuracy and visual quality over state-of-the-art registration methods on five publicly available datasets, highlighting a substantial enhancement in the performance of medical image registration. The project link is available at https://mogvision.github.io/hvit.
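The core idea of the cross-attention module described above, fine-scale voxel-patch tokens querying a pool of coarse-scale tokens regardless of spatial proximity, can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not the paper's implementation: the function name, token counts, and embedding dimension are assumptions for demonstration, and details such as multi-head projection, learned Q/K/V weights, and positional encoding are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_scale_attention(fine_tokens, coarse_tokens):
    """Toy cross-attention: fine-scale tokens are queries; coarse-scale
    (high-level) tokens supply keys and values, so every fine token can
    aggregate abstract flow context from anywhere in the coarse grid."""
    d_k = fine_tokens.shape[-1]
    Q, K, V = fine_tokens, coarse_tokens, coarse_tokens
    scores = Q @ K.T / np.sqrt(d_k)      # (N_fine, N_coarse) affinities
    attn = softmax(scores, axis=-1)      # each query attends over all coarse tokens
    return attn @ V                      # per-fine-token aggregated coarse context

# Toy example: 64 fine-scale tokens attend to 8 coarse-scale tokens, dim 16.
rng = np.random.default_rng(0)
fine = rng.standard_normal((64, 16))
coarse = rng.standard_normal((8, 16))
out = cross_scale_attention(fine, coarse)
print(out.shape)  # (64, 16): one coarse-context vector per fine token
```

In the full H-ViT design, such cross-scale interaction is paired with within-scale self-attention, so short-range local patterns and long-range multi-scale context jointly shape the deformation field.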