HRSRD: A High-Resolution SAR Road Dataset and MSDA-LinkNet for Road Extraction with Multi-Scale Deformable Attention
Abstract
High-resolution synthetic aperture radar (SAR) imagery is essential for large-scale road extraction, yet it presents significant challenges due to inherent speckle noise, complex scattering effects, and the anisotropic nature of road structures. Moreover, the scarcity of large-scale, high-quality annotated SAR road datasets hinders the development of deep learning-based methods. To address these issues, this paper first constructs a high-resolution SAR road dataset (HRSRD) covering representative regions in the western United States. Road annotations are automatically generated from OpenStreetMap (OSM) vectors and then refined via a structure-guided alignment strategy. Building upon this dataset, we propose a novel framework termed Multi-Scale Deformable-Attention LinkNet (MSDA-LinkNet), specifically designed to capture thin, direction-sensitive, and geometrically complex road features. The architecture integrates a parallel direction-aware multi-scale convolution module to explicitly model road anisotropy and scale variations, complemented by a deformable attention mechanism that adaptively aggregates contextual information along curved and irregular trajectories. Extensive experiments on the proposed dataset demonstrate that MSDA-LinkNet consistently outperforms representative approaches across key metrics, including Precision, F1-score, and Intersection over Union (IoU). The released dataset and benchmark provide a solid foundation for future research in high-resolution SAR-based road mapping.