Image Super-Resolution with Non-Local Sparse Attention
Top 1% of 2021 papers (by citations)
Abstract
Both the Non-Local (NL) operation and sparse representation are crucial for Single Image Super-Resolution (SISR). In this paper, we investigate their combination and propose a novel Non-Local Sparse Attention (NLSA) with a dynamic sparse attention pattern. NLSA is designed to retain the long-range modeling capability of the NL operation while enjoying the robustness and high efficiency of sparse representation. Specifically, NLSA rectifies non-local attention with spherical locality-sensitive hashing (LSH), which partitions the input space into hash buckets of related features. NLSA assigns each query signal to a bucket and computes attention only within that bucket. The resulting sparse attention prevents the model from attending to locations that are noisy and less informative, while reducing the computational cost from quadratic to asymptotically linear in the spatial size. Extensive experiments validate the effectiveness and efficiency of NLSA. With a few non-local sparse attention modules, our architecture, called the Non-Local Sparse Network (NLSN), reaches state-of-the-art performance for SISR both quantitatively and qualitatively.
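To make the bucketed-attention idea concrete, here is a minimal NumPy sketch (not the paper's implementation): each feature is hashed by spherical LSH via random projections, and softmax attention is then computed only among features sharing a bucket. Function names such as `sparse_attention` and the bucket count are illustrative assumptions.

```python
import numpy as np

def spherical_lsh_buckets(x, n_buckets, rng):
    # Spherical LSH: project features onto n_buckets/2 random directions
    # and take the argmax over [proj, -proj] as the bucket id.
    d = x.shape[-1]
    r = rng.standard_normal((d, n_buckets // 2))
    proj = x @ r
    return np.argmax(np.concatenate([proj, -proj], axis=-1), axis=-1)

def sparse_attention(x, n_buckets=4, seed=0):
    # Self-attention restricted to features that share a hash bucket;
    # with balanced buckets the cost is roughly linear in the number
    # of positions instead of quadratic.
    rng = np.random.default_rng(seed)
    buckets = spherical_lsh_buckets(x, n_buckets, rng)
    out = np.zeros_like(x)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]
        q = k = v = x[idx]                       # attend only within the bucket
        scores = q @ k.T / np.sqrt(x.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)       # softmax over bucket members
        out[idx] = w @ v
    return out
```

Because every output row is a convex combination of features from the same bucket, queries never attend to unrelated (and potentially noisy) locations outside their bucket.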
Related Papers
- Automatic Selection of Sparse Matrix Representation on GPUs (2015), 126 citations
- Characterizing dataset dependence for sparse matrix-vector multiplication on GPUs (2015), 11 citations
- Locality-Aware Software Throttling for Sparse Matrix Operation on GPUs (2018)
- Sparse Matrix Reconstruction Based on Sequential Sparse Recovery for Multiple Measurement Vectors (2021), 2 citations
- A sparse-sparse iteration for computing a sparse incomplete (2009)