Video Frame Interpolation Transformer
Abstract
Existing methods for video interpolation rely heavily on deep convolutional neural networks, and thus suffer from their intrinsic limitations, such as content-agnostic kernel weights and a restricted receptive field. To address these issues, we propose a Transformer-based video interpolation framework that allows content-aware aggregation weights and captures long-range dependencies with self-attention operations. To avoid the high computational cost of global self-attention, we introduce the concept of local attention into video interpolation and extend it to the spatial-temporal domain. Furthermore, we propose a space-time separation strategy that reduces memory usage and also improves performance. In addition, we develop a multi-scale frame synthesis scheme to fully realize the potential of Transformers. Extensive experiments demonstrate that the proposed model performs favorably against state-of-the-art methods, both quantitatively and qualitatively, on a variety of benchmark datasets. The code and models are released at https://github.com/zhshi0816/Video-Frame-Interpolation-Transformer.
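To illustrate the space-time separation idea described above, here is a minimal sketch (not the authors' released implementation) of separated local attention: tokens first attend within small spatial windows of each frame, then across frames at each spatial position, so no single attention operates over the full space-time volume. The window size, head count, and tensor layout are illustrative assumptions.

```python
# Minimal sketch of space-time separated local attention (illustrative only;
# see the official repository for the authors' actual implementation).
import torch
import torch.nn as nn


class SeparatedSpaceTimeAttention(nn.Module):
    def __init__(self, dim, num_heads=4, window=4):
        super().__init__()
        self.window = window
        # Separate attention modules for the spatial and temporal steps.
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, T, H, W, C); H and W assumed divisible by the window size.
        B, T, H, W, C = x.shape
        w = self.window

        # 1) Local spatial attention: each token attends within its own
        #    w x w window of a single frame.
        xs = x.view(B, T, H // w, w, W // w, w, C)
        xs = xs.permute(0, 1, 2, 4, 3, 5, 6).reshape(-1, w * w, C)
        xs, _ = self.spatial_attn(xs, xs, xs)
        xs = xs.reshape(B, T, H // w, W // w, w, w, C)
        xs = xs.permute(0, 1, 2, 4, 3, 5, 6).reshape(B, T, H, W, C)

        # 2) Temporal attention: tokens at the same spatial location attend
        #    across all T frames.
        xt = xs.permute(0, 2, 3, 1, 4).reshape(B * H * W, T, C)
        xt, _ = self.temporal_attn(xt, xt, xt)
        return xt.reshape(B, H, W, T, C).permute(0, 3, 1, 2, 4)


if __name__ == "__main__":
    x = torch.randn(1, 4, 16, 16, 32)  # 4 input frames, 16x16, 32 channels
    out = SeparatedSpaceTimeAttention(dim=32)(x)
    print(out.shape)  # torch.Size([1, 4, 16, 16, 32])
```

Compared with joint space-time windows, the separated design shrinks each attention's token set (w*w spatial tokens or T temporal tokens instead of T*w*w), which is the source of the memory savings the abstract mentions.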