Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation
Abstract
Convolutional neural networks have enabled accurate image super-resolution in real time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2 dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
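The sub-pixel convolution at the heart of these networks produces a feature map with r² times the desired output channels and then rearranges it into a spatially upscaled image (a "depth-to-space" shuffle). The sketch below illustrates only that rearrangement step in NumPy; the function name and shapes are illustrative, not from the paper's code.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into a (C, H*r, W*r) image.

    Each block of r*r channels supplies the r x r sub-pixel grid of one
    output channel -- the final step of a sub-pixel convolution layer.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # Split channels into (c, r, r), then interleave the two r-axes
    # with the spatial axes to form the upscaled grid.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Toy example: 4 channels of a 2x2 map shuffle into one 4x4 map (r = 2).
x = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 4, 4)
```

Because the shuffle is a pure memory rearrangement, all convolutions run at the low input resolution, which is what makes real-time operation feasible.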