Video frame interpolation via down–up scale generative adversarial networks
Abstract
Frame interpolation has many uses in video processing, including frame-rate up-conversion and video compression. Deep learning-based methods have been proposed for frame interpolation, but they typically require a long runtime to achieve good visual quality. In this paper, we introduce an efficient frame interpolation method based on a modified generative adversarial network. The proposed framework consists of a generator with a pair of down–up scale modules: the down-scaled-input module attempts to capture the overall structure of the scene, while the original-scale-input module aims to restore finer textures. Skip connections and an input processing block are further incorporated into the minimal two-scale generator design to expedite processing without losing image features. The difference between the synthesized frame and the ground truth is measured by a combined loss function comprising one adversarial loss and three reconstruction losses. Compared with state-of-the-art motion-compensation and deep learning-based frame interpolation approaches, the proposed framework achieves the most satisfactory trade-off between synthesis quality and runtime.
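The abstract's two ideas — a coarse branch fed a down-scaled input fused with a full-resolution texture branch, and a weighted combination of one adversarial loss with three reconstruction losses — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the branch functions are identity placeholders standing in for learned sub-networks, the fusion is a simple average rather than the paper's skip-connection design, and the loss weights are invented for illustration.

```python
import numpy as np

def downscale(x, f=2):
    """2x2 average pooling, a stand-in for the down-scale input step."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upscale(x, f=2):
    """Nearest-neighbour upsampling back to the original resolution."""
    return x.repeat(f, axis=0).repeat(f, axis=1)

def coarse_branch(x):
    return x  # placeholder for the down-scaled-input sub-network (scene structure)

def fine_branch(x):
    return x  # placeholder for the original-scale sub-network (finer textures)

def two_scale_generator(x):
    """Toy forward pass: coarse structure plus full-resolution detail."""
    coarse = upscale(coarse_branch(downscale(x)))
    fine = fine_branch(x)
    return 0.5 * (fine + coarse)  # simplistic fusion; the paper uses skip connections

def combined_loss(adv_loss, rec_losses, adv_weight=0.01, rec_weights=(1.0, 1.0, 1.0)):
    """Weighted sum of one adversarial loss and three reconstruction losses.

    The weights here are illustrative; the paper's three reconstruction
    terms and their balancing coefficients are not specified in the abstract.
    """
    assert len(rec_losses) == len(rec_weights) == 3
    return adv_weight * adv_loss + sum(w * l for w, l in zip(rec_weights, rec_losses))
```

With identity branches the generator preserves resolution and, on constant input, the value itself, which makes the scaffolding easy to sanity-check before real sub-networks are substituted.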
Related Papers
- Motion-compensated frame interpolation scheme for H.263 codec (2003), 20 citations
- Adaptive motion estimation technique for motion compensated interframe interpolation (1999)
- An Entire Frame Loss Recovery Algorithm for H.264/AVC over Wireless Networks (2009)
- The frame processing scheme of video based on motion estimation (2003)
- Adaptive motion estimation technique for motion compensated interframe interpolation (2003)