Learning Linear Transformations for Fast Image and Video Style Transfer
Abstract
Given a random pair of images, a universal style transfer method extracts the feel from a reference style image to synthesize an output based on the look of a content image. Recent algorithms based on second-order statistics, however, are either computationally expensive or prone to generating artifacts due to the trade-off between image quality and runtime performance. In this work, we present an approach for universal style transfer that learns the transformation matrix in a data-driven fashion. Our algorithm is efficient yet flexible enough to transfer different levels of style with the same auto-encoder network. It also produces stable video style transfer results because it preserves the content affinity. In addition, we propose a linear propagation module that enables a feed-forward network for photo-realistic style transfer. We demonstrate the effectiveness of our approach on three tasks: artistic, photo-realistic, and video style transfer, with comparisons to state-of-the-art methods.
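The core operation the abstract describes is applying a learned transformation matrix to content features inside an auto-encoder, in the spirit of second-order-statistics matching. The sketch below is a minimal illustration of that idea, not the paper's implementation: the feature shapes, the helper `transfer`, and the random stand-in for the predicted matrix `T` are all assumptions for demonstration (in the method itself, `T` is predicted by a small network from the content and style features).

```python
import numpy as np

# Assumed shapes for illustration: encoder features with C channels
# flattened over N spatial positions.
C, N = 32, 64
rng = np.random.default_rng(0)
F_c = rng.standard_normal((C, N))  # content features
F_s = rng.standard_normal((C, N))  # style features

def transfer(F_c, F_s, T):
    """Apply a CxC transformation matrix T to the centered content
    features, then shift by the style feature mean (first-order
    statistics are matched exactly; T accounts for the second-order
    part)."""
    mu_c = F_c.mean(axis=1, keepdims=True)
    mu_s = F_s.mean(axis=1, keepdims=True)
    return T @ (F_c - mu_c) + mu_s

# Stand-in for the data-driven matrix; in the paper it is the output
# of a learned module conditioned on (F_c, F_s).
T = 0.1 * rng.standard_normal((C, C))
F_out = transfer(F_c, F_s, T)

# Because the content features are centered before the linear map,
# the transferred features carry the style mean exactly.
assert F_out.shape == (C, N)
assert np.allclose(F_out.mean(axis=1), F_s.mean(axis=1).ravel()[:C].reshape(-1))
```

Since the transformation is a single matrix multiply per feature map, the stylized features can be produced in one feed-forward pass, which is what makes the approach fast relative to iterative or SVD-based statistics matching.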