Infrared and Visible Image Fusion via Decoupling Network
Top 1% of 2022 papers
Abstract
In general, the goal of existing infrared and visible image fusion (IVIF) methods is to make the fused image contain both the high-contrast regions of the infrared image and the texture details of the visible image. However, this definition causes the fused image to lose information from the visible image in high-contrast areas. To address this problem, this paper proposes a decoupling network-based IVIF method (DNFusion), which uses decoupled maps to impose additional constraints on the network, forcing it to effectively retain the saliency information of the source images. The method satisfies the current definition of image fusion while effectively preserving the salient objects of the source images. Specifically, a feature interaction module facilitates information exchange within the encoder and improves the utilization of complementary information. In addition, a hybrid loss function constructed from weighted fidelity loss, gradient loss, and decoupling loss ensures that the generated fused image effectively preserves the texture details and luminance information of the source images. Qualitative and quantitative comparisons over extensive experiments demonstrate that our model generates fused images containing both the salient objects and the clear details of the source images, and that the proposed method outperforms other state-of-the-art methods.
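The abstract names three loss terms (weighted fidelity, gradient, and decoupling) without giving their formulas. A minimal sketch of how such a hybrid loss could be composed is shown below; the saliency-weighted mixing, the finite-difference gradient operator, and the term weights are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def gradient(img):
    # Simple finite-difference gradient magnitude (an assumed operator;
    # the paper may use e.g. a Sobel or Laplacian filter instead).
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def hybrid_loss(fused, ir, vis, sal_map, w=(1.0, 1.0, 1.0)):
    """Sketch of a hybrid IVIF loss. `sal_map` stands in for the paper's
    decoupled map (values in [0, 1]); `w` are hypothetical term weights."""
    # Weighted fidelity: pull the fused image toward a saliency-weighted
    # mix of the two sources.
    fidelity = np.mean(np.abs(fused - (sal_map * ir + (1 - sal_map) * vis)))
    # Gradient loss: match the stronger per-pixel gradient of the sources,
    # encouraging texture detail to survive fusion.
    grad = np.mean(np.abs(gradient(fused) -
                          np.maximum(gradient(ir), gradient(vis))))
    # Decoupling loss: keep fused intensity close to the infrared input
    # inside salient regions, so high-contrast objects are retained.
    decouple = np.mean(sal_map * np.abs(fused - ir))
    return w[0] * fidelity + w[1] * grad + w[2] * decouple
```

When the fused image equals both sources, every term vanishes, so the loss is zero; any deviation in salient regions or in gradients raises it.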