
Information decomposition and quality guided infrared and visible image fusion

Cited by: 2
Abstract

Objective: Infrared and visible image fusion is essential to computer vision and image processing. Its goal is to combine the complementary information of infrared and visible images into a single image that retains the detailed scene information of the sources and serves human-perception-oriented visual tasks such as video surveillance, target recognition, and scene understanding. Existing fusion methods fall roughly into two categories: traditional methods and deep-learning-based methods. Traditional methods rely on manually designed transformations to characterize and decompose the source images, and on manually designed strategies to fuse the decomposed sub-parts; these decomposition schemes have become increasingly complex, which lowers fusion efficiency. Among the deep-learning-based methods, some define, through human observation, which characteristics of the source images should be preserved and expect the fused image to retain them as much as possible, but one or a few hand-picked characteristics can hardly capture all of the vital information. Other methods pursue a high structural similarity between the fused image and the source images, which reduces the saliency of thermal targets in the fusion result and is not conducive to their rapid location and capture by the human visual system. Our method is designed to address these two issues: we develop a deep-learning-based decomposition for infrared and visible images, together with a deep-learning-based, quality-guided fusion strategy for the decomposed parts.

Method: The proposed fusion method is built on unique-information decomposition and a quality-guided fusion strategy. First, we design an image decomposition and representation scheme based on convolutional neural networks (CNNs). For each source image, two encoders decompose the image into a common part and a unique part. Four encoders are trained with three loss functions (a reconstruction loss, a translation loss, and a loss that constrains the unique information) so that the decomposition has a clear physical meaning. A specific fusion strategy is then applied to each decomposed part: for the common parts, a traditional fusion strategy is selected to reduce computational complexity and improve fusion efficiency; for the unique parts, a weight encoder learns a quality-guided fusion strategy that further preserves the complementary information of the multi-source images.
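The abstract does not give network details, but the decomposition it describes (two encoders per source image, a shared generator, and three losses: a reconstruction loss, a translation loss, and a constraint on the unique information) can be sketched as below. This is a minimal PyTorch illustration under assumed module names, channel sizes, loss weights, and an assumed sparsity form for the unique-information constraint; it is not the authors' implementation.

```python
# Minimal sketch (not the published code) of the two-encoder decomposition described
# above; module names, channel sizes, loss weights, and the exact form of the
# unique-information constraint are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    """Maps a single-channel image to a feature map (a common or a unique part)."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, feat_ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Reconstructs an image from a common part plus a unique part."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.net = nn.Sequential(conv_block(2 * feat_ch, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, common, unique):
        # Sigmoid keeps the reconstruction in [0, 1], assuming normalized inputs.
        return torch.sigmoid(self.net(torch.cat([common, unique], dim=1)))

# Four encoders: common/unique for infrared (ir) and visible (vis), one shared generator.
enc_c_ir, enc_u_ir, enc_c_vis, enc_u_vis = Encoder(), Encoder(), Encoder(), Encoder()
gen = Generator()

def decomposition_losses(ir, vis):
    c_ir, u_ir = enc_c_ir(ir), enc_u_ir(ir)
    c_vis, u_vis = enc_c_vis(vis), enc_u_vis(vis)
    # Reconstruction loss: each image should be recovered from its own two parts.
    loss_rec = F.l1_loss(gen(c_ir, u_ir), ir) + F.l1_loss(gen(c_vis, u_vis), vis)
    # Translation loss: swapping the common parts should still reconstruct the images,
    # pushing the common encoders toward modality-shared content.
    loss_trans = F.l1_loss(gen(c_vis, u_ir), ir) + F.l1_loss(gen(c_ir, u_vis), vis)
    # Constraint on the unique parts; an L1 sparsity term is used here as a stand-in.
    loss_unique = u_ir.abs().mean() + u_vis.abs().mean()
    return loss_rec + loss_trans + 0.1 * loss_unique  # 0.1 is an assumed weight
```

In this sketch, the translation loss swaps the common parts between the two reconstructions, which is one way to push the common encoders toward modality-shared content while leaving the modality-specific content to the unique encoders.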
To improve fusion performance, metrics that evaluate the quality of fused images are used to guide this strategy, and the weight encoder generates the corresponding weights from the extracted unique information. The generator optimized during the unique-information decomposition then produces the final fused image from the fused common part and fused unique part.

Result: Our method is compared with six state-of-the-art infrared and visible image fusion methods on the public RoadScene dataset. In addition, the quality-guided fusion strategy is compared with four common fusion strategies (mean, max, addition, and l1-norm) on the same dataset. The qualitative comparisons show three advantages of our fusion results. First, thermal targets are highlighted, and the high contrast makes them easy to capture. Second, more scene information and clearer edges and textures are presented, and some regions and textures are enhanced. Third, even in extreme cases our results retain the most information: the effective information in one source image is preserved in the fused image without being degraded by the less informative regions of the other source image. We also quantitatively evaluate the proposed method against the comparative fusion methods and strategies on objective metrics including entropy, standard deviation, the sum of the correlations of differences, mutual information, and correlation coefficient, and it achieves the best or comparable performance. On these five metrics, our averages improve on the best results of the comparative methods by 0.508%, 7.347%, 14.849%, 9.927%, and 1.281%, respectively. Furthermore, when applied to fuse an RGB visible image with a single-channel infrared image, our method still produces improved fusion results.

Conclusion: We develop an infrared and visible image fusion method based on unique-information decomposition and a quality-guided fusion strategy. Compared with several state-of-the-art infrared and visible image fusion methods and with existing fusion strategies, our fused results contain richer scene information and stronger contrast, and their visual appearance better matches the characteristics of the human visual system. Both the qualitative and quantitative results demonstrate the effectiveness of the proposed method and strategy.
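To make the quality-guided strategy more concrete, the sketch below shows one way a weight encoder could map the two unique parts to normalized fusion weights, with image entropy given as an example of a no-reference quality measure that could guide the weights. The module structure, the softmax normalization, and the choice of entropy as the guiding metric are illustrative assumptions; the abstract does not specify which quality metric the authors use.

```python
# Minimal sketch (assumption, not the published code) of quality-guided fusion of the
# unique parts: a small weight encoder scores each unique feature map, the scores are
# normalized with a softmax, and a no-reference quality measure such as image entropy
# can serve as the training signal for the weight encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightEncoder(nn.Module):
    """Maps a unique feature map to one scalar score per image."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, u):
        return self.net(u)  # shape (B, 1)

def fuse_unique(u_ir, u_vis, weight_encoder):
    """Weighted sum of the two unique parts using learned, softmax-normalized weights."""
    scores = torch.cat([weight_encoder(u_ir), weight_encoder(u_vis)], dim=1)  # (B, 2)
    w = F.softmax(scores, dim=1)
    w_ir, w_vis = w[:, 0].view(-1, 1, 1, 1), w[:, 1].view(-1, 1, 1, 1)
    return w_ir * u_ir + w_vis * u_vis

def image_entropy(img, bins=256):
    """Shannon entropy of a grayscale image in [0, 1]; higher means more information."""
    hist = torch.histc(img, bins=bins, min=0.0, max=1.0)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * p.log2()).sum()
```

Normalizing the two scores with a softmax keeps the weights non-negative and summing to one, so the fused unique part stays on the same scale as its inputs; the common parts could then be merged with a simple traditional rule such as averaging, consistent with the efficiency argument in the Method section.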
Authors: Xu Han; Mei Xiaoguang; Fan Fan; Ma Yong; Ma Jiayi (Electronic Information School, Wuhan University, Wuhan 430072, China)
Source: Journal of Image and Graphics (《中国图象图形学报》, CSCD, Peking University core journal), 2022, Issue 11, pp. 3316-3330 (15 pages)
Funding: National Natural Science Foundation of China (61773295); Natural Science Foundation of Hubei Province (2019CFA037).
Keywords: image fusion; unique information decomposition; quality guidance; infrared and visible images; deep learning