Journal Article

Infrared and Visible Image Fusion Based on Attention Mechanism and Illumination-Aware Network
Cited by: 1
Abstract: Some image fusion methods do not fully account for the illumination conditions of the imaged scene, which leads to insufficient brightness of infrared targets and overall low brightness in the fused image, degrading the clarity of texture details. To address these issues, an infrared and visible image fusion algorithm combining an attention mechanism with an illumination-aware network is proposed. First, before the fusion network is trained, an illumination-aware network estimates the probability that the current scene is daytime or nighttime; this probability is incorporated into the loss function of the fusion network to guide its training. Then, in the feature extraction stage of the network, a spatial attention mechanism and depthwise separable convolutions extract features from the source images; the resulting spatially salient information is fed into a convolutional neural network (CNN) to extract deep features. Finally, the deep features are concatenated for image reconstruction, yielding the final fused image. Experimental results show that, compared with the reference methods, the proposed method improves mutual information (MI), visual information fidelity (VIF), average gradient (AG), fusion quality (Qabf), and spatial frequency (SF) by an average of 39.33%, 11.29%, 26.27%, 47.11%, and 39.01%, respectively; the fused images effectively preserve the brightness of infrared targets and contain rich texture detail.
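The abstract does not give the exact form of the illumination-guided loss. The sketch below is a minimal illustration, assuming an L1 intensity loss whose visible and infrared terms are weighted by the day/night probabilities produced by the illumination-aware network; the function name and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def illumination_aware_loss(fused, ir, vis, p_day):
    """Hypothetical sketch of an illumination-aware intensity loss.

    p_day is the probability, as estimated by an illumination-aware
    network, that the scene is daytime; p_night = 1 - p_day.
    Daytime scenes weight fidelity to the visible image more heavily,
    nighttime scenes weight fidelity to the infrared image.
    """
    p_night = 1.0 - p_day
    loss_vis = np.mean(np.abs(fused - vis))  # L1 distance to the visible image
    loss_ir = np.mean(np.abs(fused - ir))    # L1 distance to the infrared image
    return p_day * loss_vis + p_night * loss_ir

# Toy example: a dark visible scene with a bright infrared target.
ir = np.full((4, 4), 0.8)     # bright infrared image
vis = np.full((4, 4), 0.2)    # dark visible image
fused = np.full((4, 4), 0.8)  # a fusion that follows the infrared image

# At night (p_day = 0.1) this fusion incurs a small loss;
# by day (p_day = 0.9) the same fusion is penalized more.
night_loss = illumination_aware_loss(fused, ir, vis, p_day=0.1)
day_loss = illumination_aware_loss(fused, ir, vis, p_day=0.9)
```

The weighting makes the trained network prefer infrared intensity in nighttime scenes and visible intensity in daytime scenes, which is consistent with the abstract's goal of preserving infrared target brightness under low illumination.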
Authors: YANG Yanchun; YAN Yan; WANG Ke (School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China)
Source: Journal of Southwest Jiaotong University, 2024, No. 5, pp. 1204-1214 (11 pages). Indexed in: EI, CSCD, Peking University Core.
Funding: Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R36); National Natural Science Foundation of China (62067006); Science and Technology Program of Gansu Province (18JR3RA104); Industrial Support Program for Higher Education Institutions of Gansu Province (2020C-19); Young Doctoral Fund of the Gansu Provincial Department of Education (2022QB-067); Natural Science Foundation of Gansu Province (23JRRA847, 21JR7RA300).
Keywords: image fusion; attention mechanism; convolutional neural network; infrared feature extraction; deep learning