Abstract
At present, deep learning-based fusion methods rely on convolutional kernels to extract local features, but the limitations of single-scale networks, convolutional kernel size, and network depth prevent them from capturing the multi-scale and global characteristics of images. Therefore, we propose an infrared and visible image fusion method based on an attention-guided generative adversarial network. The method uses a generator, consisting of an encoder and a decoder, together with two discriminators. A multi-scale module and a channel self-attention mechanism are designed in the encoder; they effectively extract multi-scale features and establish long-range dependencies among feature channels, thereby enhancing the global characteristics of the multi-scale features. In addition, two discriminators are constructed to establish an adversarial relationship between the fused image and the source images, preserving more detailed information. Experimental results demonstrate that the proposed method outperforms other typical methods in both subjective and objective evaluations.
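The channel self-attention mechanism described in the abstract re-weights feature channels according to their pairwise affinities, so that each channel aggregates information from all others. The sketch below illustrates one common formulation of such a mechanism (a Gram-matrix affinity with a softmax and a residual connection, as in DANet-style channel attention); the function name and the exact design are illustrative assumptions, not the paper's published implementation.

```python
import numpy as np

def channel_self_attention(features):
    """Illustrative channel self-attention over a feature map of shape (C, H, W).

    Computes channel-to-channel affinities from the flattened spatial
    features, normalizes them with a softmax, and uses them to re-weight
    the input channels. A residual connection preserves the original
    features. This is a sketch of the general technique only; the
    paper's exact formulation may differ.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)              # (C, N): one row per channel
    energy = flat @ flat.T                         # (C, C) channel affinity matrix
    energy = energy - energy.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(energy)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax: each row sums to 1
    out = (attn @ flat).reshape(c, h, w)           # channels re-weighted by affinity
    return features + out                          # residual connection
```

In a full encoder, an operation like this would typically follow the multi-scale feature-extraction module, so that features gathered at several receptive-field sizes are additionally related across channels, giving the long-range dependencies the abstract describes.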
Authors
WU Yuanyuan; WANG Zhishe; WANG Junyao; SHAO Wenyu; CHEN Yanlin (School of Applied Science, Taiyuan University of Science and Technology, Taiyuan 030024, China)
Source
Infrared Technology (《红外技术》)
Indexed in: CSCD; Peking University Core Journal
2022, No. 2, pp. 170-178 (9 pages)
Funding
Natural Science Foundation of Shanxi Province (201901D111260)
Open Research Fund of the Shanxi Key Laboratory of Information Detection and Processing (ISTP2020-4)
Doctoral Startup Foundation of Taiyuan University of Science and Technology (20162004)
Keywords
image fusion
channel self-attention mechanism
deep learning
generative adversarial networks
infrared image
visible image