Abstract
A fusion method for infrared and visible images is presented. The method uses a deep-learning generative adversarial network (GAN) to fuse the two modalities; the fusion is achieved mainly through the adversarial interplay between the generator and the discriminator in the network architecture. The generator adopts a multi-scale link architecture to effectively extract and exploit both the deep and the shallow features of the source images. In addition, a local discriminator, distinct from the traditional global discriminator, is used to ensure that the fused image fully incorporates the information and feature distributions of the source images. Experimental results demonstrate that images fused by the proposed method effectively preserve the distinctive characteristics of both source images.
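The architecture outlined above (a generator that fuses the stacked infrared and visible inputs through multi-scale links, trained against a local rather than global discriminator) can be illustrated with a minimal sketch. The layer widths, the specific skip connection, and the PatchGAN-style realization of the "local" discriminator below are assumptions made for illustration; the paper's actual network details are not given in this abstract.

```python
# Minimal sketch of a GAN-based IR/visible fusion setup (PyTorch).
# All sizes and layer choices are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class MultiScaleGenerator(nn.Module):
    """Encoder-decoder with a skip link so shallow and deep features both reach the output."""
    def __init__(self, in_ch=2, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        # Multi-scale link: shallow encoder features are concatenated with upsampled deep features.
        self.out = nn.Sequential(nn.Conv2d(base * 2, 1, 3, padding=1), nn.Tanh())

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)               # stack the two modalities channel-wise
        f1 = self.enc1(x)                             # shallow features, full resolution
        f2 = self.enc2(f1)                            # deeper, downsampled features
        up = self.dec1(f2)                            # deep features brought back to full resolution
        return self.out(torch.cat([up, f1], dim=1))   # fuse deep + shallow via the link

class LocalDiscriminator(nn.Module):
    """Patch-level discriminator: outputs a grid of real/fake scores instead of one global scalar."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, padding=1),     # one score per local patch
        )

    def forward(self, img):
        return self.net(img)

if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)
    vis = torch.rand(1, 1, 128, 128)
    G, D = MultiScaleGenerator(), LocalDiscriminator()
    fused = G(ir, vis)
    # During adversarial training, G is pushed to make D's patch scores look "real"
    # while D learns to tell fused patches from source-image patches.
    print(fused.shape, D(fused).shape)                # (1,1,128,128) and a smaller patch-score map
```

The patch-score output is what makes the discriminator "local": each spatial entry judges only its receptive field, so the generator is penalized wherever a local region fails to match the source distributions, rather than on the image as a whole.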
Authors
LIU Zhaofeng, JIANG Jiarui, FU Yinghua (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200082, China)
Source
Guidance & Fuze (《制导与引信》), 2023, No. 4, pp. 22-28 (7 pages)
Funding
Shanghai Aerospace Science and Technology Innovation Fund (SAST2021-005).