
Multi-Focus Image Fusion Based on Generative Adversarial Network (cited by: 3)
Abstract: Multi-focus image fusion combines a series of images of the same scene, each with a different in-focus region, into a single all-in-focus image. To overcome shortcomings in extracting blur features from multi-focus images, a generative adversarial network (GAN) model based on U-Net is proposed. First, the generator uses U-Net and SSE to extract features from the multi-focus images and produce the fused image. Second, the discriminator uses convolutional layers to distinguish ground-truth fused results from those produced by the generator. The network parameters are then tuned with a loss function composed of the generator's adversarial loss, a mapping loss, a gradient loss, a mean-squared-error loss, and the discriminator's adversarial loss. Finally, the generator, discriminator, and loss function are assembled into a GAN model for the experiments. The Pascal VOC2012 dataset serves as the training set and provides near-focus images, far-focus images, mapping (focus-map) images, and all-in-focus images. Experimental results show that the proposed model effectively extracts blur features from multi-focus images, and the fused images perform well on mutual information, phase congruency, and perceptual similarity.
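The abstract names four generator loss terms (adversarial, mapping, gradient, and mean-squared-error) without giving weights or exact formulations. The sketch below is a minimal, assumption-laden illustration of how such a composite loss could be combined: the `lam_*` weights, the log-form adversarial term, and the 1-D "images" are all hypothetical simplifications, not the paper's actual implementation.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def gradient(img):
    """First difference of a 1-D pixel list (toy stand-in for an image gradient)."""
    return [img[i + 1] - img[i] for i in range(len(img) - 1)]

def generator_loss(d_fake, fused, target, mask_pred, mask_true,
                   lam_adv=1.0, lam_map=1.0, lam_grad=1.0, lam_mse=1.0):
    """Composite generator loss in the spirit of the abstract:
    adversarial + mapping + gradient + mean-squared-error terms.
    d_fake is the discriminator's score on the generated fusion."""
    l_adv = -math.log(max(d_fake, 1e-12))            # push D to accept the fake
    l_map = mse(mask_pred, mask_true)                # focus-map (mapping) loss
    l_grad = mse(gradient(fused), gradient(target))  # edge/sharpness loss
    l_mse = mse(fused, target)                       # pixel reconstruction loss
    return (lam_adv * l_adv + lam_map * l_map
            + lam_grad * l_grad + lam_mse * l_mse)
```

With a perfect fusion (fused equals the target, correct focus map, discriminator fully fooled) every term vanishes and the loss is zero; any mismatch in pixels, gradients, or the focus map raises it.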
Authors: Jiang Liubing; Zhang Dian; Pan Bo; Zheng Peng; Che Li (School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004; School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004)
Source: Journal of Computer-Aided Design & Computer Graphics (EI, CSCD, PKU Core), 2021, No. 11, pp. 1715-1725 (11 pages)
Funding: National Natural Science Foundation of China (61561010); Natural Science Foundation of Guangxi (2017GXNSFAA198089); Guangxi Key Research and Development Program (Guike AB18126003, Guike AB18221016)
Keywords: multi-focus image fusion; U-Net; generative adversarial network; loss function
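The abstract states that training pairs (near-focus, far-focus, mapping, and all-in-focus images) are generated from Pascal VOC2012. One plausible way to synthesize such a pair is to blur the out-of-focus region of an all-in-focus image according to a binary focus map. The sketch below illustrates that idea on toy 1-D "images" with a 3-tap box blur; the blur kernel and data layout are assumptions, not the paper's actual pipeline.

```python
def box_blur(img):
    """3-tap box blur with edge replication on a 1-D pixel list."""
    n = len(img)
    return [(img[max(i - 1, 0)] + img[i] + img[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def make_pair(all_in_focus, mask):
    """Synthesize a (near-focus, far-focus) training pair.

    mask[i] == 1 marks the near (foreground) region: it stays sharp in the
    near-focus image and is blurred in the far-focus image, and vice versa.
    """
    blurred = box_blur(all_in_focus)
    near = [a if m else b for a, m, b in zip(all_in_focus, mask, blurred)]
    far = [b if m else a for a, m, b in zip(all_in_focus, mask, blurred)]
    return near, far
```

The all-in-focus source and the mask double as the ground-truth fusion target and mapping image, which is what makes a segmentation dataset like Pascal VOC2012 convenient for generating supervision.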
