
Image inpainting algorithm of multi-scale generative adversarial network based on multi-feature fusion
Cited by: 1
Abstract: Aiming at the problems of the Multi-scale Generative Adversarial Networks Image Inpainting algorithm (MGANII), such as unstable training during image inpainting, poor structural consistency, and insufficient details and textures in the inpainted images, an image inpainting algorithm of a multi-scale generative adversarial network based on multi-feature fusion was proposed. Firstly, to address the poor structural consistency and the insufficient details and textures, a Multi-Feature Fusion Module (MFFM) was introduced into the traditional generator, and a perception-based feature reconstruction loss function was introduced to improve the feature extraction ability of the dilated convolutional network, thereby supplying more details and texture features for the inpainted image. Then, a perception-based feature matching loss function was introduced into the local discriminator to enhance its discrimination ability, thereby improving the structural consistency of the inpainted image. Finally, a risk penalty term was introduced into the adversarial loss function to satisfy the Lipschitz continuity condition, so that the network converges rapidly and stably during training. On the CelebA dataset, the proposed multi-feature fusion image inpainting algorithm converges faster than MGANII. Meanwhile, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) of the images inpainted by the proposed algorithm are improved by 0.45% to 8.67% and 0.88% to 8.06% respectively compared with those of the images inpainted by the baseline algorithms, and the Fréchet Inception Distance score (FID) is reduced by 36.01% to 46.97%. Experimental results show that the inpainting performance of the proposed algorithm is better than that of the baseline algorithms.
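The abstract names three loss components: a perception-based feature reconstruction loss for the generator, a perception-based feature matching loss for the local discriminator, and a risk penalty term added to the adversarial loss to satisfy the Lipschitz continuity condition. The sketch below shows how such terms are commonly written in PyTorch; it is not the authors' code, and the choice of VGG16 as the feature extractor, the L1 distances, and all function names are assumptions made for illustration.

# Minimal sketch of the three loss terms mentioned in the abstract, assuming a
# PyTorch implementation. Module choices (VGG16 features), distance metrics (L1),
# and all names are illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F
from torchvision import models


class VGGFeatures(torch.nn.Module):
    """Frozen VGG16 slice used by the two perception-based losses (assumed extractor)."""
    def __init__(self, num_layers=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.slice = torch.nn.Sequential(*list(vgg.children())[:num_layers])
        for p in self.slice.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.slice(x)


def feature_reconstruction_loss(vgg, inpainted, target):
    # Perception-based feature reconstruction loss for the generator:
    # distance between deep features of the inpainted and ground-truth images.
    return F.l1_loss(vgg(inpainted), vgg(target))


def feature_matching_loss(fake_feats, real_feats):
    # Perception-based feature matching loss for the local discriminator:
    # match intermediate discriminator activations on inpainted vs. real patches.
    return sum(F.l1_loss(f, r.detach()) for f, r in zip(fake_feats, real_feats))


def gradient_penalty(discriminator, real, fake):
    # Risk penalty term enforcing the Lipschitz continuity condition, in the
    # spirit of WGAN-GP: penalize the gradient norm of D at interpolated samples.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1.0 - alpha) * fake.detach()).requires_grad_(True)
    d_out = discriminator(mixed)
    grads = torch.autograd.grad(outputs=d_out, inputs=mixed,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    grads = grads.flatten(start_dim=1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

In a full training loop of this kind, the generator objective would typically combine the adversarial term with weighted reconstruction and feature matching terms, and the penalty would be added to the discriminator objective; the weighting coefficients are not given in the abstract and would have to be taken from the paper.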
Authors: CHEN Gang (陈刚), LIAO Yongwei (廖永为), YANG Zhenguo (杨振国), LIU Wenying (刘文印) (School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, Guangdong 510006, China; Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen, Guangdong 518005, China)
Source: Journal of Computer Applications (《计算机应用》, CSCD, Peking University Core), 2023, Issue 2, pp. 536-544 (9 pages)
Funding: National Natural Science Foundation of China (62076073).
Keywords: multi-scale; feature matching; feature fusion; image inpainting; Generative Adversarial Network (GAN)