Abstract
This paper proposes a method for restoring damaged old photos based on generative adversarial networks. The generator is built on a U-Net architecture in which every convolutional layer is replaced by partial convolution, which operates only on valid pixels. This not only avoids the color discrepancy and blurriness caused by standard convolution, but also allows the network to repair arbitrary, non-centered irregular damaged regions. To capture dependence on long-distance feature information, a contextual attention module is added in the decoding stage of the generator to maintain semantic coherence. In addition to the basic adversarial loss, the generator's loss function includes perceptual, style, and reconstruction losses to improve training stability. Experiments on the CelebA-HQ dataset and on real damaged old photos show that the method is not limited by the type of damage and achieves good restoration results on damaged old photos.
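The partial convolution the abstract refers to computes each output value only from the valid pixels in the receptive field, rescales by the fraction of valid pixels, and then updates the mask so that repaired positions become valid for later layers. Below is a minimal single-channel NumPy sketch of that rule (valid padding, no learned parameters); it is an illustration of the operation, not the paper's implementation.

```python
import numpy as np

def partial_conv2d(x, mask, weight, bias=0.0):
    """Single-channel partial convolution with valid padding (sketch).

    x:      (H, W) image; pixels where mask == 0 hold arbitrary (damaged) values
    mask:   (H, W) binary mask, 1 = valid pixel, 0 = hole
    weight: (k, k) convolution kernel
    Returns (output, updated_mask). A window containing no valid pixel
    produces 0 and stays masked; any window with at least one valid
    pixel becomes valid in the updated mask.
    """
    k = weight.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    new_mask = np.zeros_like(out)
    window_size = float(k * k)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + k, j:j + k]
            n_valid = m.sum()
            if n_valid > 0:
                # Convolve only over valid pixels and rescale so the
                # response magnitude is independent of how many pixels
                # in the window were masked out.
                out[i, j] = (np.sum(weight * x[i:i + k, j:j + k] * m)
                             * (window_size / n_valid) + bias)
                new_mask[i, j] = 1.0
    return out, new_mask
```

With a fully valid mask this reduces to an ordinary convolution; as the network stacks such layers, the mask shrinks toward all-valid, which is what lets the method handle arbitrary irregular hole shapes.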
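The contextual attention module mentioned above borrows long-distance information by matching features inside the hole against patches from the known background and rebuilding each hole feature as an attention-weighted combination of those patches. A minimal NumPy sketch of that matching step, using cosine similarity and a softmax (the function name and the `temperature` parameter are illustrative assumptions, not names from the paper):

```python
import numpy as np

def contextual_attention(hole_feats, bg_patches, temperature=10.0):
    """Sketch of contextual attention over flattened feature patches.

    hole_feats:  (n, d) features at hole locations
    bg_patches:  (m, d) features of known background patches
    Returns (n, d): each hole feature reconstructed as a softmax-weighted
    sum of background patches, with weights given by cosine similarity.
    """
    eps = 1e-8
    h_norm = hole_feats / (np.linalg.norm(hole_feats, axis=1, keepdims=True) + eps)
    b_norm = bg_patches / (np.linalg.norm(bg_patches, axis=1, keepdims=True) + eps)
    sim = (h_norm @ b_norm.T) * temperature        # (n, m) scaled cosine similarity
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over background patches
    return attn @ bg_patches
```

A higher `temperature` sharpens the attention toward the single best-matching background patch, which preserves texture detail; a lower value blends several patches, which smooths the result.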
Authors
CHEN Yuan-yuan (陈圆圆); LIU Hui-yi (刘惠义) (College of Computer and Information, Hohai University, Nanjing 211100, China)
Source
Computer and Modernization (《计算机与现代化》), 2021, No. 4, pp. 42-47 (6 pages)
Funding
Science and Technology Program of the Jiangsu Provincial Department of Water Resources (2017003ZB)
Keywords
generative adversarial networks
partial convolution
contextual attention
old photo inpainting