Abstract
To address the structurally implausible results, artifacts, and distortion that arise when inpainting images with large, irregular missing regions, a dense-residual image inpainting algorithm based on efficient attention is proposed. First, the damaged image is downsampled to capture detailed features, which are passed to a dense residual network containing dilated convolutions; this enlarges the receptive field and refines the granularity of the restored details. An efficient attention mechanism is added in which weighted aggregation of the feature maps produces a global context vector that serves as a global description of the input features. In addition, a hypergraph convolution module is introduced to capture global information and improve the completeness of the restored details. The features are then upsampled to recover the structural and textural information of the image. Finally, the generated image is fed into an SN-PatchGAN discriminator for adversarial optimization, further improving the inpainting quality. Training and testing were carried out on public datasets. The experimental results show that the proposed algorithm can inpaint images with large randomly missing regions and effectively capture global context; its results exhibit smooth transition boundaries and clear details, satisfying visual coherence and realism, and it outperforms the compared algorithms in visual quality, peak signal-to-noise ratio, structural similarity, and mean error.
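The efficient-attention step summarized above (weighted aggregation of feature maps into a global context vector that describes the whole input) can be sketched as follows. This is a minimal NumPy illustration of the general efficient-attention idea, not the paper's implementation; the function name, tensor shapes, and the flattening of spatial positions into a sequence are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(Q, K, V):
    """Efficient attention over n flattened spatial positions.

    A softmax over the key positions first aggregates the values into a
    small (d_k x d_v) global context matrix -- the 'global context
    vector' summarizing all input features -- and each query then reads
    from that context. Cost is linear in n, versus O(n^2) for standard
    dot-product attention, which matters for dense feature maps.
    """
    k = softmax(K, axis=0)      # (n, d_k): normalize each channel over positions
    context = k.T @ V           # (d_k, d_v): global description of the input
    q = softmax(Q, axis=1)      # (n, d_k): per-position channel weights
    return q @ context          # (n, d_v): each position reads the global context

# Toy usage: 64 spatial positions, 8 key/value channels.
rng = np.random.default_rng(0)
n, dk, dv = 64, 8, 8
Q, K, V = (rng.normal(size=(n, dk)) for _ in range(3))
out = efficient_attention(Q, K, V)
print(out.shape)  # (64, 8)
```

Because the context matrix is only d_k x d_v, every output position attends to a shared global summary rather than to all other positions individually, which is what allows the mechanism to inject global context cheaply into the inpainting network.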
Source
Computer Science and Application (《计算机科学与应用》)
2023, No. 5, pp. 1148-1156 (9 pages)