
Improved Image Inpainting Model with Gaussian Distribution Feature Equalization (Cited by: 1)
Abstract  Image inpainting is the process of reconstructing the unknown regions of an image or video from the information in its known regions. Earlier deep-learning image completion networks mostly adopted a two-stage encoder-decoder structure; in the second stage, the Deepfill model adds a contextual attention module to improve restoration quality, but the overall results are prone to artifacts, blurring, and unclear edge structure. Many subsequent works, such as Deepfillv2 and PEN-Net, address this by improving the efficiency of feature extraction, but at the cost of more computation and a more complicated model. To solve this problem, this paper proposes an improved contextual attention module based on feature equalization that improves the overall restoration quality without introducing additional weights. Through feature equalization, each foreground pixel of the attention module obtains not only the recommendation of the background patch with the highest matching score but also the recommendations of the surrounding background patches, so the generated image varies more smoothly, adjacent pixels are more continuous, and the structure is more semantically consistent, reducing blurring and artifacts. Experiments compare the proposed model numerically with two other models on the CelebA-HQ and Paris Street View datasets. On CelebA-HQ, in the restoration experiment with small masks (<20%), the model reaches an average SSIM of 90.2% and an average PSNR of 34.79 dB, and its metrics are not significantly degraded as mask coverage increases. In the tasks with large masks (40%-50%), the proposed algorithm outperforms the other two algorithms in both average L1 and L2 error, with an average PSNR of 28.0 dB and an average SSIM of 86.74%. Visual comparison of the restored images shows that adding the Gaussian-distribution feature fusion module produces images with clearer structure and fewer artifacts. In summary, the improved algorithm proposed in this paper alleviates blurring and artifact problems without adding extra model weights.
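The paper does not reproduce its implementation here; the core idea of the abstract — smoothing each foreground pixel's contextual-attention score map with a normalized Gaussian kernel, so that the pixel also receives "recommendations" from the patches neighbouring the best-matching background patch — can be sketched as follows. All function names and the NumPy formulation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def gaussian_kernel1d(radius=1, sigma=1.0):
    """Normalized 1-D Gaussian kernel of width 2*radius+1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def equalize_scores(scores, sigma=1.0, radius=1):
    """Feature equalization (sketch): separably convolve each foreground
    pixel's score map over the background patch grid with a Gaussian,
    spreading the top patch's score onto its spatial neighbours.

    scores: array (n_fg, H_bg, W_bg) — similarity of each foreground
    pixel to every background patch location.
    """
    k = gaussian_kernel1d(radius, sigma)
    # Smooth along the width axis, then the height axis.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 2, scores)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 1, out)
    return out

def attention_weights(scores):
    """Softmax over all background patch locations per foreground pixel."""
    flat = scores.reshape(scores.shape[0], -1)
    e = np.exp(flat - flat.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).reshape(scores.shape)

# Toy example: one foreground pixel whose raw scores pick a single patch.
raw = np.zeros((1, 5, 5))
raw[0, 2, 2] = 9.0
eq = equalize_scores(raw, sigma=1.0, radius=1)
w_raw, w_eq = attention_weights(raw), attention_weights(eq)
# After equalization, the attention is less peaked: the neighbouring
# background patches also contribute to the foreground pixel.
```

In an actual attention module the smoothed scores would replace the raw ones before the softmax and patch-wise reconstruction step, which is what makes adjacent generated pixels vary more smoothly.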
Author: LI Wei (Southwest Jiaotong University, Chengdu 611730, China)
Affiliation: Southwest Jiaotong University
Source: Value Engineering (《价值工程》), 2022, Issue 18, pp. 101-104
Keywords: image inpainting; deep learning; generative adversarial network; attention mechanism; feature fusion