Journal Article

Image Inpainting via Residual Attention Fusion and Gated Information Distillation (Cited by: 2)
Abstract: Image inpainting is of great significance and value in computer vision tasks. In recent years, image inpainting models based on deep learning have been widely applied in this field. However, existing deep-learning inpainting models make insufficient use of the valid information in a damaged image and suffer interference from its mask information, which leads to missing structure and blurred detail in the repaired result. To address this, this paper proposes an image inpainting model based on residual attention fusion and gated information distillation. The model consists of a generator and a discriminator: the generator's backbone is a U-Net composed of an encoder and a decoder, and the discriminator is a Markov discriminator composed of six convolutional layers. Residual attention fusion blocks are constructed in both the encoder and the decoder to enhance the utilization of valid information in the damaged image and to reduce the interference of mask information. A gated information distillation block is embedded in the skip connections between the encoder and the decoder to further extract effective low-level features from the image to be repaired. Experimental results on public face and street-view datasets show that the proposed model achieves better inpainting quality in both semantic structure and texture detail, and outperforms five comparison models on structural similarity (SSIM), peak signal-to-noise ratio (PSNR), mean absolute error (MAE), mean squared error (MSE), and Fréchet distance metrics.
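The abstract does not give the exact definition of the gated information distillation block; the paper's own formulation may differ. As a rough illustration of the general idea of gating a skip connection, here is a minimal NumPy sketch (function names, shapes, and the sigmoid gate are assumptions for illustration, not the authors' code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_skip(skip_features, gate_logits):
    """Soft-gate an encoder skip-connection feature map before it reaches
    the decoder: a sigmoid mask scales each activation, so responses that
    mostly carry mask (hole) information can be suppressed while valid
    low-level features pass through.  (Illustrative sketch only.)"""
    return skip_features * sigmoid(gate_logits)

# Toy example: batch 1, 2 channels, 4x4 spatial map.
feats = np.ones((1, 2, 4, 4))
gates = np.zeros((1, 2, 4, 4))  # sigmoid(0) = 0.5 -> half-strength pass-through
out = gated_skip(feats, gates)
print(out.mean())  # 0.5
```

In a real model the gate logits would come from a learned convolution over the same encoder features, so the network decides per-pixel how much of each skip feature to distill into the decoder.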
Authors: YU Ying; HE Penghao; XU Chaoyue (School of Information Science and Engineering, Yunnan University, Kunming 650091, Yunnan, China)
Source: Journal of South China University of Technology (Natural Science Edition), 2022, Issue 12, pp. 49-59 (11 pages). Indexed in: EI, CAS, CSCD, Peking University Core.
Funding: National Natural Science Foundation of China (62166048); Yunnan Applied Basic Research Program (2018FB102).
Keywords: deep learning; image inpainting; residual attention fusion; gated information distillation
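Two of the evaluation metrics named in the abstract, PSNR and MAE, are simple to state and compute; a small self-contained sketch (the array values here are made up for illustration) shows what "higher PSNR / lower MAE is better" means numerically:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image and
    an inpainted result; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def mae(reference, restored):
    """Mean absolute error; lower means better pixel-level fidelity."""
    return np.mean(np.abs(reference.astype(np.float64) - restored.astype(np.float64)))

# Toy 8x8 "images" with a uniform error of 10 gray levels -> MSE = 100.
ref = np.full((8, 8), 100.0)
out = np.full((8, 8), 110.0)
print(round(psnr(ref, out), 2))  # 28.13
print(mae(ref, out))             # 10.0
```

SSIM and the Fréchet distance used in the paper are more involved (structural and distributional comparisons, respectively) and are usually taken from a library rather than hand-rolled.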
