
Super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning (cited by 1)
Abstract: Existing deep-learning-based super-resolution reconstruction methods mainly study reconstruction at integer magnification factors and seldom discuss arbitrary (e.g. non-integer) magnification. To address this problem, a super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning was proposed. Firstly, coordinate projection was used to find the correspondence between the coordinates of the high-resolution image and those of the low-resolution image. Secondly, building on the meta-learning network and taking the spatial information of the feature map into account, the extracted spatial features and the coordinate positions were combined as the input of the weight prediction network. Finally, the convolution kernels predicted by the weight prediction network were applied to the feature map, so as to effectively enlarge the feature map and obtain a high-resolution image at an arbitrary magnification. The proposed spatial meta-learning module can be combined with other deep networks to obtain super-resolution reconstruction methods with arbitrary magnification, and the method solves the practical reconstruction problem in which the target size is fixed but the scale factor is non-integer. Experimental results show that, with comparable space complexity (number of network parameters), the time complexity (computational cost) of the proposed method is 25%-50% of that of the other reconstruction methods, its Peak Signal-to-Noise Ratio (PSNR) is 0.01-5 dB higher than those of some other methods, and its Structural Similarity (SSIM) is 0.03-0.11 higher.
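The three steps summarized in the abstract (coordinate projection, kernel prediction, kernel application) can be sketched in PyTorch in the spirit of Meta-SR-style upscaling. This is not the authors' implementation: the class name SpatialMetaUpscale, the per-pixel descriptor, and the network sizes are illustrative assumptions, and the sketch feeds only coordinate information to the weight-prediction network, whereas the proposed method also combines extracted spatial features with the coordinates.

```python
import torch
import torch.nn as nn


class SpatialMetaUpscale(nn.Module):
    """Illustrative meta-upscale module for an arbitrary (possibly
    non-integer) scale factor.

    For every high-resolution (HR) pixel the module:
      1. projects the HR coordinate back to a low-resolution (LR)
         position via coordinate projection;
      2. feeds the sub-pixel offset (plus the scale) to a small
         weight-prediction network that outputs a per-pixel kernel;
      3. applies the predicted kernel to the LR feature vector at the
         projected position to produce the HR output pixel.

    Note: unlike the paper, this sketch feeds only coordinates to the
    weight-prediction network; the spatial features of the feature map
    are not included in the prediction input.
    """

    def __init__(self, in_channels: int, out_channels: int = 3, hidden: int = 256):
        super().__init__()
        # Weight-prediction network: per-pixel descriptor -> 1x1 kernel.
        self.weight_net = nn.Sequential(
            nn.Linear(3, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, in_channels * out_channels),
        )
        self.in_channels = in_channels
        self.out_channels = out_channels

    def forward(self, feat: torch.Tensor, scale: float) -> torch.Tensor:
        # feat: LR feature map of shape (B, C, h, w).
        b, c, h, w = feat.shape
        H, W = int(h * scale), int(w * scale)

        # 1. Coordinate projection: each HR pixel (i, j) maps to the LR
        #    pixel (floor(i / r), floor(j / r)) plus a sub-pixel offset.
        ys = torch.arange(H, device=feat.device, dtype=feat.dtype)
        xs = torch.arange(W, device=feat.device, dtype=feat.dtype)
        src_y, src_x = ys / scale, xs / scale
        iy, ix = src_y.floor().long(), src_x.floor().long()
        off_y, off_x = src_y - iy, src_x - ix

        # Per-HR-pixel descriptor: (offset_y, offset_x, 1 / scale).
        desc = torch.stack(
            [off_y[:, None].expand(H, W),
             off_x[None, :].expand(H, W),
             torch.full((H, W), 1.0 / scale, device=feat.device, dtype=feat.dtype)],
            dim=-1,
        )                                                    # (H, W, 3)

        # 2. Predict one (out_channels x in_channels) kernel per HR pixel.
        kernels = self.weight_net(desc)
        kernels = kernels.view(H, W, self.out_channels, self.in_channels)

        # 3. Gather projected LR feature vectors and apply the kernels.
        gathered = feat[:, :, iy][:, :, :, ix]               # (B, C, H, W)
        gathered = gathered.permute(0, 2, 3, 1)              # (B, H, W, C)
        out = torch.einsum('bhwc,hwoc->bhwo', gathered, kernels)
        return out.permute(0, 3, 1, 2)                       # (B, out_c, H, W)
```

Under these assumptions, a feature map produced by any backbone (for example, residual dense blocks) could be enlarged by a non-integer factor such as 1.5 with SpatialMetaUpscale(in_channels=64)(feat, scale=1.5).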
Authors: SUN Zhongfan (孙忠凡), ZHOU Zhenghua (周正华), ZHAO Jianwei (赵建伟), Department of Information and Mathematics, China Jiliang University, Hangzhou, Zhejiang 310018, China
Source: Journal of Computer Applications (《计算机应用》, CSCD, Peking University Core Journal), 2020, No. 12, pp. 3471-3477 (7 pages)
Funding: National Natural Science Foundation of China (61571410); Natural Science Foundation of Zhejiang Province (LY18F020018, LSY19F020001)
Keywords: super-resolution; deep learning; spatial meta-learning; residual dense module; weight prediction