Abstract
In recent years, convolutional neural networks (CNNs) have achieved promising results in single image super-resolution (SISR). Deep networks can establish a complex mapping between low-resolution and high-resolution images, so the quality of reconstructed images is substantially improved over traditional methods. However, existing SISR methods enlarge the receptive field of the convolution kernels mainly by deepening and widening the network, and they treat spatial positions and channels of differing importance equally, so a large share of the computation is wasted on unimportant features. To address this problem, the proposed algorithm captures the implicit weight information of the channel and spatial domains through a dual attention module, allocating computing resources more efficiently and accelerating network convergence. Global features are fused through residual connections, which not only lets the backbone concentrate on learning the high-frequency information lost in the low-resolution input, but also speeds up convergence through effective feature supervision. To alleviate the drawbacks of the MAE loss function, a special Huber loss function is introduced. Experimental results on mainstream benchmark datasets show that the proposed algorithm clearly improves image reconstruction accuracy over existing SISR methods.
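The abstract names three concrete ingredients: channel/spatial (dual) attention, residual fusion of global features, and a Huber loss replacing MAE. The sketch below illustrates the first and third of these in PyTorch; the layer sizes, reduction ratio, delta value, and module names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (PyTorch), assuming a CBAM-style dual attention and a standard
# Huber loss; all hyperparameters below are placeholders, not the paper's values.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: re-weights feature channels from global statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # global average pooling -> per-channel weights
        return x * w.view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """Spatial attention: re-weights spatial positions from pooled channel maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


def huber_loss(pred, target, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones,
    which smooths the gradient near zero compared with plain MAE."""
    diff = torch.abs(pred - target)
    quadratic = torch.clamp(diff, max=delta)
    linear = diff - quadratic
    return (0.5 * quadratic ** 2 + delta * linear).mean()
```

A dual attention block would typically apply `ChannelAttention` followed by `SpatialAttention` inside each residual unit, with a long skip connection fusing the global features before the upsampling stage.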
Authors
LI Bin (李彬)
WANG Ping (王平)
ZHAO Si-yi (赵思逸)
LI Bin; WANG Ping; ZHAO Si-yi (College of Electronic Science and Technology, National University of Defense Technology, Changsha Hunan 410072, China; College of Computer Science and Technology, National University of Defense Technology, Changsha Hunan 410072, China)
Source
《图学学报》
CSCD
Peking University Core Journals (北大核心)
2021, No. 2, pp. 206-215 (10 pages)
Journal of Graphics