Abstract
Deep convolutional neural networks have achieved remarkable success in single image super-resolution reconstruction, but their good performance usually comes at the cost of a very large number of parameters. This paper proposes a concise and compact recursive residual network. Local residual learning is adopted to mitigate the difficulty of training a very deep network; a recursive structure is introduced to control the number of model parameters while increasing the network depth; an adjustable gradient clipping strategy is applied to prevent vanishing/exploding gradients; and a deconvolution layer at the end of the network directly upsamples the image to the super-resolution output. Benchmark tests show that, while reconstructing super-resolution images of the same quality, the number of parameters and the computational complexity of the proposed method are only about 1/10 and 1/(2n^2) of those of VDSR, respectively.
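The parameter and compute savings claimed in the abstract can be illustrated with back-of-envelope arithmetic. This is a minimal sketch: the layer counts, block size, and recursion depth below are illustrative assumptions, not the paper's exact configuration (VDSR is commonly described as 20 conv layers of 64 channels with 3x3 kernels).

```python
# Hedged back-of-envelope sketch: why weight sharing and LR-domain
# processing shrink parameters and compute.  All configuration numbers
# here are assumptions for illustration only.

def conv_params(k, c_in, c_out):
    """Weights of one k x k conv layer (biases ignored)."""
    return k * k * c_in * c_out

# VDSR-style plain network: 20 distinct 3x3 conv layers.
vdsr = (conv_params(3, 1, 64)            # input layer
        + 18 * conv_params(3, 64, 64)    # 18 middle layers
        + conv_params(3, 64, 1))         # output layer

# Recursive residual network: one residual block whose weights are
# shared across every recursion, so effective depth grows while the
# parameter count stays that of a single block.
block = 2 * conv_params(3, 64, 64)       # one assumed 2-conv residual block
recursive = conv_params(3, 1, 64) + block + conv_params(3, 64, 1)

print(f"VDSR params:      {vdsr:,}")
print(f"Recursive params: {recursive:,}")
print(f"ratio: {recursive / vdsr:.2f}")  # roughly 1/9 with these assumptions

# Compute: VDSR runs every conv layer on the bicubic-upsampled image,
# which has n^2 more pixels than the low-resolution input.  Processing
# in LR space and upsampling only at the end with a deconvolution cuts
# per-layer compute by about a factor of n^2.
n = 3                                     # assumed scale factor
print(f"LR-domain compute is ~1/{n * n} of HR-domain compute per layer")
```

With shared weights, adding recursions deepens the network at zero parameter cost, which is the mechanism behind the roughly 1/10 parameter ratio reported against VDSR.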
Authors
ZHOU Deng-Wen, ZHAO Li-Juan, DUAN Ran, CHAI Xiao-Liang
School of Control and Computer Engineering, North China Electric Power University, Beijing 102206
Source
Acta Automatica Sinica (《自动化学报》)
Indexed in: EI, CSCD, Peking University Core Journals
2019, No. 6, pp. 1157-1165 (9 pages)
Funding
Beijing Natural Science Foundation (4162056)
Fundamental Research Funds for the Central Universities (2018ZD06)
Keywords
Recursive structure
Residual learning
Convolutional neural network
Deep learning
Super-resolution