Abstract
Image enhancement under extreme low-light conditions is defined as the task of recovering an image with a normal exposure from a short-exposure image captured in an extremely dark scene. To address this problem, this paper proposes an end-to-end fully convolutional network model based on an encoding-decoding network and a residual network. The translation model consists of two parts, an encoding-decoding (U-Net-like) network and a refinement network; it takes the raw sensor data of a short-exposure image as input and directly produces an output image in RGB format. The model incorporates the adversarial idea, a residual structure, and a perceptual loss: the low-frequency information of the image is first reconstructed by encoding and decoding the extremely low-light image, and the reconstructed low-frequency information is then fed into the residual network to recover the high-frequency information. Experiments on the SID dataset and comparisons with previous methods show that the proposed approach effectively improves the visual quality of images captured under extreme low-light conditions after low-light enhancement and enriches the expression of details, making the textures of objects clearer and their edges sharper.
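The two-stage architecture described above can be outlined as a coarse-to-fine convolutional pipeline. The sketch below is a minimal, hypothetical PyTorch rendering of that pipeline, not the authors' released implementation: an encoder-decoder branch maps packed raw short-exposure data to a coarse RGB image (the low-frequency reconstruction), and a residual refinement branch adds back high-frequency detail. The 4-channel packed-Bayer input, the layer widths, the network depths, and the number of residual blocks are all assumptions introduced for illustration; the adversarial and perceptual losses used during training are omitted.

# Minimal sketch (assumed configuration) of the encoder-decoder + residual
# refinement pipeline described in the abstract. Layer sizes and the packed
# 4-channel raw input are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with LeakyReLU, as commonly used in U-Net variants.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
    )


class EncoderDecoder(nn.Module):
    """U-Net-like branch: packed raw data -> coarse RGB (low-frequency content)."""
    def __init__(self, in_ch=4, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)


class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection


class RefinementNet(nn.Module):
    """Residual branch: coarse RGB -> refined RGB with high-frequency detail."""
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, coarse):
        return coarse + self.tail(self.blocks(self.head(coarse)))


class TranslationModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse_net = EncoderDecoder()
        self.refine_net = RefinementNet()

    def forward(self, raw):
        coarse = self.coarse_net(raw)    # low-frequency reconstruction
        return self.refine_net(coarse)   # high-frequency refinement


if __name__ == "__main__":
    raw = torch.randn(1, 4, 256, 256)    # packed Bayer raw patch (assumed shape)
    rgb = TranslationModel()(raw)
    print(rgb.shape)                     # torch.Size([1, 3, 256, 256])

Keeping the refinement branch purely residual means it only needs to learn the detail missing from the coarse output, which mirrors the low-frequency/high-frequency decomposition described in the abstract.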
Authors
YANG Yong; LIU Hui-yi (College of Computer and Information, Hohai University, Nanjing, Jiangsu 211100, China)
Source
Journal of Graphics (《图学学报》)
CSCD
Peking University Core Journals (北大核心)
2020, No. 4, pp. 520-528 (9 pages)
Keywords
deep learning
convolutional neural network
extremely low-light image
generative adversarial network
image enhancement