Image Dehazing Algorithm Based on Convolutional Neural Network and Dynamic Ambient Light (cited 8 times)
Abstract: To effectively estimate the transmittance of hazy images and to correct the darkness of dehazed results, an image dehazing algorithm combining a convolutional neural network with dynamic ambient light is proposed. A transmittance estimation network based on a convolutional neural network is designed, and an image library of paired real hazy images and transmittance maps is constructed. Random block sampling of the library yields paired hazy patches and transmittance patches, which form the training set for the transmittance estimation network. The trained network is then used to estimate the transmittance of a hazy image, and the estimate is smoothed by filtering. To account for uneven illumination at imaging time, dynamic ambient light is used in place of the global atmospheric light. Finally, the smoothed transmittance and the dynamic ambient light are used to restore the image. Experimental results show that the algorithm not only removes haze effectively but also improves the brightness and saturation of the restored images. (A sketch of the scattering model and pipeline follows the keyword list below.)
Authors: Liu Jieping; Yang Yezhang; Chen Minyuan; Ma Lihong (School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong 510641, China)
Source: Acta Optica Sinica (《光学学报》; indexed in EI, CAS, CSCD, Peking University Core), 2019, No. 11, pp. 112-123 (12 pages)
Funding: National Natural Science Foundation of China (61471173, 61701181); Natural Science Foundation of Guangdong Province (2017A030325430)
Keywords: image processing; image enhancement; dehazing; atmospheric scattering model; transmittance; convolutional neural network
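
The abstract describes restoration under the atmospheric scattering model listed in the keywords, with the global atmospheric light replaced by a spatially varying (dynamic) ambient light. The record itself does not give the equations, so the following is the conventional formulation of that model and the restoration step it implies, not the paper's exact notation:

```latex
% Atmospheric scattering model (standard form) and the restoration step,
% with the global atmospheric light A replaced by a dynamic, spatially
% varying ambient light A(x); t_0 is a small lower bound on transmittance.
\begin{align}
  I(x) &= J(x)\, t(x) + A\,\bigl(1 - t(x)\bigr) \\
  J(x) &= \frac{I(x) - A(x)}{\max\bigl(t(x),\, t_0\bigr)} + A(x)
\end{align}
```

The pipeline outlined in the abstract (a CNN transmittance estimator trained on randomly sampled hazy/transmittance patch pairs, smoothing of the estimate, dynamic ambient light, restoration) could be sketched as below. The network architecture, the smoothing filter, and the ambient-light estimator are not specified in this record, so `TransmissionNet`, the box-filter smoothing, and the local-mean bright-channel ambient light are all illustrative assumptions rather than the authors' design:

```python
# Minimal sketch of the described pipeline; the architecture and the
# estimators are placeholders, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransmissionNet(nn.Module):
    """Hypothetical fully convolutional transmittance estimator.

    Intended to be trained on paired hazy / transmittance patches
    sampled at random from the image library described in the abstract.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # t(x) in (0, 1)
        )

    def forward(self, x):
        return self.features(x)


def box_smooth(x, kernel):
    """Simple box-filter smoothing (stand-in for the paper's filter)."""
    pad = kernel // 2
    x = F.pad(x, (pad, pad, pad, pad), mode="reflect")
    return F.avg_pool2d(x, kernel, stride=1)


def dynamic_ambient_light(hazy, kernel=31):
    """Assumed estimator: local mean of the per-pixel brightest channel,
    giving a spatially varying A(x) instead of a single global value."""
    bright = hazy.max(dim=1, keepdim=True).values  # (N, 1, H, W)
    return box_smooth(bright, kernel)


def dehaze(hazy, net, t0=0.1):
    """Restore J(x) = (I(x) - A(x)) / max(t(x), t0) + A(x)."""
    with torch.no_grad():
        t = net(hazy)                      # CNN-estimated transmittance
    t = box_smooth(t, 15)                  # smoothing of the estimate
    a = dynamic_ambient_light(hazy)        # dynamic ambient light A(x)
    j = (hazy - a) / t.clamp(min=t0) + a
    return j.clamp(0.0, 1.0)


# Usage: `hazy` is an (N, 3, H, W) tensor scaled to [0, 1]; `net` is a
# TransmissionNet whose weights were trained as described in the abstract.
# restored = dehaze(hazy, net)
```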

