Single-Image Refocusing Using Light Field Synthesis and Circle of Confusion Rendering
Abstract: A method for dynamically refocusing a single image is presented; by combining deep-learning-based light field synthesis with geometry-based circle of confusion rendering, it simulates the refocusing effect of a light field. The method takes only a single image as input: depth is first estimated, the depth map is then converted to disparity, and finally the circle of confusion diameter is determined at each depth to resample the pixels. Two neural network structures are designed and trained by supervised learning, using the multi-view sub-images and the refocused images of a light field camera, respectively, as training samples. Experiments on multiple datasets and real scenes show that, at an acceptable computational cost, the proposed method achieves better visual quality and evaluation metrics than other methods, with the peak signal-to-noise ratio reaching 34.55 and the structural similarity index reaching 0.937.
Authors: Wang Qi (王奇) and Fu Yutian (傅雨田), Key Laboratory of Infrared System Detection and Imaging, Chinese Academy of Sciences, Shanghai 200083, China; Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China; University of Chinese Academy of Sciences, Beijing 100049, China
Source: Acta Optica Sinica (《光学学报》), 2020, No. 1, pp. 275-283 (9 pages). Indexed in EI, CAS, CSCD, and the Peking University Core Journal list.
Funding: National 863 Program (2015AA7015090, 2015AA7015097); National Natural Science Foundation of China (11573049).
Keywords: imaging systems; computational imaging; light field; refocusing; depth estimation; circle of confusion rendering
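
The abstract above outlines a two-part pipeline: a learned network synthesizes light-field information from the single input image, and a geometric stage converts the estimated depth to disparity, assigns each depth a circle of confusion (CoC) diameter, and resamples pixels to render defocus. The learned light-field synthesis is not reproduced here; the sketch below illustrates only the geometric CoC-rendering idea, assuming a thin-lens model in which a point at depth z, with the lens focused at distance z_f, focal length f, and aperture diameter A = f/N, blurs into a disc of diameter c = A * f * |z - z_f| / (z * (z_f - f)). All function names, parameter values, and the layered Gaussian-blur approximation are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def coc_diameter(depth, focus_dist, focal_len, f_number):
    # Thin-lens circle of confusion diameter (same units as focal_len) for
    # points at `depth` when the lens is focused at `focus_dist`.
    aperture = focal_len / f_number
    return aperture * focal_len * np.abs(depth - focus_dist) / (depth * (focus_dist - focal_len))

def refocus(image, depth, focus_dist, focal_len=0.05, f_number=2.0,
            pixel_pitch=5e-6, n_layers=8):
    # Approximate CoC rendering: slice the scene into depth layers and blur each
    # layer with a Gaussian whose width tracks its CoC diameter in pixels.
    # `image` is float (H, W, 3); `depth` is metric depth (H, W), all > focal_len.
    coc_px = coc_diameter(depth, focus_dist, focal_len, f_number) / pixel_pitch
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    layer = np.clip(np.digitize(depth, edges) - 1, 0, n_layers - 1)
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros(depth.shape, dtype=np.float64)
    for k in range(n_layers):
        mask = (layer == k).astype(np.float64)
        if not mask.any():
            continue
        sigma = float(coc_px[layer == k].mean()) / 2.0    # CoC diameter -> rough Gaussian sigma
        out += gaussian_filter(image * mask[:, :, None], sigma=(sigma, sigma, 0))
        weight += gaussian_filter(mask, sigma=sigma)
    return out / np.maximum(weight[:, :, None], 1e-6)     # normalize overlapping layer weights

Sweeping focus_dist over the depth range reproduces the dynamic refocusing described in the abstract: the layer nearest the focal plane stays sharp while the others blur in proportion to their CoC. A disc-shaped (bokeh) kernel, or the pixel resampling used in the paper, would replace the Gaussian for a more faithful rendering.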