
A Multi-Cue Fusion Depth Estimation Method Based on Light-Field Imaging  (Cited by: 3)

Depth Estimation from Multiple Cues Based on Light-Field Cameras
Abstract  In the field of computer vision, traditional depth estimation methods are usually based on two-dimensional images captured by one or more cameras; because the imaging process discards the directional information of light rays, the depth maps they produce tend to be inaccurate. New computational imaging devices, represented by the Raytrix and Lytro light-field cameras and now available from consumer to high-end models, record both the spatial distribution and the direction of light radiance in a single exposure, and therefore provide additional geometric constraints and cues for image-based depth estimation. In recent years, fusing multiple depth cues under the imaging model of lenslet (micro-lens array) light-field cameras has become a new direction for the depth estimation problem. Compared with methods that rely on a single cue, current multi-cue fusion methods achieve more accurate depth maps; however, most of them do not explicitly model occlusion, cannot preserve sharp transitions around object boundaries, and break down in noisy scenes.

This paper therefore proposes a multi-cue fusion depth estimation method, built on light-field imaging geometry, that accounts for occlusion and is robust to noise. First, drawing on studies of how human vision exploits the complementary strengths of different cues, we adopt the focus cue, which is insensitive to noise, and the correspondence cue, which is more accurate, as the basis for depth estimation. Second, for the imaging model of the lenslet light-field camera, we analyze the geometric properties of the two cues: we establish the local symmetry of the focal-stack profile and the proportional relationship between depth and the slope of lines in the epipolar plane image (EPI), and further show that, even under occlusion, the profile generated from the focal stack and the profile generated from the central sub-aperture image remain consistent in the neighborhood of the true disparity. Third, based on these geometric properties, we construct metrics that account for the effects of occlusion and noise: a mirror-symmetry metric derived from the local symmetry of the focal-stack profile, a consistency metric derived from the behavior of the two profiles under occlusion, and a correspondence metric derived from the depth-slope relationship in the EPI. Finally, depth estimation is modeled as a multi-label optimization problem: the metrics form the data term of an energy function, a smoothness term is added, and the energy is minimized with graph cuts to obtain the optimal disparity map.

In the experiments, we first carry out an ablation study on the model: the symmetry, consistency, and correspondence metrics are used individually, and then in combinations of two or all three; the results show that the method performs best when all the metrics are considered together. We then evaluate the proposed method and related methods from the literature on both synthetic and real-scene light-field datasets. The results show that the proposed method estimates depth accurately, handles detail at occlusion boundaries effectively, and remains robust to noise.
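The abstract describes the optimization and the EPI geometry only in prose. As a reading aid, the following LaTeX sketch writes out the standard data-plus-smoothness multi-label energy of the kind the abstract refers to, together with the EPI line model underlying the correspondence cue; the symbols used here (D_p, V, \mathcal{N}, \lambda, \alpha, \beta, \gamma, s_0) are illustrative notation chosen for this sketch, not the paper's own.

% Minimal sketch, assuming the usual data-plus-smoothness energy minimized by graph cuts.
% D_p combines the three metrics named in the abstract; V penalizes disparity-label
% differences between neighboring pixels (p, q) in a pixel neighborhood N.
\[
  E(d) \;=\; \sum_{p} D_p(d_p) \;+\; \lambda \sum_{(p,q)\in\mathcal{N}} V(d_p, d_q),
  \qquad
  D_p(d_p) \;=\; \alpha\, C_{\mathrm{sym}}(p, d_p) + \beta\, C_{\mathrm{cons}}(p, d_p) + \gamma\, C_{\mathrm{corr}}(p, d_p).
\]
% EPI line model behind the correspondence cue: in an epipolar plane image L(u, s),
% with u the angular (sub-aperture) coordinate and s the spatial coordinate, a scene
% point observed at disparity d traces the line
\[
  s(u) \;=\; s_0 + d\,u ,
\]
% so the slope of the EPI line equals the disparity, from which depth is recovered
% given the camera calibration.

Here C_sym, C_cons, and C_corr stand for the mirror-symmetry, consistency, and correspondence metrics constructed in the paper.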
Authors: HAN Lei; XU Meng-Xi; WANG Xin; WANG Hui-Bin (College of Computer and Information Engineering, Hohai University, Nanjing 211100; School of Computer Engineering, Nanjing Institute of Technology, Nanjing 211167)
Source: Chinese Journal of Computers (《计算机学报》), 2020, No. 1, pp. 107-122 (16 pages); indexed in EI, CSCD, and the Peking University Core Journal list
Funding: National Natural Science Foundation of China (61401195, 61563036); Natural Science Foundation of the Jiangsu Higher Education Institutions (17KJB520010)
Keywords: light field; depth estimation; multi-cue fusion; lenslet; symmetry; epipolar plane image
