
基于光场分析的多线索融合深度估计方法 (Cited by: 5)

Depth Estimation from Light Field Analysis Based Multiple Cues Fusion
Abstract: Inspired by the depth perception mechanism of the human visual system, and building on recent progress in light field analysis, this paper proposes a globally consistent depth estimation method that fuses multiple depth cues. The method first acquires light field data with a camera array and uses synthetic aperture imaging to reproject the light field onto specified depths, from which it extracts defocus-blur and stereo-disparity cues that characterize depth variation along different dimensions. Light field analysis is then used to compare the conditions under which the blur and disparity cues are reliable, and a multi-cue fusion algorithm is designed so that the cues complement each other's strengths. To obtain accurate and consistent depth, a global energy function with an adaptive smoothness constraint is formulated on a Markov random field model, and this energy is minimized with a graph cut algorithm to produce smooth, high-precision depth maps. The method is evaluated on both synthetic and real data; compared with single-cue and local depth estimation methods, it combines the advantages of the different cues and yields more robust depth results.
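The fusion-and-smoothing idea summarized in the abstract can be illustrated with a short sketch. The Python code below is not the authors' implementation: it assumes two per-pixel cost volumes over a set of depth hypotheses (one from the defocus-blur cue, one from the disparity cue), fuses them with a simple confidence weighting (the gap between the best and second-best hypothesis, an assumed stand-in for the paper's light-field-based reliability analysis), and smooths the result with a few iterated-conditional-modes passes in place of the graph-cut minimization the paper uses. The function names fuse_cost_volumes and icm_smooth are hypothetical.

```python
# Minimal sketch of confidence-weighted cue fusion followed by MRF-style smoothing.
# Assumed inputs: two (H, W, D) cost volumes, lower cost = better depth hypothesis.
import numpy as np

def fuse_cost_volumes(cost_defocus, cost_disparity, eps=1e-6):
    """Fuse two (H, W, D) cost volumes with per-pixel confidence weights.

    Confidence is taken here (an assumption, not the paper's exact measure) as the
    gap between the best and second-best hypothesis of each cue's cost curve.
    """
    def confidence(cost):
        sorted_cost = np.sort(cost, axis=2)
        return (sorted_cost[:, :, 1] - sorted_cost[:, :, 0]) + eps

    w_blur = confidence(cost_defocus)
    w_disp = confidence(cost_disparity)
    w_sum = w_blur + w_disp
    return (w_blur[..., None] * cost_defocus +
            w_disp[..., None] * cost_disparity) / w_sum[..., None]

def icm_smooth(cost, lam=0.1, iters=5):
    """Approximately minimize data cost + lam * |label difference| over 4-neighbours.

    The paper minimizes this kind of energy with graph cuts and an adaptive
    smoothness weight; ICM is used here only to keep the sketch self-contained.
    """
    H, W, D = cost.shape
    labels = np.argmin(cost, axis=2)          # winner-take-all initialization
    depth_vals = np.arange(D)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                # gather the labels of the 4-connected neighbours
                nbrs = []
                if y > 0:     nbrs.append(labels[y - 1, x])
                if y < H - 1: nbrs.append(labels[y + 1, x])
                if x > 0:     nbrs.append(labels[y, x - 1])
                if x < W - 1: nbrs.append(labels[y, x + 1])
                nbrs = np.array(nbrs)
                # energy of assigning each candidate depth label to this pixel
                smooth = lam * np.abs(depth_vals[:, None] - nbrs[None, :]).sum(axis=1)
                labels[y, x] = np.argmin(cost[y, x] + smooth)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H, W, D = 32, 32, 16
    cost_defocus = rng.random((H, W, D))      # placeholder defocus-cue costs
    cost_disparity = rng.random((H, W, D))    # placeholder disparity-cue costs
    fused = fuse_cost_volumes(cost_defocus, cost_disparity)
    depth = icm_smooth(fused, lam=0.2)
    print(depth.shape, depth.min(), depth.max())
```

A faithful implementation would replace the ICM pass with multi-label graph cuts (e.g. alpha-expansion) and derive the per-pixel smoothness weight adaptively from the image content, as the abstract describes.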
Source: Chinese Journal of Computers (《计算机学报》; indexed in EI and CSCD; Peking University Core Journal), 2015, No. 12, pp. 2437-2449 (13 pages).
Funding: National Natural Science Foundation of China (61103060, 61272287); National High Technology Research and Development Program of China (863 Program) (2012AA011803); Research Fund for the Doctoral Program of Higher Education of the Ministry of Education (20116102110031).
Keywords: camera array; light field analysis; multiple cue fusion; depth estimation


