
Depth recovery from a single defocused image using a Cauchy-distribution-based point spread function model

Cited by: 3
Abstract: Objective This study addresses the challenging problem of recovering the 3D depth of a scene from a single image. Most current approaches to depth recovery from a single defocused image model the point spread function (PSF) as a 2D Gaussian, estimate the defocus blur at image edges to obtain a sparse depth map, and then extend it to a full depth map of the scene. These methods are sensitive to noise, and high-quality recovery is difficult to achieve; we therefore propose estimating the spatially varying defocus blur at edge locations using a Cauchy-distribution PSF model. Method The input defocused image is re-blurred twice with two Cauchy-distribution kernels. The amount of defocus blur at an edge location is obtained from the ratio between the gradients of the two re-blurred images and the two scale parameters of the Cauchy distributions. A full depth map is then recovered by propagating the blur amounts at edge locations to the entire image via matting interpolation. Result The original "Lenna" image and a rotated, Gaussian-noise-corrupted copy are used to simulate image noise and edge-position error, and the average error of the Cauchy gradient ratio is compared with that of the Gaussian gradient ratio. Various real-scene images are also used to compare our depth recovery results with those of existing single-image depth recovery methods. The average error of the Cauchy gradient ratio is smaller than that of the Gaussian gradient ratio, and experiments on several real images demonstrate that the method effectively estimates the defocus map from a single uncalibrated defocused image. Conclusion Our method is robust to image noise, inaccurate edge locations, and interference from neighboring edges, and it generates more accurate scene depth maps than most existing Gaussian-model-based methods. The results also demonstrate that modeling the PSF with a non-Gaussian model is feasible and effective.
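To make the Method section concrete, the sketch below (my own illustration, not the authors' code) builds a normalized 2D Cauchy-like kernel, re-blurs an image with two such kernels, and forms the gradient-ratio map from which the edge blur amount would be read off. The kernel form, truncation radius, and scale parameters are illustrative assumptions; the closed-form inversion from ratio to blur amount and the matting-interpolation propagation step are omitted.

```python
import numpy as np

def cauchy_kernel(gamma, radius=15):
    # Truncated, numerically normalized 2D isotropic Cauchy-like kernel
    # (assumed form: gamma / (x^2 + y^2 + gamma^2)^(3/2)).
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    k = gamma / (x ** 2 + y ** 2 + gamma ** 2) ** 1.5
    return k / k.sum()

def reblur(img, kernel):
    # FFT-based linear convolution with zero padding, cropped to 'same' size.
    H, W = img.shape
    kh, kw = kernel.shape
    pad = np.zeros((H + kh - 1, W + kw - 1))
    pad[:H, :W] = img
    kpad = np.zeros_like(pad)
    kpad[:kh, :kw] = kernel
    out = np.real(np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(kpad)))
    r0, c0 = kh // 2, kw // 2
    return out[r0:r0 + H, c0:c0 + W]

def gradient_ratio(img, gamma1=1.0, gamma2=2.0):
    # Re-blur the input with two Cauchy kernels and take the ratio of
    # gradient magnitudes; at an edge, a sharper input yields a larger ratio.
    b1 = reblur(img, cauchy_kernel(gamma1))
    b2 = reblur(img, cauchy_kernel(gamma2))
    gy1, gx1 = np.gradient(b1)
    gy2, gx2 = np.gradient(b2)
    m1 = np.hypot(gx1, gy1)
    m2 = np.hypot(gx2, gy2)
    return m1 / (m2 + 1e-12)
```

For a sharp step edge the ratio at the edge is well above 1 (the narrower kernel preserves a steeper gradient), and it decreases toward 1 as the input edge becomes more defocused; this monotonic relation is what lets the method map the ratio at each edge pixel to a blur amount before propagating it to the full image.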
Authors: 明英 (Ming Ying), 蒋晶珏 (Jiang Jingjue)
Source: Journal of Image and Graphics (《中国图象图形学报》), CSCD, Peking University core journal, 2015, No. 5, pp. 708-714 (7 pages)
Funding: National High-Tech R&D Program of China (863 Program) (SQ2006AA 12Z108506); Young Teachers Program of the Fundamental Research Funds for the Central Universities (3101017)
Keywords: image processing; depth recovery; depth estimation; depth from defocus; defocus blur; Gaussian gradient; Cauchy distribution
