
Research on Scene Depth Structure Recovery of Monocular Images
Abstract: Because any single depth cue gives inaccurate depth perception, and building on Marr's architecture for constructing three-dimensional information from captured images, this paper presents a depth-ordering method for monocular images of outdoor scenes that combines multiple depth cues. First, the input monocular image is over-segmented into superpixels to determine the regions to be ordered. Each segmented region is then labeled as one of three classes: ground, sky, or vertical. Regions labeled sky are fixed at the farthest depth and regions labeled ground at the nearest, while vertical regions are ordered by jointly considering local occlusion relations between regions and the vanishing-point cue. Next, a graph model is constructed from the ground-contact positions of regions and the occlusion relations between them, and belief propagation performs global inference to obtain a consistent depth ordering. Finally, using regional appearance features, superpixel regions with similar appearance are merged to account for over-segmentation, and a hybrid evolutionary algorithm minimizes a global energy to produce the final depth ordering. The depth-ordering performance of the proposed method is evaluated on the BSDS500 dataset, and the experimental results show that it outperforms the depth ordering obtained by GM2014.
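The ordering stage described in the abstract can be sketched in miniature as follows. This is an illustrative toy only: the region names and occlusion list are invented, and a simple topological sort over the occlusion graph stands in for the paper's belief-propagation inference and energy minimization, which resolve conflicting cues rather than assuming a consistent acyclic occlusion graph.

```python
# Toy sketch of depth ordering from labeled regions and occlusion cues.
# Region names, the occlusion pairs, and the use of a topological sort
# (in place of the paper's belief propagation) are illustrative assumptions.

from collections import defaultdict, deque

def depth_order(regions, occludes):
    """Return region names ordered from nearest to farthest.

    regions  : dict mapping region name -> label in {"ground", "sky", "vertical"}
    occludes : list of (a, b) pairs meaning region a occludes region b,
               i.e. a is nearer to the camera than b.
    """
    vertical = [r for r, lab in regions.items() if lab == "vertical"]

    # Kahn's topological sort on the occlusion graph of vertical regions:
    # a region with no un-placed occluder is the next-nearest layer.
    succ = defaultdict(list)
    indeg = {r: 0 for r in vertical}
    for a, b in occludes:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(r for r in vertical if indeg[r] == 0)
    ordered = []
    while queue:
        r = queue.popleft()
        ordered.append(r)
        for s in succ[r]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    if len(ordered) != len(vertical):
        raise ValueError("cyclic occlusion cues; global reasoning needed")

    # Ground is fixed nearest and sky farthest, as in the abstract.
    ground = [r for r, lab in regions.items() if lab == "ground"]
    sky = [r for r, lab in regions.items() if lab == "sky"]
    return ground + ordered + sky

regions = {"road": "ground", "tree": "vertical",
           "house": "vertical", "clouds": "sky"}
occludes = [("tree", "house")]  # the tree partially occludes the house
print(depth_order(regions, occludes))  # ['road', 'tree', 'house', 'clouds']
```

In the full method, cyclic or conflicting occlusion evidence is exactly what the graph model and belief propagation are there to reconcile; the sketch simply fails in that case.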
Source: Computer Science and Application (《计算机科学与应用》), 2018, No. 4, pp. 522-531 (10 pages).
Funding: Supported by the National Key R&D Program of China (No. 2017YFB1002203) and the National Natural Science Foundation of China (No. 61503111, No. 61501467).
