

Incremental large scale dense semantic mapping
Abstract: To achieve accurate and efficient large-scale scene understanding, an incremental, conditional-random-field-based method for constructing large-scale dense semantic maps is proposed. The method estimates the camera trajectory with stereo visual odometry and builds the semantic map from the semantic labeling results of the image sequence. The key step is the incremental construction of the map: for each densified input frame, the voxels newly added relative to the previous frame are detected, the 3D points inside these voxels are over-segmented into supervoxels, the supervoxels are labeled under the guidance of the labeling results of temporally neighboring frames, and the newly labeled points are fused into the existing map frame by frame using the rigid transformation between frames. The conditional random field takes the temporal prior from sequential frames as its data term, defines the smoothness term over the adjacency relations between supervoxels, and solves for the labels of the newly added supervoxels by graph cuts. Experiments show that the method obtains accurate large-scale semantic maps, effectively reduces the processing of redundant points, and improves the labeling results at the image level.
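For reference, the pairwise conditional random field described above can be written in its standard energy form; the notation and the weight λ below are illustrative, not the exact potentials defined in the paper:

$$E(\mathbf{l}) \;=\; \sum_{i \in \mathcal{S}} \psi_i(l_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \psi_{ij}(l_i, l_j)$$

where $\mathcal{S}$ is the set of newly added supervoxels, $\mathcal{N}$ their adjacency relations, $\psi_i$ the data term derived from the labeling results of temporally neighboring frames, $\psi_{ij}$ the smoothness term penalizing label disagreement between adjacent supervoxels, and $\lambda$ a weighting factor; the minimizing labeling is obtained with graph cuts.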
Source: Journal of Zhejiang University (Engineering Science), 2016, Issue 2, pp. 385-391 (7 pages). Indexed by EI, CAS, CSCD, and the Peking University Core Journal list.
Funding: National Natural Science Foundation of China Young Scientists Fund (61001171); National High-Tech R&D Program of China ("863" Program) (2014AA09A510).
Keywords: large-scale; semantic map; incremental; supervoxel; conditional random field; dense point cloud
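A minimal sketch of one incremental update step, as described in the abstract, is given below. It is not the authors' implementation: the voxel size, class count, smoothness weight, and all function names are hypothetical, and a simple ICM loop stands in for the graph-cut solver used in the paper.

```python
import numpy as np

VOXEL_SIZE = 0.1      # assumed map voxel resolution in metres (hypothetical)
NUM_CLASSES = 11      # assumed number of semantic classes (hypothetical)
SMOOTHNESS = 0.5      # assumed Potts penalty between adjacent supervoxels


def voxel_keys(points, size=VOXEL_SIZE):
    """Map an (N, 3) array of 3D points to a set of integer voxel indices."""
    return set(map(tuple, np.floor(points / size).astype(int)))


def new_voxels(curr_points, occupied_voxels):
    """Voxels occupied by the current frame but not yet present in the map."""
    return voxel_keys(curr_points) - occupied_voxels


def unary_from_frames(supervoxel_points, frame_label_probs):
    """Data term: negative log of the mean class distribution that the 2D
    labelings of neighboring frames assign to the supervoxel's 3D points.
    `frame_label_probs(p)` returns a (NUM_CLASSES,) probability vector."""
    probs = np.mean([frame_label_probs(p) for p in supervoxel_points], axis=0)
    return -np.log(probs + 1e-9)


def infer_labels(unaries, adjacency, smoothness=SMOOTHNESS, iters=10):
    """Minimize sum_i U_i(l_i) + smoothness * sum_(i,j) [l_i != l_j] over
    supervoxel labels; ICM stands in here for the graph-cut solver."""
    labels = np.argmin(unaries, axis=1)
    for _ in range(iters):
        for i in range(len(labels)):
            cost = unaries[i].copy()
            for j in adjacency.get(i, []):
                cost += smoothness * (np.arange(NUM_CLASSES) != labels[j])
            labels[i] = np.argmin(cost)
    return labels
```

In use, each new frame would call new_voxels against the map's occupied-voxel set, over-segment the returned points into supervoxels (e.g., with a VCCS-style method), build unaries with unary_from_frames, and fuse the labels returned by infer_labels into the map via the frame's rigid transformation.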

