
Spatial Semantic Network Implementation Algorithms Based on Binocular Vision
Abstract: Environmental perception is a vital part of autonomous driving. The currently dominant perception sensor, LiDAR, suffers from high cost and limited information content. Based on deep learning, a jointly trained network called the spatial semantic network (SSN) is proposed, which performs image segmentation and binocular stereo estimation simultaneously. Through spatial mapping, the SSN takes binocular images as input and outputs semantic point clouds. The SSN was trained on the KITTI dataset and validated on the KITTI test set. Results show that the image segmentation accuracy reaches 82.5%; for near points, with a stereo estimation error within 5% judged as accurate, the stereo estimation accuracy reaches 95.5%. The processing speed reaches 0.135 s per frame, generating about 48,000 semantic point cloud coordinates per frame, which approaches the real-time requirement under low-speed conditions and indicates strong practical application value.
Authors: GONG Zhangpeng; WANG Guoye; PENG Sijie (College of Engineering, China Agricultural University, Beijing 100083, China)
Source: Transactions of the Chinese Society for Agricultural Machinery (EI, CAS, CSCD, Peking University Core), 2019, No. B07 (Supplement), pp. 324-330 (7 pages)
Funding: National Natural Science Foundation of China (51775548)
Keywords: spatial semantic network; deep learning; image segmentation; stereo estimation; binocular images; semantic point clouds
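The paper itself is not reproduced in this record, but the spatial-mapping step described in the abstract, i.e. turning a per-pixel disparity estimate plus per-pixel semantic labels into a semantic point cloud, can be sketched with standard rectified-stereo geometry (Z = f·B/d). This is a minimal illustrative sketch only, not the authors' implementation; the function name, parameter names (fx, baseline, cx, cy), and the square-pixel assumption are all hypothetical:

```python
import numpy as np

def disparity_to_semantic_points(disparity, labels, fx, baseline, cx, cy):
    """Reproject a disparity map plus per-pixel class labels into a
    semantic point cloud of rows (X, Y, Z, class), assuming a rectified
    stereo pair with focal length fx (pixels), baseline (meters), and
    principal point (cx, cy). Pixels with non-positive disparity are
    discarded as invalid."""
    h, w = disparity.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = disparity > 0
    d = disparity[valid]
    z = fx * baseline / d              # depth from disparity: Z = f*B/d
    x = (us[valid] - cx) * z / fx      # lateral offset in camera frame
    y = (vs[valid] - cy) * z / fx      # vertical offset (square pixels assumed)
    cls = labels[valid].astype(float)  # carry the semantic label with each point
    return np.column_stack([x, y, z, cls])
```

With a segmentation head supplying `labels` and a stereo head supplying `disparity` for the same image, each valid pixel yields one labeled 3D coordinate, which matches the abstract's description of roughly 48,000 semantic point cloud coordinates per frame.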
