

Anchor Frame Calibration and Spatial Position Information Compensation for Street Scene Video Instance Segmentation
Abstract: Street-scene video instance segmentation is one of the key problems in self-driving research, as it provides a decision-making basis for vehicle environment perception and path planning. Existing methods suffer from insufficient edge feature extraction, caused by applying a single receptive field to sample anchor frames of multiple aspect ratios, and from a lack of spatially detailed position information in the high levels of the feature pyramid. To alleviate these problems, we propose an Anchor frame calibration and Spatial position information compensation network for Video Instance Segmentation (AS-VIS). First, an anchor frame calibration module is added to the three prediction-head branches to match multi-type receptive field sampling to the aspect ratios of the anchor frames, resolving the insufficient extraction of target edges. Second, a multi-receptive-field subsampling module is designed to fuse the features sampled under the various receptive fields, reducing the information lost by conventional downsampling. Finally, the multi-receptive-field subsampling module is applied to embed activated target-region features from the low levels of the feature pyramid into the high levels, compensating for the missing spatial detail. A street-scene video dataset is extracted from the Youtube-VIS benchmark, comprising 329 training videos and 53 validation videos. Quantitative comparison with YolactEdge shows that anchor frame calibration improves average detection and segmentation precision by 8.63% and 5.09% respectively, the spatially compensated feature pyramid improves them by 7.76% and 4.75%, and AS-VIS overall improves them by 9.26% and 6.46%. The proposed AS-VIS performs instance-level detection, tracking, and segmentation synchronously on street-scene video sequences, and provides an effective theoretical basis for the environment perception of self-driving vehicles.
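The abstract does not detail how the multi-receptive-field subsampling module fuses its samples; the following is a minimal conceptual sketch of the general idea, not the paper's implementation. It downsamples a 2D feature map with several window sizes (standing in for different receptive fields) and fuses the results by element-wise averaging; the function names, kernel sizes, and fusion rule are all illustrative assumptions.

```python
# Conceptual sketch only: fuse several receptive-field sizes during a
# stride-2 downsampling step, so that detail lost by any single window
# size is partly retained in the fused output.

def avg_pool(feat, k, stride=2):
    """Average-pool a 2D feature map (list of lists) with a k x k
    window and the given stride; windows are clipped at the border."""
    h, w = len(feat), len(feat[0])
    out = []
    for i in range(0, h - 1, stride):
        row = []
        for j in range(0, w - 1, stride):
            vals = [feat[y][x]
                    for y in range(i, min(i + k, h))
                    for x in range(j, min(j + k, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def multi_rf_downsample(feat, kernels=(2, 3, 5)):
    """Downsample with each receptive-field size, then fuse the
    results by element-wise averaging (one possible fusion rule)."""
    pooled = [avg_pool(feat, k) for k in kernels]
    h, w = len(pooled[0]), len(pooled[0][0])
    return [[sum(p[i][j] for p in pooled) / len(pooled)
             for j in range(w)] for i in range(h)]

feature_map = [[float(i * 8 + j) for j in range(8)] for i in range(8)]
fused = multi_rf_downsample(feature_map)
print(len(fused), len(fused[0]))  # an 8 x 8 input yields a 4 x 4 output
```

In a real network the pooling branches would be learned convolutions and the fusion would typically be a weighted combination rather than a plain mean; the sketch only illustrates why sampling the same location at several receptive-field sizes preserves more context than a single fixed window.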
Authors: 张印辉 (ZHANG Yin-hui), 赵崇任 (ZHAO Chong-ren), 何自芬 (HE Zi-fen), 杨宏宽 (YANG Hong-kuan), 黄滢 (HUANG Ying) — Department of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming, Yunnan 650500, China
Source: Acta Electronica Sinica (《电子学报》), 2024, No. 1, pp. 94-106 (13 pages). Indexed in EI, CAS, CSCD; Peking University core journal.
Fund: National Natural Science Foundation of China (No. 62061022, No. 62171206).
Keywords: street scene; video instance segmentation; anchor frame calibration; spatial information compensation; self-driving vehicle
