Abstract
To improve the accuracy of visual moving-object detection for autonomous vehicles, this paper proposes a detection method that combines deep learning with sparse scene flow. First, the deep learning network SOLO V2 segments the traffic scene and extracts potential moving objects such as pedestrians and vehicles, narrowing the search range for moving objects in the scene. Second, the camera ego-motion parameters are estimated from feature-matched points in the background; on this basis, the feature-point coordinates of each potential moving region in two consecutive frames are mapped into a common coordinate system, and the sparse scene flow produced solely by the object's own motion is computed. Finally, because the scene-flow estimation error differs from object to object, the uncertainty of each object's scene-flow estimate is computed, and an independent adaptive threshold is set for each object to judge its motion state. Tests on the KITTI dataset show that the proposed algorithm markedly improves moving-object detection accuracy: its precision and recall reach 92.3% and 94.4% on one test set, and 87.4% and 95.1% on the other.
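The per-object uncertainty and adaptive-threshold motion judgment described above can be sketched roughly as follows. This is a minimal illustration only: the function name, the threshold form `base + k·σ`, and all parameter values are assumptions for demonstration, not the paper's actual formulation.

```python
import numpy as np

def classify_motion(flow_residuals, base=0.05, k=3.0):
    """Decide whether one segmented target is moving (hypothetical sketch).

    flow_residuals: (N, 3) array of per-feature-point scene-flow vectors for a
    single target, after camera ego-motion has been compensated, so the flow
    should be near zero for a static object and reflect only the object's own
    motion otherwise.
    base: assumed noise floor on flow magnitude (metres, illustrative value).
    k: assumed multiplier on the per-target uncertainty.
    """
    magnitudes = np.linalg.norm(flow_residuals, axis=1)
    mean_flow = magnitudes.mean()
    # Per-target uncertainty: the spread of this target's own flow estimates.
    uncertainty = magnitudes.std(ddof=1) if len(magnitudes) > 1 else 0.0
    # Independent adaptive threshold, scaled by the target's own uncertainty.
    threshold = base + k * uncertainty
    return mean_flow > threshold, mean_flow, threshold
```

Because the threshold grows with each target's own estimation uncertainty, a target whose flow is noisy needs a larger mean flow to be declared moving, which is the intuition behind using an independent threshold per object rather than one global cutoff.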
Authors
LIU Mingwen; JIANG Tao; YUAN Jianying; GU Shuoxin; XU Zhiyong; LEI Ting (College of Automation, Chengdu University of Information Technology, Chengdu 610025, China)
Source
Journal of Chengdu University of Information Technology
2023, No. 4, pp. 381-386 (6 pages)
Funding
National Natural Science Foundation of China (62103064)
Natural Science Foundation of Sichuan Province (22NSFSC2317)
Sichuan Science and Technology Program (2021YFG0133, 2021YFG0295, 2021YF110069, 2021YFQ0057, 2022YFS0565, 2022YFN0020, 2021YFG0308)