
Research on Visual SLAM Algorithm Based on Improved SuperPoint Network
Abstract: To address the problem that traditional visual SLAM algorithms suffer from low pose-estimation accuracy, and even tracking failure, under viewpoint and illumination changes, and inspired by the strong robustness of the SuperPoint network in feature extraction, a visual SLAM algorithm based on a lightweight SuperPoint network (Light Weight SuperPoint network based visual SLAM, LWS-vSLAM) is proposed. First, to solve the loss of real-time performance caused by the heavy computation of the SuperPoint encoding layer, the lightweight LWS-NET feature extraction network is adopted; its encoding layer uses a lightweight attention model to downsample image features and thereby reduce computation. Second, to cope with the large number of mismatches under viewpoint and illumination changes, the interpolation computation of the LWS-NET feature detection and classification layer is used to select high-quality feature points in the image, and mismatches are rejected within regions centered on these high-quality points. Finally, the LWS-NET feature extraction and matching network is fused with the back-end nonlinear optimization, loop-closure correction and local mapping of ORB-SLAM2 to build a complete monocular LWS-vSLAM system. Simulation experiments on the public benchmark datasets TUM and KITTI show that the average per-frame running time of the algorithm is about 30% shorter than that of SuperPoint, and the trajectory error is 13.7% lower than that of ORB-SLAM2, which significantly improves localization accuracy under viewpoint and illumination changes.
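The abstract describes the encoding layer only as "a lightweight attention model that downsamples image features". The following PyTorch sketch illustrates one plausible reading of such a block: a strided-convolution downsampling stage gated by squeeze-and-excitation style channel attention. The module name `AttentionDownsample` and all hyperparameters are illustrative assumptions, not the authors' LWS-NET implementation.

```python
import torch
import torch.nn as nn

class AttentionDownsample(nn.Module):
    """Hypothetical encoder block: strided-conv downsampling gated by a
    lightweight squeeze-and-excitation channel attention (assumed design)."""
    def __init__(self, in_ch, out_ch, reduction=8):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # squeeze: global spatial average
            nn.Conv2d(out_ch, out_ch // reduction, 1),  # excitation: channel bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.down(x)          # halve spatial resolution
        return x * self.attn(x)   # reweight channels by attention scores

# toy usage: grayscale image -> half-resolution, attention-weighted feature map
feat = AttentionDownsample(1, 64)(torch.randn(1, 1, 480, 640))
print(feat.shape)  # torch.Size([1, 64, 240, 320])
```

Similarly, the abstract only states that mismatches are rejected within regions centered on high-quality feature points. The NumPy sketch below shows one simple interpretation, local displacement-consistency voting around high-scoring keypoints; the function name `region_reject`, the median-displacement criterion, and the thresholds are assumptions made for illustration.

```python
import numpy as np

def region_reject(kp1, kp2, matches, scores, radius=30.0, tol=3.0, q_thresh=0.5):
    """Keep only matches that are locally consistent around high-quality keypoints.

    kp1, kp2 : (N1, 2) and (N2, 2) pixel coordinates in the two images
    matches  : (M, 2) integer pairs (i, j) meaning kp1[i] <-> kp2[j]
    scores   : (N1,) detector confidence for kp1, used to pick region centres
    """
    matches = np.asarray(matches)
    disp = kp2[matches[:, 1]] - kp1[matches[:, 0]]     # per-match displacement
    keep = np.zeros(len(matches), dtype=bool)

    for c in kp1[scores > q_thresh]:                   # high-quality region centres
        near = np.linalg.norm(kp1[matches[:, 0]] - c, axis=1) < radius
        if near.sum() < 3:                             # too few matches to vote
            continue
        med = np.median(disp[near], axis=0)            # dominant local motion
        ok = np.linalg.norm(disp - med, axis=1) < tol  # matches agreeing with it
        keep |= near & ok

    return matches[keep]
```

Matches far from any high-quality keypoint, or disagreeing with the dominant motion of their region, are dropped; this is only a sketch of the stated idea, not the paper's exact rejection rule.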
Authors: FANG Xin; SUN Xinzhu; CHEN Mengyuan; CHEN Hebao (School of Electrical Engineering, Anhui Polytechnic University, Wuhu 241000, China; Key Laboratory of Advanced Perception and Intelligent Control of High-end Equipment, Wuhu 241000, China)
Source: Journal of Anhui Polytechnic University (CAS), 2023, No. 5, pp. 46-55 (10 pages)
Funding: National Natural Science Foundation of China (61903002); Collaborative Innovation Project of Anhui Universities (GXXT-2021-050)
Keywords: simultaneous localization and mapping; SuperPoint; feature extraction network; lightweight; mismatch rejection