Underwater visual-inertial odometry based on monocular direct method
Abstract: To apply visual-inertial odometry (VIO) effectively in underwater scenes, this paper proposes an underwater VIO system suitable for autonomous underwater vehicles (AUVs) performing close-range tasks. To address the lack of corner points and the large number of repetitive features in underwater environments, the visual front end uses a direct-method data association scheme that takes the gradient magnitude of pixels as the feature-extraction criterion. To ensure that enough valid feature points are extracted, the number of extracted feature points and the extraction threshold are adjusted dynamically. In addition, the visual state is estimated by minimizing the photometric error. Experiments on the AQUALOC dataset show that the direct method adopted by the proposed system achieves higher localization accuracy than ORB-SLAM3 in relatively harsh underwater environments and can build a relatively dense map.
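The two front-end steps summarized in the abstract, selecting pixels by gradient magnitude with a dynamically relaxed threshold and scoring a frame pair by photometric error, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, target counts, decay schedule, and the affine brightness parameters (a, b) are all illustrative assumptions.

```python
import numpy as np

def select_gradient_features(img, target_count=800, init_thresh=30.0,
                             decay=0.7, min_thresh=2.0):
    """Pick pixels whose gradient magnitude exceeds a threshold,
    relaxing the threshold until enough candidates are found
    (useful in texture-poor underwater frames)."""
    img = np.asarray(img, dtype=float)
    # Central-difference image gradients (interior pixels only).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = 0.5 * (img[:, 2:] - img[:, :-2])
    gy[1:-1, :] = 0.5 * (img[2:, :] - img[:-2, :])
    grad_mag = np.hypot(gx, gy)

    thresh = init_thresh
    while True:
        ys, xs = np.nonzero(grad_mag > thresh)
        if len(xs) >= target_count or thresh <= min_thresh:
            break
        thresh *= decay  # too few candidates: lower the threshold

    # Keep the strongest responses, at most target_count of them.
    order = np.argsort(grad_mag[ys, xs])[::-1][:target_count]
    return np.stack([xs[order], ys[order]], axis=1), thresh

def photometric_error(ref_img, cur_img, pts_ref, pts_cur, a=1.0, b=0.0):
    """Brightness residuals between reference pixels and their
    correspondences in the current frame, with a simple affine
    brightness model (a, b) as an assumed illumination correction."""
    i_ref = np.asarray(ref_img, dtype=float)[pts_ref[:, 1], pts_ref[:, 0]]
    i_cur = np.asarray(cur_img, dtype=float)[pts_cur[:, 1], pts_cur[:, 0]]
    return i_cur - (a * i_ref + b)
```

In a full direct-method VIO, `pts_cur` would come from reprojecting `pts_ref` through the estimated pose and depth, and the pose would be optimized to minimize the sum of squared residuals returned by `photometric_error`.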
Authors: ZHAO Hong-quan; XU Hui-xi (State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China; Key Laboratory of Marine Robotics of Liaoning Province, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Source: Ship Science and Technology (Peking University Core Journal), 2022, Issue 5, pp. 65-69 (5 pages)
Funding: Strategic Priority Research Program (Category A) of the Chinese Academy of Sciences (XDA22040103)
Keywords: autonomous underwater vehicle; underwater localization; visual-inertial odometry; monocular direct method