
Coarse combined with precise indoor robot visual positioning method
(粗精结合的室内机器人视觉定位方法)
Cited by: 3
Abstract: To achieve accurate self-positioning of an indoor robot, a visual positioning method that fuses coarse matching with precise localization is proposed. One camera mounted on top of the robot observes the indoor ceiling, while four horizontally oriented cameras observe the surrounding environment; the collected environment features form a fingerprint database. During positioning, the surrounding-environment information of a test image is coarsely matched against the corresponding entries in the database to obtain the robot's rough position. An improved grid-based motion statistics (GMS) algorithm is then used to stitch the ceiling image of the test frame with the ceiling image associated with the coarse-match result, and the distance difference and deflection are computed to obtain the precise position. Experimental results show that the displacement error can be kept within 4 cm and the deflection-angle error within 2.4°. The method is noticeably robust to illumination changes, pedestrian intrusion and other disturbances, and its positioning accuracy is better than that of ORB-based visual localization and fingerprint localization.
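The fine-positioning step described in the abstract (matching ceiling features between the test image and the coarsely matched reference image, filtering the matches with GMS, and recovering the offset and deflection) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes OpenCV with the contrib modules (opencv-contrib-python), substitutes the stock cv2.xfeatures2d.matchGMS filter for the paper's improved GMS, and the function name estimate_offset_and_deflection and the image file names in the usage comment are illustrative only.

import math
import cv2
import numpy as np

def estimate_offset_and_deflection(test_ceiling, ref_ceiling):
    """Return ((dx, dy) pixel offset, deflection angle in degrees) between a
    grayscale test ceiling image and the coarsely matched reference ceiling image."""
    # Dense ORB features; GMS works best with a large number of keypoints.
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(test_ceiling, None)
    kp2, des2 = orb.detectAndCompute(ref_ceiling, None)

    # Brute-force Hamming matching, then GMS grid-based statistical filtering.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    raw_matches = matcher.match(des1, des2)
    h1, w1 = test_ceiling.shape[:2]
    h2, w2 = ref_ceiling.shape[:2]
    gms_matches = cv2.xfeatures2d.matchGMS(
        (w1, h1), (w2, h2), kp1, kp2, raw_matches,
        withRotation=True, thresholdFactor=6)

    src = np.float32([kp1[m.queryIdx].pt for m in gms_matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in gms_matches])

    # Partial affine (translation + rotation + scale) between the two ceiling views.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        raise RuntimeError("not enough reliable matches between ceiling images")
    dx, dy = M[0, 2], M[1, 2]
    deflection = math.degrees(math.atan2(M[1, 0], M[0, 0]))
    return (dx, dy), deflection

# Hypothetical usage:
# test = cv2.imread("test_ceiling.png", cv2.IMREAD_GRAYSCALE)
# ref  = cv2.imread("ref_ceiling.png", cv2.IMREAD_GRAYSCALE)
# (dx, dy), angle = estimate_offset_and_deflection(test, ref)

Note that the sketch returns a pixel offset; converting it to a metric displacement would additionally require the camera intrinsics and the camera-to-ceiling distance, which the paper's setup provides but the sketch leaves out.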
Authors: TANG Guo-dong; FANG Ming; LEI Li-hong (School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; School of Artificial Intelligence, Changchun University of Science and Technology, Changchun 130022, China)
Source: Computer Engineering and Design (《计算机工程与设计》, Peking University Core Journal), 2021, No. 3, pp. 805-813 (9 pages)
Funding: Foundation project of the Science and Technology Department of Jilin Province (20180201042GX)
Keywords: indoor location; feature matching optimization; image stitching; feature extraction; fingerprint information

