Abstract
To achieve accurate indoor self-localization for a robot, a visual positioning method is proposed that fuses coarse matching with fine positioning. One camera mounted on top of the wheeled robot observes the indoor ceiling, while four horizontally oriented cameras observe the surrounding environment; the collected environmental features form a fingerprint information base. During positioning, the surrounding-environment information of a test image is coarsely matched against the corresponding entries in the fingerprint base to obtain the robot's rough position. An improved grid-based motion statistics (GMS) algorithm then stitches the ceiling image of the test picture with the ceiling image of the coarse-match result, and the distance difference and deflection angle between them are computed to obtain the exact position. Experimental results show that the displacement error can be kept within 4 cm and the deflection-angle error within 2.4°. The method is clearly robust to illumination changes, pedestrian intrusion and similar disturbances, and its positioning accuracy is better than that of ORB visual localization and fingerprint localization.
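The two-stage pipeline summarized above can be sketched in outline. The snippet below is a minimal illustration, not the paper's implementation: it assumes the fingerprint base is a list of `(position, descriptor)` pairs with fixed-length feature vectors (a hypothetical format), does the coarse stage as a nearest-neighbour search, and does the fine stage as a 2-D rigid fit (rotation plus translation) over matched ceiling keypoints, standing in for the improved-GMS stitching step.

```python
import math

def rough_match(test_fp, fingerprint_db):
    """Coarse stage: nearest-neighbour search over the fingerprint base.

    fingerprint_db is a list of (position, descriptor) pairs; descriptors
    are equal-length feature vectors (hypothetical format).
    """
    best_pos, best_dist = None, float("inf")
    for pos, desc in fingerprint_db:
        d = sum((a - b) ** 2 for a, b in zip(test_fp, desc))
        if d < best_dist:
            best_pos, best_dist = pos, d
    return best_pos

def rigid_offset(src_pts, dst_pts):
    """Fine stage: least-squares 2-D rigid fit between matched ceiling
    keypoints, yielding the deflection angle and the distance difference
    (translation) between the two views.
    """
    n = len(src_pts)
    cx_s = sum(p[0] for p in src_pts) / n
    cy_s = sum(p[1] for p in src_pts) / n
    cx_d = sum(p[0] for p in dst_pts) / n
    cy_d = sum(p[1] for p in dst_pts) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src_pts, dst_pts):
        sxx += (xs - cx_s) * (xd - cx_d) + (ys - cy_s) * (yd - cy_d)
        sxy += (xs - cx_s) * (yd - cy_d) - (ys - cy_s) * (xd - cx_d)
    theta = math.atan2(sxy, sxx)  # deflection angle, radians
    # Translation mapping the rotated source centroid onto the target's.
    tx = cx_d - (cx_s * math.cos(theta) - cy_s * math.sin(theta))
    ty = cy_d - (cx_s * math.sin(theta) + cy_s * math.cos(theta))
    return theta, (tx, ty)
```

In the real system the keypoint correspondences would come from the GMS-filtered feature matches rather than being given directly, and the rough-matched database entry supplies the reference ceiling image.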
Authors
TANG Guo-dong, FANG Ming, LEI Li-hong (School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; School of Artificial Intelligence, Changchun University of Science and Technology, Changchun 130022, China)
Source
Computer Engineering and Design (《计算机工程与设计》), a Peking University Core journal, 2021, Issue 3, pp. 805-813 (9 pages)
Funding
Foundation project of the Science and Technology Department of Jilin Province (20180201042GX).
Keywords
indoor positioning
feature matching optimization
image stitching
feature extraction
fingerprint information