Abstract
When robots use visual sensors for indoor positioning, they face challenges such as computationally intensive processing, the construction and maintenance of relatively large maps, and sensitivity to ambient lighting. To address these problems, an indoor monocular visual positioning algorithm for robots based on an end-to-end model is proposed. First, images of the robot's motion scene are collected, and each image is labelled with the robot's three-dimensional coordinates in the world coordinate system at the moment of capture. Second, the motion-scene images and their labels are fed into a neural network to obtain a network model of the robot's motion scene. Finally, while the robot is moving, a query image is input into the network model, which predicts and outputs the robot's current spatial position, yielding an end-to-end positioning system. Experimental comparison with the positioning module of traditional visual SLAM shows that the proposed algorithm reduces positioning error by more than 38%. Moreover, because positioning relies on a trained neural network model, no environment map needs to be built: only about 50 MB of storage is required, and a position can be computed in 5 ms. The proposed algorithm improves the speed and accuracy of indoor robot positioning while reducing the storage and computation required on the robot side.
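The pipeline the abstract describes (labelled image in, 3-D world coordinate out, trained with a regression loss) can be sketched as a small position-regression CNN. The layer sizes, the 224×224 input, and the `PoseRegressor` name below are illustrative assumptions for the sketch, not the paper's actual network architecture.

```python
import torch
import torch.nn as nn


class PoseRegressor(nn.Module):
    """Minimal CNN mapping an RGB image to a 3-D world coordinate (x, y, z)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> one 32-d vector
        )
        self.head = nn.Linear(32, 3)        # regress (x, y, z)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


model = PoseRegressor()
images = torch.randn(4, 3, 224, 224)        # a batch of scene images
targets = torch.randn(4, 3)                 # their labelled world coordinates

# Training step: minimise the distance between predicted and labelled positions.
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()

# At query time, a single forward pass yields the estimated position.
position = model(images[:1])
print(position.shape)
```

At inference the robot only stores the trained weights and runs one forward pass per query image, which is why the approach needs no explicit environment map.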
Authors
谢非
吴俊
黄磊
赵静
刘锡祥
钱伟行
XIE Fei; WU Jun; HUANG Lei; ZHAO Jing; LIU Xixiang; QIAN Weixing (School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210023, China; School of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China; College of Automation and College of Artificial Intelligence, Nanjing 210023, China; College of Instrument Science & Engineering, Southeast University, Nanjing 210096, China; Nanjing Institute of Intelligent High-end Equipment Industry Limited Company, Nanjing 210042, China)
Source
《中国惯性技术学报》
EI
CSCD
Peking University Core Journal (北大核心)
2020, No. 4, pp. 493-498, 560 (7 pages in total)
Journal of Chinese Inertial Technology
Funding
National Key R&D Program of China (2017YFB1103200)
National Natural Science Foundation of China (61601228, 41974033)
Natural Science Foundation of Jiangsu Province (BK20180726)
Natural Science Foundation of the Jiangsu Higher Education Institutions (17KJB510031)