Abstract
Traditional visual simultaneous localization and mapping (SLAM) systems are designed under the assumption of a static environment. In dynamic environments, moving objects cause feature-matching failures, which in turn degrade pose estimation. To address this, a visual SLAM algorithm combined with a convolutional neural network is proposed. A dynamic-object detection thread based on a convolutional neural network with an attention mechanism is added to the front end of the RGB-D mode of ORB-SLAM2; feature points falling in dynamic-object regions are discarded during extraction, and the remaining static feature points are used to estimate the camera pose accurately. Experiments on the TUM dynamic datasets show that, over multiple runs, the improved algorithm raises pose accuracy by more than 90% compared with the original algorithm while still meeting real-time requirements.
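The dynamic-region filtering step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the CNN detection thread has already produced bounding boxes of dynamic objects (e.g., people), extracts ORB features with OpenCV, and discards feature points that fall inside those boxes so that only static feature points remain for pose estimation. The function and parameter names (filter_dynamic_keypoints, dynamic_boxes) are hypothetical.

```python
# Minimal sketch: remove ORB feature points lying inside detected dynamic-object
# regions; the remaining static points would then be passed to pose estimation.
# The dynamic_boxes input stands in for the output of the CNN detection thread.
import cv2
import numpy as np

def filter_dynamic_keypoints(gray, dynamic_boxes, n_features=1000):
    """Extract ORB features and keep only those outside dynamic regions.

    gray          : single-channel image (np.uint8)
    dynamic_boxes : list of (x1, y1, x2, y2) boxes for detected dynamic objects
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], None

    keep_kp, keep_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = kp.pt
        in_dynamic = any(x1 <= x <= x2 and y1 <= y <= y2
                         for (x1, y1, x2, y2) in dynamic_boxes)
        if not in_dynamic:  # keep only static-scene feature points
            keep_kp.append(kp)
            keep_desc.append(desc)

    return keep_kp, (np.array(keep_desc) if keep_desc else None)
```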
Authors
CHEN Mingqiang, LI Qifeng, FENG Shujuan, XU Kaijun (School of Flight Technology, Civil Aviation Flight University of China, Guanghan 618307)
Source
Computer & Digital Engineering (《计算机与数字工程》)
2024, No. 5, pp. 1529-1535 (7 pages)
Funding
Independent research project of the Key Laboratory of Flight Techniques and Flight Safety (No. FZ2021ZZ06).
Supported by the High-Quality Civil-Aviation-Featured "Transportation" Master's Professional Degree Platform System Construction Project (No. MHJY2022001).
Keywords
simultaneous localization and mapping
deep learning
pose estimation
dynamic scene
target detection