
Loose coupling filtering algorithm for mobile phone vision and inertial fusion
Cited by: 1
Abstract: With the widespread adoption of smartphones and the Internet, the demand for high-precision positioning has become increasingly pronounced, and accurate positioning services now reach many fields, such as the Internet of Things, autonomous driving, delivery robots, and emergency rescue. In outdoor environments, most of these services are provided by global navigation satellite systems; however, in GNSS-denied settings such as deep-mountain forests, mine tunnels, and basements, GPS cannot work properly because of signal attenuation and multipath effects. Targeting such indoor scenes, this paper studies a visual-inertial fusion navigation method based on loosely coupled filtering and designs an indoor pedestrian positioning system for the smartphone platform. The visual front end uses a fast and robust sparse direct method, and the back end uses an extended Kalman filter to fuse inertial information. The system effectively integrates visual and inertial measurements, recovers the metric scale of monocular vision, improves robustness, and achieves high-precision indoor pedestrian localization.
Authors: LIU Xing (刘星); GUO Hang (郭杭), College of Information Engineering, Nanchang University, Nanchang 330031, China
Source: Bulletin of Surveying and Mapping (《测绘通报》), CSCD, Peking University Core Journal, 2020, No. 2, pp. 61-65 (5 pages)
Funding: Key Research and Development Program of the Ministry of Science and Technology (2016YFB0502002); National Natural Science Foundation of China (41764002, 41374039).
Keywords: loose coupling filter; pedestrian navigation; extended Kalman filter; visual odometry; smartphone
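
The abstract only sketches the fusion architecture at a high level: a sparse direct visual front end and an EKF back end that fuses inertial data in a loosely coupled fashion to recover the monocular scale. As a rough, hedged illustration of that idea, the Python sketch below propagates position and velocity from IMU acceleration and corrects them with an up-to-scale visual odometry position while estimating the scale factor as an extra state. The class name, state layout, and noise values are assumptions made for this example and do not reproduce the paper's actual filter.

```python
import numpy as np

class LooselyCoupledVIEKF:
    """Minimal sketch of a loosely coupled visual-inertial EKF.

    Assumptions (illustrative, not the paper's implementation):
      - the monocular front end outputs an up-to-scale camera position p_vo,
      - IMU acceleration is already expressed in the world frame with gravity removed,
      - the state is [position (3), velocity (3), log of the metric scale (1)].
    """

    def __init__(self, accel_noise=0.5, vo_noise=0.05):
        self.x = np.zeros(7)                    # [px, py, pz, vx, vy, vz, log_scale]
        self.P = np.eye(7) * 1e-2               # state covariance
        self.q_a = accel_noise ** 2             # accelerometer noise variance (per axis)
        self.R_vo = np.eye(3) * vo_noise ** 2   # VO measurement noise covariance

    def predict(self, accel_world, dt):
        """Propagate position and velocity with IMU acceleration; the scale state stays constant."""
        p, v = self.x[0:3], self.x[3:6]
        self.x[0:3] = p + v * dt + 0.5 * accel_world * dt ** 2
        self.x[3:6] = v + accel_world * dt
        F = np.eye(7)
        F[0:3, 3:6] = np.eye(3) * dt
        G = np.zeros((7, 3))
        G[0:3, :] = np.eye(3) * (0.5 * dt ** 2)
        G[3:6, :] = np.eye(3) * dt
        self.P = F @ self.P @ F.T + G @ G.T * self.q_a

    def update_vo(self, p_vo):
        """EKF update with an up-to-scale VO position: h(x) = p / s, with s = exp(log_scale)."""
        s = np.exp(self.x[6])
        p = self.x[0:3]
        h = p / s
        H = np.zeros((3, 7))
        H[:, 0:3] = np.eye(3) / s
        H[:, 6] = -p / s                        # derivative of p/s w.r.t. log_scale
        y = p_vo - h                            # innovation
        S = H @ self.P @ H.T + self.R_vo
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(7) - K @ H) @ self.P
```

In a real pipeline, the prediction step would run at IMU rate and the update at camera frame rate; this sketch omits orientation, IMU biases, and outlier handling, all of which a practical smartphone implementation would need.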
