

Real-Time Image Tracking Using Layer Partitioning Management
Abstract: To achieve fast matching between feature points in the current camera image and those in partial regions of the template image at the corresponding scale, and to address the accuracy and efficiency of feature matching in image tracking, an image tracking algorithm that manages feature points by scale layer and region is proposed. In the preprocessing stage, scale-space layers of the template image are constructed and the image at each scale is partitioned into rectangular regions. In each region, ORB (oriented FAST and rotated BRIEF) and Harris feature points are extracted, and a bag-of-words (BoW) feature vector is computed from the region's ORB descriptors; each region's position, BoW vector and Harris points together form a key-frame structure, establishing the layered, region-based management of the template's feature points. In the real-time tracking stage, three branches are distinguished according to the state of camera pose tracking: predictive tracking, relocalisation tracking and optical-flow tracking. In predictive tracking and relocalisation tracking, the scale layer and regions of the template image corresponding to the live image are first located quickly using the key-frame information; the camera position and orientation are then computed in real time from local matching between feature points of the live image and those in the matching regions at the corresponding scale. In optical-flow tracking, the tracked points are refreshed with the Harris points stored in the key frames, extending the duration over which optical-flow tracking can run. The algorithm was compared with five existing algorithms (FLISA, IFLISA, ORB, FREAK and BRISK) on a mobile device, using template images of different resolutions from a public image database (the Stanford mobile visual search dataset). The experimental results show that the algorithm performs robustly, with a registration error within one pixel, and that the system runs at a stable 20-30 frames per second.
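The preprocessing stage described in the abstract (scale-space layers, rectangular region partitioning, per-region ORB and Harris extraction, and a BoW vector per region) can be illustrated with the following minimal Python/OpenCV sketch. This is not the authors' implementation: the function name build_region_keyframes, the layer/grid/vocabulary parameters, and the use of k-means with L2 distances for the visual vocabulary (a real system would more likely use a Hamming-distance vocabulary for binary ORB descriptors) are illustrative assumptions.

```python
# Illustrative sketch of the preprocessing stage: build scale-space layers of a
# template image, partition each layer into rectangular regions, extract ORB and
# Harris key points per region, and compute a bag-of-words vector from the ORB
# descriptors. All names and parameters below are assumptions for illustration.
import cv2
import numpy as np

def build_region_keyframes(template_gray, num_layers=4, grid=(4, 4), vocab_k=64):
    """Return per-region 'key frame' records: layer index, region rectangle,
    BoW histogram over ORB descriptors, and Harris corner positions."""
    orb = cv2.ORB_create(nfeatures=500)

    # 1) Build the scale-space layers of the template image.
    layers, img = [], template_gray
    for _ in range(num_layers):
        layers.append(img)
        img = cv2.pyrDown(img)  # next (coarser) scale-space layer

    # 2) Partition each layer into rectangular regions; extract ORB + Harris points.
    records, all_desc = [], []
    for s, layer in enumerate(layers):
        h, w = layer.shape[:2]
        rh, rw = h // grid[0], w // grid[1]
        for i in range(grid[0]):
            for j in range(grid[1]):
                region = layer[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
                _, desc = orb.detectAndCompute(region, None)
                # Harris corners: later used to refresh optical-flow tracking points.
                harris_pts = cv2.goodFeaturesToTrack(
                    region, maxCorners=100, qualityLevel=0.01,
                    minDistance=5, useHarrisDetector=True)
                records.append({"layer": s, "rect": (j * rw, i * rh, rw, rh),
                                "orb_desc": desc, "harris": harris_pts})
                if desc is not None:
                    all_desc.append(desc)

    # 3) Train a small visual vocabulary (k-means over all ORB descriptors) and
    #    compute one normalized BoW histogram per region.
    all_desc = np.vstack(all_desc).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _compactness, _labels, centers = cv2.kmeans(
        all_desc, vocab_k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    for rec in records:
        hist = np.zeros(vocab_k, dtype=np.float32)
        if rec["orb_desc"] is not None:
            d = rec["orb_desc"].astype(np.float32)
            # Assign each descriptor to its nearest visual word (L2 for simplicity).
            dists = np.linalg.norm(d[:, None, :] - centers[None, :, :], axis=2)
            for word_id in dists.argmin(axis=1):
                hist[word_id] += 1
            hist /= max(hist.sum(), 1)
        rec["bow"] = hist
    return records
```

In the real-time stage described in the abstract, such per-region BoW vectors would let the tracker compare a live image against these records to locate the matching scale layer and regions before doing local feature matching, while the stored Harris points are what the optical-flow branch uses to refresh its tracked points.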
Authors: Sun Yankui (孙延奎); Miao Jinghua (苗菁华) — Department of Computer Science and Technology, Tsinghua University, Beijing 100084
Source: Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》), 2018, No. 4, pp. 611-617 (7 pages). Indexed in EI, CSCD, Peking University Core.
Funding: National Natural Science Foundation of China (61671272); National Key R&D Program of China (2016YFB1000602); National High-Tech R&D Program of China (863 Program, 2013AA013702)
Keywords: camera pose tracking; feature matching; mobile device; relocalisation; optical flow tracking
