
Local Patch Tracking Algorithm Based on Fast Fourier Transform (cited by: 4)
Abstract: To address appearance change, partial occlusion, and background distraction in visual tracking, this paper proposes a local patch tracking algorithm based on the Fast Fourier Transform (FFT). Tracking precision is improved by building a kernel ridge regression model for each object patch and performing an exhaustive patch search through a circulant structure matrix, while tracking efficiency is improved by moving the time-domain computations into the frequency domain with the FFT. First, the patch kernel ridge regression models are built on the initial tracking region containing the object. Then, an exhaustive patch search based on the circulant structure matrix is proposed, and a model of the positional relations between patches in adjacent frames is constructed. Finally, the object position is estimated accurately from this positional-relation model and the patch models are updated. Experimental results show that the proposed algorithm is more robust to appearance change, partial occlusion, and background distraction, and also tracks in real time.
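The abstract names three frequency-domain building blocks: a kernel ridge regression model per patch, an exhaustive search over all cyclic shifts expressed through a circulant structure matrix, and FFT-based evaluation. Below is a minimal, illustrative Python sketch of that core machinery in the style of circulant-structure kernel trackers; the Gaussian kernel, the learning-rate update, and all function names are assumptions made for illustration, not the paper's implementation.

import numpy as np

def gaussian_shaped_label(shape, sigma=2.0):
    # Desired regression target: a Gaussian peak, rolled so the maximum sits at (0, 0).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-0.5 * ((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / sigma ** 2)
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def kernel_correlation(x, z, sigma=0.5):
    # Gaussian kernel evaluated against every cyclic shift of x, via one FFT instead of an explicit loop.
    cross = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))).real
    dist2 = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * cross) / x.size
    return np.exp(-np.maximum(dist2, 0.0) / sigma ** 2)

def train_patch_model(patch, y, lam=1e-4):
    # Kernel ridge regression over the circulant sample matrix; the solution is diagonal in the Fourier domain.
    k = kernel_correlation(patch, patch)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect_patch_shift(alpha_f, model_patch, search_patch):
    # Evaluate the regressor on all cyclic shifts of the search window; the response peak gives the patch displacement.
    k = kernel_correlation(search_patch, model_patch)
    response = np.fft.ifft2(alpha_f * np.fft.fft2(k)).real
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    h, w = response.shape
    if dy > h // 2:  # shifts beyond half the window wrap around
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx, response.max()

def update_patch_model(alpha_f_old, alpha_f_new, rate=0.02):
    # Simple running-average model update; the rate is an assumed illustrative value.
    return (1.0 - rate) * alpha_f_old + rate * alpha_f_new

Per-patch displacements and response peaks obtained this way could then feed the positional-relation model across adjacent frames to estimate the overall object location; that aggregation step is specific to the paper and is not reproduced here.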
Source: Journal of Electronics & Information Technology (《电子与信息学报》; EI, CSCD, Peking University Core Journal), 2015, No. 10, pp. 2397-2404 (8 pages).
Funding: National Natural Science Foundation of China (61175029, 61473309); Natural Science Foundation of Shaanxi Province (2011JM8015).
Keywords: visual tracking; kernel ridge regression model; Fast Fourier Transform (FFT); patch exhaustive search; position model