
Online integrative template-based model representation for visual object tracking
(在线复合模板模型表示的视觉目标跟踪; cited by 3)
Abstract  Objective: In visual object tracking, the target is often affected by various complex disturbances from itself or the scene, which makes it highly challenging to capture the information of the target of interest correctly. In particular, the template data used by the tracker are mainly obtained through online learning, and the reliability of these data directly affects the accuracy of the appearance-model representation of the candidate samples. To address the problems of target template learning and candidate appearance-model representation in visual object tracking, this paper adopts an effective template organization strategy and a more accurate model representation technique, and proposes a novel visual object tracking algorithm. Method: In the tracking framework, the appearance-model representation of a candidate sample is formulated as a linear regression problem composed of a set of integrative templates and a minimum reconstruction error. First, a set of low-dimensional subspace basis vectors (positive templates) is learned from high-dimensional online data with the classical incremental principal component analysis (PCA) algorithm, and some special negative samples are drawn online in real time according to the tracking result at the previous moment to expand the target template data. The newly organized template basis vectors, together with independent and identically distributed Gaussian–Laplacian mixture noise, are then used to linearly fit the candidate target appearance model. Finally, the maximum likelihood between each candidate sample and the real target is estimated, so that the tracker can accurately capture the state of the real target at every moment. Result: Experimental results on several widely recognized test video sequences show that, in terms of target template learning and candidate appearance-model representation, the proposed algorithm reflects the complex changes of the target state in video scenes more accurately and effectively than comparable methods, better resolves the model-degradation and tracking-drift problems caused by various uncertain disturbances, and achieves the same or even higher tracking accuracy than several excellent algorithms of the same type. Conclusion: The proposed algorithm learns accurate target templates online and updates them periodically, so that the tracker adapts well to visual changes caused by intrinsic or extrinsic factors (pose, illumination, occlusion, scale, background clutter, motion blur, etc.), always remains in its best state, and represents candidate appearance models more reliably and accurately, thereby exhibiting more robust performance.
Objective: Visual object tracking is a process that continuously infers the state of a target from unconstrained scenes. It is commonly formulated as a searching (or classification) problem that aims to identify the candidate that best matches the target template as the tracking result. The target template is maintained over time and updated online once the tracking result is available. Prior to tracking at the current time, a set of candidates is sampled around the state of the target at the previous time. Both the target template and the candidates are represented by an appearance model. A target searching strategy is then employed to find the candidate that best matches the template as the tracking result. Although several excellent visual tracking methods exist, the area remains a challenging research topic because of unresolved issues arising from both template learning and appearance modeling. From the point of view of appearance modeling, exploiting representative templates from online data is the core problem and plays a key role in complex scenes where the target state changes significantly over time.
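The template-plus-noise appearance model described above can be sketched as a least-squares fit over a learned basis: each candidate is scored by how well the template subspace reconstructs it, and the best-explained candidate is kept. The basis `U`, the dimensions, and the synthetic candidates below are illustrative stand-ins, not the paper's actual data or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-dimensional template basis U for d-dimensional patches;
# its columns stand in for positive templates learned online (e.g. by
# incremental PCA). Dimensions here are chosen only for illustration.
d, k = 64, 8
U, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal columns

def reconstruction_error(y, U):
    """Least-squares fit y ~ U z; return the residual norm (smaller = better match)."""
    z = U.T @ y                  # projection coefficients (U has orthonormal columns)
    return np.linalg.norm(y - U @ z)

# Candidates sampled around the previous target state: five near the template
# subspace, plus one off-subspace outlier playing the role of background.
candidates = [U @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
              for _ in range(5)]
candidates.append(rng.standard_normal(d))
errors = [reconstruction_error(y, U) for y in candidates]
best = int(np.argmin(errors))    # the tracker keeps the best-explained candidate
```

Under a Gaussian noise assumption, picking the candidate with the smallest residual is equivalent to maximizing the likelihood; the paper's Gaussian–Laplacian mixture would weight the residual differently, which this sketch does not model.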
Method: In the proposed tracking framework, several low-dimensional basis vectors called positive templates are learned from high-dimensional online data by using the online PCA algorithm. Several negative templates are then sampled according to the last tracking result. The most representative object templates are organized by combining the positive and negative templates, and the target candidate is represented through the online learned integrative templates with additive Gaussian–Laplacian noise. Finally, the maximum likelihood between the target candidate and the real object is estimated, so the tracker can capture accurate information on the real object in each frame. A reasonable template update strategy is used to enhance the object templates during tracking. Result: Compared with approaches that learn only positive templates, the online integrative templates exploit more comprehensive information on the target object, because the online negative-template expansion sharpens the separation between the target candidate and the background data. In other words, the positive templates help the tracker find the most probable target, while the negative templates actively represent the background data and help the tracker avoid the drifting problem. The tracker thus retains a good capability to identify the most likely target candidate. Extensive experiments are conducted to validate the new algorithm. The tracker can learn comparative object templates and self-update at a fixed period, adapt well to variations caused by intrinsic or extrinsic factors (pose, illumination, occlusion, scaling, background clutter, motion blur, etc.), and maintain favorable performance. Conclusion: Although template learning with online PCA is a widely used feature-extraction method for computer vision problems (e.g., visual object tracking) and its learned templates contain representative information on the target object, these templates are not sufficiently representative on their own and need to be enhanced with additional information on the object to adapt to uncertain complex variations. In this paper, two core issues in visual object tracking, online template learning and appearance modeling, are studied. Detailed descriptions of an efficient template organization strategy and an accurate model representation technique are provided, and a novel visual object tracking framework is proposed. The proposed algorithm automatically exploits useful integrative templates of the object from online data and self-updates; hence, the model representation exhibits strong robustness and improved tracking accuracy. Experiments on many challenging image sequences demonstrate that the proposed method achieves the same or even better results than several state-of-the-art tracking algorithms.
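The online PCA template-learning step can be sketched with a minimal streaming update: a running mean and scatter matrix are updated batch by batch, and the top eigenvectors of the resulting covariance serve as the positive templates. The toy frames, dimensions, and Welford-style update below are illustrative assumptions, not the paper's incremental PCA implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for streamed image patches: frames that mostly live in a
# 5-dimensional subspace of a 64-dimensional patch space, plus small noise.
true_basis = rng.standard_normal((64, 5))
frames = rng.standard_normal((100, 5)) @ true_basis.T
frames += 0.05 * rng.standard_normal(frames.shape)

# Minimal online PCA: Welford-style running mean/scatter updates, consumed
# in batches as frames arrive during tracking.
n, mean, scatter = 0, np.zeros(64), np.zeros((64, 64))
for start in range(0, 100, 20):              # batches of 20 frames
    for x in frames[start:start + 20]:
        n += 1
        delta = x - mean
        mean += delta / n
        scatter += np.outer(delta, x - mean)  # uses pre- and post-update mean

cov = scatter / (n - 1)
evals, evecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
templates = evecs[:, -5:]                     # top-5 eigenvectors = positive templates
explained = evals[-5:].sum() / evals.sum()    # fraction of variance captured
```

Because the toy frames really do lie near a 5-dimensional subspace, the five learned templates capture nearly all of the variance; in the paper's setting these basis vectors would be combined with the sampled negative templates before representing candidates.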
Source: Journal of Image and Graphics (《中国图象图形学报》), CSCD, Peking University Core Journal, 2015, No. 9, pp. 1199-1211 (13 pages).
Funding: National Natural Science Foundation of China (Grant No. 61272220).
Keywords: online learning; integrative template; model representation; visual object tracking
