
Cited by: 2

Transductive Learning with Global and Local Constraints for Robust Visual Tracking
Abstract: Most object tracking methods assume brightness constancy or subspace constancy. In practice these assumptions are often violated, especially when both the foreground and the background change substantially, and the object is then lost. In this paper, the tracking problem is reformulated from the perspective of transductive learning, and a robust tracking method is proposed. For a good estimate, the current object state should not only fit the object model but also fall in the same cluster as the previous states. The object model imposes a global constraint on the tracking problem, while the previous results constrain the local distribution of states; together they define the cost function. Previous state estimates serve as labeled positive samples and the current candidate states as unlabeled samples. A graph is built over all samples as vertices, which simultaneously learns the object's global appearance model and the local intrinsic geometric (cluster) structure of all the patches. The cost function is then minimized in closed form by simple linear algebra using the graph Laplacian. The proposed method is tested on videos exhibiting large pose changes, expression changes, illumination changes, and partial occlusion, and is compared with state-of-the-art algorithms. Experimental results and comparative studies show that the proposed method handles these situations well and tracks the object robustly.
Source: Acta Automatica Sinica (《自动化学报》), EI / CSCD / Peking University Core, 2010, No. 8, pp. 1084-1090 (7 pages)
Keywords: robust tracking, graph Laplacian, transductive learning, global constraint, local constraint
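The closed-form solution the abstract mentions follows the standard graph-regularized transductive scheme (cf. the semi-supervised learning survey in reference 10): smooth a label function over an affinity graph built on labeled and unlabeled patches, with a fit term anchoring the labeled vertices. A minimal NumPy sketch is given below; the Gaussian affinity, the ridge weight `lam`, and the function name `transductive_scores` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def transductive_scores(labeled, candidates, sigma=1.0, lam=1.0):
    """Score candidate patches by transduction over a graph Laplacian.

    labeled    : (n_l, d) features of previous (positive) object patches
    candidates : (n_u, d) features of current candidate patches (unlabeled)
    Returns an (n_u,) score vector; the tracked state is the argmax.
    """
    # Stack labeled and unlabeled patches; all become graph vertices.
    X = np.vstack([labeled, candidates])
    n_l, n = labeled.shape[0], X.shape[0]

    # Gaussian affinity graph encoding the local cluster structure.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Global constraint: target value 1 on labeled patches, 0 elsewhere.
    y = np.zeros(n)
    y[:n_l] = 1.0

    # Minimize lam*||f - y||^2 + f^T L f  =>  (lam*I + L) f = lam*y.
    f = np.linalg.solve(lam * np.eye(n) + L, lam * y)
    return f[n_l:]  # scores for the candidate patches only
```

Candidates lying in the same cluster as the previous object states receive scores near the labeled value, while candidates far from that cluster decay toward zero, which is the "same cluster as the previous results" behavior the abstract describes.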

References (12)

1. Black M, Jepson A. Eigentracking: robust matching and tracking of articulated objects using a view-based representation. International Journal of Computer Vision, 1998, 26(1): 63-84.
2. Jepson A, Fleet D, El-Maraghi T. Robust online appearance models for visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(10): 1296-1311.
3. Comaniciu D, Ramesh V, Meer P. Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5): 564-575.
4. Wang H, Suter D, Schindler K, Shen C. Adaptive object tracking based on an effective appearance filter. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(9): 1661-1667.
5. Lin R, Ross D, Lim J, Yang M. Adaptive discriminative generative model and its applications. In: Proceedings of the Conference on Advances in Neural Information Processing Systems. Massachusetts, USA: The MIT Press, 2004. 801-808.
6. Nguyen H, Smeulders A. Robust tracking using foreground-background texture discrimination. International Journal of Computer Vision, 2006, 69(3): 277-293.
7. Zhang X, Hu W, Maybank S, Li X. Graph based discriminative learning for robust and efficient object tracking. In: Proceedings of the 11th International Conference on Computer Vision. Rio de Janeiro, Brazil: IEEE, 2007. 1-8.
8. Ross D, Lim J, Lin R, Yang M. Incremental learning for robust visual tracking. International Journal of Computer Vision, 2008, 77(1-3): 125-141.
9. Levy A, Lindenbaum M. Sequential Karhunen-Loeve basis extraction and its application to images. IEEE Transactions on Image Processing, 2000, 9(8): 1371-1374.
10. Zhu X. Semi-Supervised Learning Literature Survey. Technical Report 1530, Computer Science, University of Wisconsin-Madison, USA, 2005.

Co-cited references: 11

Citing articles: 2

Second-level citing articles: 281
