
随机场中运动一致性的多线索目标跟踪 (Cited by: 3)

Multi-cues object tracking based on motion consistence in random field
Abstract  Objective: The relationships among different cues are established to improve the robustness of multi-cue object tracking, and a simple yet effective model is adopted so that the method is easy to express and implement. Method: A motion-consistency constraint is introduced among the target hypotheses described by different cues. A chain-structured Markov random field expresses these hypotheses and the constraints among them, which converts the multi-cue tracking problem into a simple optimization of the random field objective function. In the experiments, the target is described with a luminance histogram, a histogram of oriented gradients, and local binary patterns. Result: Experiments on 15 public video sequences show that, compared with several state-of-the-art trackers, the proposed method achieves lower center location error and higher precision when the target is disturbed by occlusion, motion blur, illumination change, and background clutter, which demonstrates its effectiveness. Conclusion: The motion-consistency constraint strengthens the relationships among the cues; expressing this constraint and the cue-specific target hypotheses with a chain-structured random field improves tracking robustness while keeping the method simple to implement.
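According to the abstract, each cue gives its own description of the target, the cue-specific hypotheses form the nodes of a chain-structured random field, pairwise terms enforce motion consistency between adjacent nodes, and tracking reduces to minimizing the field's objective function. The Python sketch below illustrates that general idea, not the authors' exact formulation: the candidate set, the Bhattacharyya-style appearance cost, the quadratic displacement penalty, and the parameters lam and sigma are all illustrative assumptions, and the chain energy is minimized with plain dynamic programming.

import numpy as np

def unary_cost(candidate_feat, template_feat):
    # Appearance cost of one cue: Bhattacharyya-style distance between
    # the candidate's normalized histogram and the target template.
    p = candidate_feat / (candidate_feat.sum() + 1e-12)
    q = template_feat / (template_feat.sum() + 1e-12)
    return 1.0 - np.sum(np.sqrt(p * q))

def pairwise_cost(pos_a, pos_b, sigma=8.0):
    # Motion-consistency cost: adjacent cues in the chain are penalized
    # when they place the target at different positions.
    d = np.asarray(pos_a, dtype=float) - np.asarray(pos_b, dtype=float)
    return float(np.dot(d, d)) / (2.0 * sigma ** 2)

def track_chain_mrf(candidates, cue_features, cue_templates, lam=1.0):
    # candidates    : K candidate positions (x, y), shared by all cues
    # cue_features  : cue_features[c][k] = feature of candidate k under cue c
    # cue_templates : cue_templates[c]   = target template under cue c
    # Returns one candidate index per cue minimizing the chain energy
    #   sum_c U(c, x_c) + lam * sum_c V(x_{c-1}, x_c)
    # via dynamic programming (Viterbi) over the chain of cues.
    n_cues, K = len(cue_templates), len(candidates)
    U = np.array([[unary_cost(cue_features[c][k], cue_templates[c])
                   for k in range(K)] for c in range(n_cues)])
    V = np.array([[pairwise_cost(candidates[j], candidates[k])
                   for k in range(K)] for j in range(K)])
    cost = U[0].copy()                       # best cost for each label of cue 0
    back = np.zeros((n_cues, K), dtype=int)  # backpointers for backtracking
    for c in range(1, n_cues):
        trans = cost[:, None] + lam * V      # trans[j, k] = cost of label j -> k
        back[c] = np.argmin(trans, axis=0)
        cost = trans[back[c], np.arange(K)] + U[c]
    labels = np.empty(n_cues, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for c in range(n_cues - 1, 0, -1):       # recover the full joint assignment
        labels[c - 1] = back[c, labels[c]]
    return labels

# Toy usage with three stand-in cues (e.g. luminance histogram, HOG, LBP)
# and five shared candidate positions; candidates 0, 1 and 3 are made to
# resemble the templates, so the minimum-energy assignment stays near them.
rng = np.random.default_rng(0)
candidates = [(10, 10), (12, 11), (30, 5), (11, 9), (50, 50)]
cue_templates = [rng.random(16) for _ in range(3)]
cue_features = [[t + 0.05 * rng.random(16) if k in (0, 1, 3) else rng.random(16)
                 for k in range(len(candidates))] for t in cue_templates]
print(track_chain_mrf(candidates, cue_features, cue_templates))

The chain structure (rather than a fully connected field) is what keeps the optimization simple: the exact joint minimum over all cues is recovered in O(n_cues * K^2) time, consistent with the abstract's claim that the tracking problem becomes a simple optimization of the random field objective.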
Source: Journal of Image and Graphics (《中国图象图形学报》, CSCD, Peking University Core), 2015, No. 1, pp. 59-71 (13 pages).
Funding: National Natural Science Foundation of China (61273237, 61271121, 60905005, 61403116); Fundamental Research Funds for the Central Universities (2013HGBH0045); China Postdoctoral Science Foundation (2014M560507).
Keywords: object tracking; multi-cue; motion consistency; random field model
Related literature

References (25)

  • 1 Yang H, Shao L, Zheng F, et al. Recent advances and trends in visual tracking: a review[J]. Neurocomputing, 2011, 74(18): 3823-3831.
  • 2 Yilmaz A, Javed O, Shah M. Object tracking: a survey[J]. ACM Computing Surveys, 2006, 38(4): 13(1-45).
  • 3 Hou Zhiqiang, Han Chongzhao. A survey of visual tracking[J]. Acta Automatica Sinica, 2006, 32(4): 603-617. (Cited by: 255)
  • 4 Arulampalam M S, Maskell S, Gordon N, et al. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking[J]. IEEE Transactions on Signal Processing, 2002, 50(2): 174-188.
  • 5 Comaniciu D, Ramesh V, Meer P. Real-time tracking of non-rigid objects using mean shift[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. South Carolina, USA: IEEE, 2000, 2: 142-149.
  • 6 Nguyen H T, Smeulders A W M. Robust tracking using foreground-background texture discrimination[J]. International Journal of Computer Vision, 2006, 69(3): 277-293.
  • 7 Grabner H, Grabner M, Bischof H. Real-time tracking via on-line boosting[C]//Proceedings of the British Machine Vision Conference. Edinburgh, UK: BMVA, 2006, 1(5): 1-10.
  • 8 Avidan S. Ensemble tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(2): 261-271.
  • 9 Sevilla-Lara L, Learned-Miller E. Distribution fields for tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Rhode Island, USA: IEEE, 2012: 1910-1917.
  • 10 Adam A, Rivlin E, Shimshoni I. Robust fragments-based tracking using the integral histogram[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New York, USA: IEEE, 2006, 1: 798-805.

Secondary references (6)

Co-citing literature (254)

Co-cited literature (26)

  • 1 Li Wenbin, Liu Chunnian, Chen Yiying. Text classification algorithm based on feature information gain weight[J]. Journal of Beijing University of Technology, 2006, 32(5): 456-460. (Cited by: 19)
  • 2 Ross D, Lim J, Lin R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1-3): 125-141.
  • 3 Kalal Z, Matas J, Mikolajczyk K. P-N learning: bootstrapping binary classifiers by structural constraints[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, CA, USA: IEEE, 2010: 49-56.
  • 4 Mei X, Ling H. Robust visual tracking using L1 minimization[C]//Proceedings of the IEEE International Conference on Computer Vision. IEEE, 2009: 1436-1443.
  • 5 Collins R, Liu Y, Leordeanu M. Online selection of discriminative tracking features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10): 1631-1643.
  • 6 Avidan S. Ensemble tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(2): 261-271.
  • 7 Grabner H, Grabner M, Bischof H. Real-time tracking via on-line boosting[C]//Proceedings of the British Machine Vision Conference. Edinburgh, UK: British Machine Vision Association, 2006: 47-56.
  • 8 Grabner H, Bischof H. On-line boosting and vision[C]//Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, NY, USA: IEEE, 2006: 260-267.
  • 9 Liu R, Cheng J, Lu H. A robust boosting tracker with minimum error bound in a co-training framework[C]//Proceedings of the IEEE International Conference on Computer Vision. IEEE, 2009: 1459-1466.
  • 10 Babenko B, Yang M H, Belongie S. Robust object tracking with online multiple instance learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8): 1619-1632.

Citing articles (3)

Secondary citing articles (6)
