
Visual tracking algorithm based on MAP multi-subspace incremental learning (Cited by: 5)
Abstract: Object tracking is an important area of computer vision. Tracking algorithms based on principal component analysis (PCA) subspace learning assume that the target is generated from a linear subspace and that the error term follows a Gaussian distribution with small variance. However, these algorithms do not consider the prior distribution of the samples within the subspace, so optimizing only for minimal reconstruction error tends to over-fit the test samples. To address this over-fitting problem, we first analyze PCA subspace learning from the maximum likelihood (ML) estimation perspective, then derive an unbiased estimate of the variance of the sample coordinates projected into the subspace, and finally propose a PCA subspace learning algorithm based on the maximum a posteriori (MAP) probability. To handle occlusion during tracking, we propose a local patch strategy. First, the target image is divided into several patches, each assumed to be generated from an independent subspace. Second, the reconstruction error of each patch is used to judge whether that patch is occluded. Finally, only the un-occluded patches are used to compute the particle weights and to update the subspaces, which prevents occluding objects from corrupting the tracking result. Experiments show that the proposed algorithm effectively handles partial occlusion, motion blur, and background clutter, and compares favorably with other state-of-the-art trackers.
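To make the patch-based occlusion handling described in the abstract concrete, the sketch below illustrates how per-patch PCA reconstruction errors could be used to flag occluded patches and to weight a particle using only the visible ones. All names and parameter values here (PatchSubspace, score_candidate, occlusion_threshold, sigma, the equal-split patch layout, and the Gaussian weighting) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class PatchSubspace:
    """PCA subspace (mean + orthonormal basis) for one local patch (illustrative)."""
    def __init__(self, mean, basis):
        self.mean = mean    # (d,) mean patch vector
        self.basis = basis  # (d, k) orthonormal PCA basis

    def reconstruction_error(self, x):
        """Squared residual of x after projection onto the subspace."""
        centered = x - self.mean
        coeff = self.basis.T @ centered           # coordinates in the subspace
        residual = centered - self.basis @ coeff  # part not explained by the subspace
        return float(residual @ residual)

def split_into_patches(image_vec, d, n_patches):
    """Split a vectorized candidate region into equal-length patch vectors (assumed layout)."""
    return image_vec.reshape(n_patches, d)

def score_candidate(image_vec, subspaces, occlusion_threshold=0.1, sigma=0.05):
    """
    Return (weight, occlusion_mask) for one particle's candidate region.
    A patch is treated as occluded when its reconstruction error exceeds the
    threshold; only un-occluded patches contribute to the particle weight.
    """
    d = subspaces[0].mean.shape[0]
    patches = split_into_patches(image_vec, d, len(subspaces))
    errors = np.array([s.reconstruction_error(p) for s, p in zip(subspaces, patches)])
    occluded = errors > occlusion_threshold
    visible_errors = errors[~occluded]
    if visible_errors.size == 0:          # everything looks occluded: give a negligible weight
        return 1e-12, occluded
    # Gaussian-style likelihood over the mean error of the visible patches
    weight = np.exp(-visible_errors.mean() / (2.0 * sigma ** 2))
    return weight, occluded
```

In the full algorithm, the patches flagged as visible would also drive the incremental update of each patch subspace and the MAP estimation of the coordinate variance; neither step is reproduced in this sketch.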
Source: Scientia Sinica Informationis (《中国科学:信息科学》), 2016, Issue 4, pp. 476-495 (20 pages); indexed in CSCD and the Peking University Core Journal list.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61472289) and the Natural Science Foundation of Hubei Province (Grant No. 2015CFB254).
Keywords: object tracking; principal component analysis; subspace methods; maximum a posteriori probability; maximum likelihood estimation