
Target tracking based on incremental deep learning (增量深度学习目标跟踪)
Abstract  Because existing tracking algorithms are prone to target drift or even target loss in complex environments, a tracking algorithm based on incremental deep learning was proposed within a double-resampling particle filter framework. To solve the problems of particle degeneracy and impoverishment, double resampling with adaptive adjustment of the particle-set size was introduced into the particle filter, and a Stacked Denoising Autoencoder (SDAE) was pre-trained by unsupervised feature learning to overcome the shortage of training samples in visual tracking. The SDAE was then applied to online tracking so that the extracted feature sets could effectively represent the image regions of the particles. Incremental feature learning was added to the SDAE: the feature set is optimized by adding new features and merging similar ones, so that it adapts to appearance changes of the moving target. Moreover, a linear support vector machine was used to classify the feature sets, improving the classification accuracy of the particle set and yielding a more precise target position. Experiments on various challenging image sequences in complex environments show that the F-measure and the overlap ratio of the proposed algorithm are 94% and 74%, respectively, and that the average frame rate is 13 frames/s. Compared with state-of-the-art tracking algorithms, the proposed method effectively resolves target drift and target loss and achieves better robustness and higher accuracy, especially under occlusion, background clutter, illumination changes and appearance changes.
Source  Optics and Precision Engineering (《光学精密工程》), indexed in EI, CAS, CSCD and the Peking University Core list, 2015, No. 4, pp. 1161-1170 (10 pages)
Funding  Supported by the Science and Technology Department of Jilin Province (No. 20090512, 20100312)
Keywords  target tracking; particle filter; stacked denoising autoencoder; support vector machine; incremental feature; deep learning
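
The first component named in the abstract is a particle filter whose particle-set size is adjusted adaptively by a double-resampling step to counter particle degeneracy and impoverishment. The snippet below is a minimal NumPy sketch of that idea only; the effective-sample-size threshold, the spread-based size rule and all constants are illustrative assumptions, not the rule used in the paper.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2): a standard measure of particle degeneracy."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights, n_out):
    """Draw n_out particles with probability proportional to their weights."""
    positions = (np.arange(n_out) + np.random.uniform()) / n_out
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                     # guard against rounding error
    return particles[np.searchsorted(cumulative, positions)]

def adaptive_double_resample(particles, weights, n_min=100, n_max=600):
    """Two-stage (double) resampling with an adaptive particle-set size.

    Stage 1 resamples only when the effective sample size collapses
    (degeneracy); stage 2 grows or shrinks the particle set with the
    spread of the posterior, so a confident tracker spends fewer
    particles and an uncertain one spends more.
    """
    n = len(particles)
    if effective_sample_size(weights) < 0.5 * n:             # stage 1
        particles = systematic_resample(particles, weights, n)
        weights = np.full(n, 1.0 / n)

    spread = float(np.mean(np.std(particles, axis=0)))       # stage 2
    n_new = int(np.clip(n_min + 50.0 * spread, n_min, n_max))
    if n_new != n:
        particles = systematic_resample(particles, weights, n_new)
        weights = np.full(n_new, 1.0 / n_new)
    return particles, weights
```

Here `particles` is an (n, d) array of candidate target states (for example, centre position and scale) and `weights` is the corresponding normalised (n,) weight vector.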
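The second component is the stacked denoising autoencoder, pre-trained with unsupervised feature learning and extended online by incremental feature learning (new features are added, near-duplicate features merged). Below is a minimal sketch of a single tied-weight denoising layer plus a grow-and-merge step; the masking corruption, squared-error reconstruction loss and cosine-similarity merge threshold are this sketch's own choices, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoderLayer:
    """One tied-weight layer of a stacked denoising autoencoder (illustrative)."""

    def __init__(self, n_in, n_hidden, noise=0.3, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)      # hidden bias
        self.c = np.zeros(n_in)          # reconstruction bias
        self.noise, self.lr = noise, lr

    def encode(self, x):
        """Feature vector used to describe a particle's image patch."""
        return sigmoid(x @ self.W + self.b)

    def train_step(self, x):
        """One SGD step of unsupervised pre-training on a mini-batch x."""
        x_tilde = x * (rng.random(x.shape) > self.noise)      # masking corruption
        h = self.encode(x_tilde)
        x_hat = sigmoid(h @ self.W.T + self.c)                # reconstruction
        d_out = (x_hat - x) * x_hat * (1.0 - x_hat)           # dL/dz_out
        d_hid = (d_out @ self.W) * h * (1.0 - h)              # dL/dz_hidden
        self.W -= self.lr * (x_tilde.T @ d_hid + d_out.T @ h) / len(x)
        self.c -= self.lr * d_out.mean(axis=0)
        self.b -= self.lr * d_hid.mean(axis=0)
        return float(np.mean((x_hat - x) ** 2))               # reconstruction error

    def grow_and_merge(self, n_new=8, sim_thresh=0.98):
        """Incremental feature learning: append new hidden units, then merge
        units whose weight vectors are almost parallel."""
        self.W = np.hstack([self.W, rng.normal(0.0, 0.01, (self.W.shape[0], n_new))])
        self.b = np.concatenate([self.b, np.zeros(n_new)])
        unit = self.W / (np.linalg.norm(self.W, axis=0, keepdims=True) + 1e-8)
        sim = unit.T @ unit                                    # cosine similarity
        keep = np.ones(self.W.shape[1], dtype=bool)
        for i in range(self.W.shape[1]):
            if keep[i]:
                dup = (sim[i] > sim_thresh) & keep
                dup[i] = False                                 # never drop the unit itself
                keep &= ~dup
        self.W, self.b = self.W[:, keep], self.b[keep]
```

In tracking, `encode` would be applied to vectorised, normalised image patches cut out at each particle's state; calling `grow_and_merge` followed by a few `train_step` updates on recent patches is one way to refresh the feature set as the target's appearance changes.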
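Finally, a linear SVM classifies the particles' feature vectors so that the highest-scoring particle gives the target position. The sketch below uses scikit-learn's LinearSVC as a stand-in classifier; the sampling of positive and negative patches, the 64-dimensional feature size and all arrays here are synthetic placeholders for illustration, not data from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def locate_target(svm, particle_features, particles):
    """Score every particle's feature vector and return the best state,
    plus normalised weights that could feed the next filtering step."""
    scores = svm.decision_function(particle_features)
    weights = 1.0 / (1.0 + np.exp(-scores))        # squash scores to (0, 1)
    weights /= weights.sum()
    return particles[int(np.argmax(scores))], weights

# Stand-in training data: features of patches sampled near / away from the target.
pos = rng.normal(1.0, 0.3, (40, 64))               # positive (target) features
neg = rng.normal(0.0, 0.3, (120, 64))              # negative (background) features
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
svm = LinearSVC(C=1.0).fit(X, y)

# Stand-in online step: 200 candidate particles and their encoded features.
particles = rng.uniform(0.0, 320.0, (200, 2))          # candidate centres (x, y)
particle_features = rng.normal(0.5, 0.4, (200, 64))    # e.g. SDAE encodings
state, weights = locate_target(svm, particle_features, particles)
```

The weights returned here could be fed straight back into the adaptive resampling step sketched above, closing the tracking loop.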