
Deep Visual Object Tracking by Multi-feature Fusion

Cited by: 7
Abstract: To address the heavy offline training and time-consuming online updating required by existing convolutional-neural-network trackers, a multi-feature-fusion convolutional tracking algorithm is proposed. First, a shallow feed-forward self-learning convolutional network is designed to extract local convolutional features from target candidate regions. Then a color histogram incorporating spatial information is computed. The two feature types are fused in a fully connected layer by normalized weighting to form the target's appearance descriptor. Finally, a particle filter estimates the target location by measuring the similarity between the target template and each candidate. The tracker is evaluated on the public OTB-2013 benchmark against eight mainstream tracking algorithms. Experimental results show that it achieves good tracking precision and success rates in a variety of scenes, and that its robustness surpasses the other algorithms while maintaining tracking precision. Because the convolutional filters are generated from the tracked video itself, no large-scale offline training is needed, and fusing them with hand-crafted features strengthens the target representation; the fusion strategy can be reused and extended when different types of features are adopted.
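The feature-construction steps in the abstract can be sketched as follows: a color histogram whose pixels are weighted by their distance from the patch center (here an Epanechnikov-style kernel), followed by normalized weighted fusion with the convolutional feature vector. This is a minimal illustration only; the paper's exact spatial kernel, bin count, and fusion weight are not stated in the abstract, so the kernel choice, `bins`, and `alpha` below are all hypothetical.

```python
import numpy as np

def spatial_color_histogram(patch, bins=8):
    """Color histogram with spatial weighting: pixels near the patch
    center contribute more (Epanechnikov-style kernel, an assumption)."""
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # normalized offsets from the patch center, in [-1, 1]
    dy = (ys - (h - 1) / 2) / (h / 2)
    dx = (xs - (w - 1) / 2) / (w / 2)
    weights = np.clip(1.0 - (dx**2 + dy**2), 0.0, None)  # per-pixel kernel weight
    # quantize each RGB value into `bins` levels and accumulate weights
    idx = (patch.astype(np.float64) / 256 * bins).astype(int)
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (idx[..., 0], idx[..., 1], idx[..., 2]), weights)
    return hist.ravel() / (hist.sum() + 1e-12)  # normalize to sum 1

def fuse_features(conv_feat, color_feat, alpha=0.5):
    """Normalized weighted concatenation of the two descriptors."""
    f1 = conv_feat / (np.linalg.norm(conv_feat) + 1e-12)
    f2 = color_feat / (np.linalg.norm(color_feat) + 1e-12)
    return np.concatenate([alpha * f1, (1 - alpha) * f2])

# usage: fuse a (hypothetical) 64-dim conv feature with the color histogram
patch = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
descriptor = fuse_features(np.random.rand(64), spatial_color_histogram(patch))
```

With `bins=8` the histogram has 8³ = 512 entries, so the fused descriptor has 64 + 512 dimensions; candidate-template similarity can then be any vector distance over this descriptor.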
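The final localization step could look like the bootstrap particle-filter cycle below. The abstract only states that a particle filter estimates the target position from template-candidate similarity, so the random-walk motion model, the Gaussian mapping from feature distance to particle weight, and the noise scales `sigma_pos` and `sigma_sim` are all illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, feature_fn, template,
                         sigma_pos=4.0, sigma_sim=3.0):
    """One predict/update/resample cycle of a bootstrap particle filter.
    `feature_fn(x, y)` returns the descriptor of the candidate window
    centered at (x, y); all parameter names here are illustrative."""
    n = len(particles)
    # predict: random-walk motion model (an assumed dynamics model)
    particles = particles + np.random.normal(0.0, sigma_pos, particles.shape)
    # update: weight each particle by template-candidate similarity
    for i, (x, y) in enumerate(particles):
        dist = np.linalg.norm(feature_fn(x, y) - template)
        weights[i] = np.exp(-dist**2 / (2 * sigma_sim**2))
    weights = weights / (weights.sum() + 1e-12)
    # estimate: weighted mean of particle positions
    estimate = weights @ particles
    # systematic resampling to avoid weight degeneracy
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)  # guard against float round-off at the tail
    return particles[idx], np.full(n, 1.0 / n), estimate
```

In the tracker described by the abstract, `feature_fn` would crop the candidate region at each particle and return the fused convolutional/color descriptor; iterating this step over frames yields the target trajectory.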
Authors: QIAN Xiao-yan; ZHANG Dai-hao; ZHANG Yan-lin (Civil College, Nanjing University of Aeronautics and Astronautics, Nanjing 210000, China)
Source: Science Technology and Engineering (Peking University core journal), 2019, No. 7, pp. 139-147 (9 pages)
Funding: National Natural Science Foundation of China (61803199)
Keywords: visual object tracking; convolutional filtering; multi-feature fusion; particle filter; color histogram

References: 3

Secondary references: 33


Co-cited literature: 92

Co-citing literature: 83

Citing literature: 7

Secondary citing literature: 19
