
Regional loss function based siamese network for object tracking

Cited by: 1
Abstract: Deep features extracted by a pre-trained convolutional neural network have low spatial resolution, and fast motion causes the loss of spatial detail of a moving target. To address these problems, this paper proposes constructing a siamese network for object tracking with a regional loss function, which further reduces redundancy between deep feature channels and limits the loss of high-level information. First, an offline pre-trained VGG-16 convolutional neural network extracts deep features to form the initial deep feature space. A feature and scale selection network is then built with the regional loss function, and features are selected according to the magnitude of the back-propagated gradients. The selected features are concatenated and integrated into the siamese network for matching and tracking. The algorithm is compared with other algorithms on the OTB-2013, OTB-2015, VOT2016, and TempleColor benchmark datasets. Experimental results show that it achieves good tracking precision and robustness in challenging scenarios such as fast motion and low resolution.
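The abstract describes selecting feature channels by the magnitude of the back-propagated gradients of the regional loss. A minimal NumPy sketch of that selection step is given below; it is an illustrative reconstruction, not the paper's implementation: the paper's regional loss, VGG-16 features, and siamese matching network are not reproduced, and `grads` simply stands in for gradients already obtained by back-propagation.

```python
import numpy as np

def select_channels(features, grads, k):
    """Keep the k channels whose back-propagated gradients have the
    largest mean absolute magnitude (illustrative sketch only).

    features: (C, H, W) deep feature maps (e.g., from VGG-16)
    grads:    (C, H, W) gradients of a loss w.r.t. the feature maps
    k:        number of channels to retain
    """
    # per-channel score: mean absolute gradient over spatial positions
    scores = np.abs(grads).reshape(grads.shape[0], -1).mean(axis=1)
    keep = np.argsort(scores)[::-1][:k]   # indices of the top-k channels
    # concatenate the selected channels, preserving original channel order
    return features[np.sort(keep)]

# toy example: 8 channels, keep the 3 with the strongest gradients
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 5, 5))
grads = rng.standard_normal((8, 5, 5))
selected = select_channels(feats, grads, k=3)
print(selected.shape)  # (3, 5, 5)
```

The selected channels would then be fed to the matching branch of a siamese tracker; that downstream step is beyond this sketch.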
Authors: WU Guishan; LIN Shubin; ZHONG Jianghua; YANG Wenyuan (School of Computer Science, Minnan Normal University, Zhangzhou 363000, China; Fujian Key Laboratory of Granular Computing and Application, Minnan Normal University, Zhangzhou 363000, China; Information and Network Center, Minnan Normal University, Zhangzhou 363000, China)
Published in: CAAI Transactions on Intelligent Systems (《智能系统学报》), CSCD, Peking University Core, 2020, No. 4, pp. 722-731 (10 pages)
Funding: National Natural Science Foundation of China Youth Program (61703196); Fujian Provincial Natural Science Foundation (2018J01549)
Keywords: computer vision; object tracking; regional loss; deep features; siamese network; convolutional neural network; back propagation; VGG network

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部