
Siamese Object Tracking Algorithm Combined with the Intersection over Union Loss
Cited by: 1
Abstract: The SiamRPN tracker trains its bounding-box regression with an Ln-norm loss, which ignores the intersection over union (IoU) between the predicted box and the ground-truth box and thus limits localization accuracy. To address this problem, to raise the discrimination among positive samples in classification prediction, and to couple the classification and regression branches, an improved SiamRPN tracking algorithm that incorporates an IoU loss is proposed. A joint IoU-smooth L1 optimization module is designed: the best positive sample is optimized with the IoU loss, while the remaining positive samples are optimized with the smooth L1 loss. Guided by the regression results, the classification of each positive sample is weighted by the IoU between its predicted box and the ground-truth box, which increases the discrimination among positive samples while ensuring the correlation between classification and regression predictions. Comparative experiments show that the proposed algorithm effectively improves tracking performance.
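The two ideas in the abstract can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function names, the `-ln(IoU)` form of the IoU loss, the clamping constant, and the `is_best` flag selecting which positive sample receives the IoU loss are all assumptions made for the sketch.

```python
import math

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0]); iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2]); iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss summed over box coordinates."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total

def joint_loss(pred_box, gt_box, is_best):
    """Hypothetical IoU-smooth L1 joint module: the best positive sample
    is optimized with an IoU loss (-ln(IoU), clamped so non-overlapping
    boxes stay finite); the other positive samples use smooth L1."""
    if is_best:
        return -math.log(max(iou(pred_box, gt_box), 1e-6))
    return smooth_l1(pred_box, gt_box)

def iou_weighted_cls_loss(fg_probs, ious):
    """IoU-weighted classification over positive samples: each sample's
    cross-entropy term is scaled by the IoU of its predicted box, so
    better-localized samples dominate the classification signal."""
    return sum(-w * math.log(p) for p, w in zip(fg_probs, ious))
```

Weighting the classification term by the regression IoU is what ties the two branches together: a positive sample whose predicted box barely overlaps the ground truth contributes little to the classification loss, even if its anchor was labeled positive.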
Authors: Zhou Wei, Liu Yuxiang, Liao Guangping, Ma Xin (School of Computer Science & School of Cyberspace Science, Xiangtan University, Xiangtan 411105, China)
Source: Journal of System Simulation (CAS, CSCD, Peking University Core), 2022, No. 9, pp. 1956-1967 (12 pages)
Funding: Hunan Provincial Science and Technology Plan (2016TP1020); Open Fund of the Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang Normal University (IIPA20K04)
Keywords: machine vision; object tracking; siamese network; anchor boxes; intersection over union loss
