
Joint Compressive Tracking for Multimodal Target (多模态目标的联合压缩跟踪)
Abstract Multimodal tracking is a challenging task. Most existing methods consider only the fusion of different features from a single image or the fusion of the same feature across images of different modalities. To integrate the two naturally, a unified multimodal tracking framework based on joint compressive sensing is proposed. The framework can fuse different features extracted from a single image or from images of different modalities, and offers the flexibility to add or remove features freely. Multimodal tracking is formulated as a joint minimization of multiple ℓ1-norms subject to inequality constraints on multiple ℓ2-norms, and a customized augmented Lagrange multiplier (ALM) algorithm is derived to solve this minimization problem, so that tracking is efficient, with both low computational burden and high accuracy. In addition, a collaborative template update scheme driven by the sparsity concentration index is developed to screen out the best-performing target templates throughout the tracking procedure. Frame-by-frame tracking experiments on the DCU, OTCBVS, BEPMDS, OTB50 and VOT-TIR datasets show that the average tracking accuracy, success rate and speed of the proposed method reach 0.96, 0.91 and 3.48, respectively.
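The record does not state the optimization model explicitly. One plausible reading of "a joint minimization of multiple ℓ1-norms subject to inequality constraints on multiple ℓ2-norms", with y_k the k-th feature observation (drawn from the same image or from a different modal image), D_k its template dictionary, x_k the sparse coefficient vector and ε_k a per-feature noise tolerance (all notation assumed here rather than taken from the paper), is:

\min_{x_1,\dots,x_K} \; \sum_{k=1}^{K} \lVert x_k \rVert_1
\quad \text{s.t.} \quad \lVert y_k - D_k x_k \rVert_2 \le \varepsilon_k, \qquad k = 1,\dots,K.

A customized ALM solver for such a model would attach a multiplier and a quadratic penalty to each ℓ2 constraint (for example after introducing slack variables) and alternate sparse-coefficient updates with multiplier updates; the exact splitting used in the paper is not given in this record.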
Authors Tang Yanping (唐艳平); Zhang Canlong (张灿龙); Li Yanru (李燕茹); Li Zhixin (李志欣) (School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004; Guangxi Key Laboratory of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004; Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing, Guilin 541004)
Source Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), EI / CSCD / Peking University Core Journal, 2020, No. 4, pp. 616-627 (12 pages)
Funding National Natural Science Foundation of China (61866004, 61663004, 61966004, 61962007, 61751213); Natural Science Foundation of Guangxi (2018GXNSFDA281009, 2017GXNSFAA198365, 2019GXNSFDA245018, 2018GXNSFDA294001); Guangxi "Bagui Scholar" Innovative Research Team.
Keywords multimodal tracking; compressive sensing; augmented Lagrange multiplier (ALM)
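As an illustration of the sparsity concentration index (SCI) behind the abstract's collaborative template update, the Python sketch below scores per-template coefficient groups and keeps the highest-scoring templates. The function names, the grouping, and the top-k selection rule are assumptions made for illustration, not the paper's actual update scheme.

import numpy as np

def sparsity_concentration_index(x, groups):
    # SCI of a sparse coefficient vector x whose entries are partitioned into
    # K >= 2 groups (e.g. one index group per target template). The value is
    # near 1 when the l1 energy concentrates in a single group and near 0
    # when it spreads evenly over all groups.
    k = len(groups)
    total = np.sum(np.abs(x)) + 1e-12                 # l1-norm of the whole vector
    best = max(np.sum(np.abs(x[g])) for g in groups)  # largest per-group l1-norm
    return (k * best / total - 1.0) / (k - 1.0)

def screen_templates(coeff_vectors, groups, keep=10):
    # Hypothetical screening rule: rank candidate coefficient vectors gathered
    # during tracking by their SCI and keep the indices of the `keep` best.
    scores = [sparsity_concentration_index(x, groups) for x in coeff_vectors]
    order = np.argsort(scores)[::-1]                  # highest SCI first
    return [int(i) for i in order[:keep]]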
