
Person re-identification based on multi-directional saliency metric learning
(多方向显著性权值学习的行人再识别)

Cited by: 13
Abstract

Objective: Person re-identification is important in video surveillance systems because it reduces the human effort needed to search for a target across large numbers of video sequences. The task is difficult because of variations in lighting, background clutter, viewpoint changes, and pose differences. Most previous studies concentrate on designing a feature representation, a metric learning method, or a discriminative learning method. Visual saliency has recently been exploited in discriminative learning because salient regions help humans distinguish targets efficiently. To address the problem of inconsistent saliency between matched patches, this study proposes a person re-identification algorithm based on multi-directional saliency similarity fusion and metric learning that is robust to viewpoint and background variations.

Method: First, the saliency of each image patch is obtained by fusing its intrinsic (intra-class) saliency, estimated by manifold ranking, with its inter-class saliency. The visual similarity between matched patches is then established by a multi-directional weighted fusion of saliency according to the four possible saliency distributions of a matched pair. The weight of each saliency direction is learned by a metric learning method based on structural SVM ranking, yielding a comprehensive similarity measure between image pairs.

Result: The method is evaluated on two public benchmark datasets, VIPeR and ETHZ. Experimental results show that it provides a more comprehensive similarity measure and achieves higher re-identification rates than comparable algorithms, and that it is insensitive to background variations. On the VIPeR dataset, with half of the dataset (a test set of 316 image pairs) used for training, the proposed method achieves a rank-1 rate of 30% (the rate at which the correct match is ranked first) and a rank-15 rate of 72% (the rate at which the correct match appears within the top 15), and it remains competitive when the training set is small. For generalization, experiments on the ETHZ dataset show that it outperforms existing feature-design-based and supervised-learning-based methods on all three sequences, demonstrating its practical value.

Conclusion: Multi-directional weighted fusion of saliency gives a comprehensive description of the saliency distribution of an image pair and therefore a comprehensive similarity measure. The proposed method enables person re-identification across large-scale, non-overlapping multi-camera views, improves the discriminability and accuracy of re-identification, and is robust to background changes.
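The Method paragraph above describes the fusion pipeline only at a high level. Below is a minimal, hypothetical sketch of one step: weighting patch-to-patch similarities according to which of the four saliency cases (both patches salient, only the probe patch, only the gallery patch, or neither) a matched pair falls into. The feature representation, the Gaussian patch similarity, the saliency threshold `tau`, and the direction weights `w` are placeholder assumptions for illustration only; the paper learns the direction weights with a structural-SVM-based ranking model, which this sketch does not reproduce.

```python
# Hypothetical sketch of multi-directional saliency-weighted patch fusion.
# Not the authors' implementation: features, saliency maps, threshold, and
# direction weights are placeholders chosen for illustration.
import numpy as np

def patch_similarity(f_a, f_b):
    """Gaussian similarity between two patch feature vectors (illustrative)."""
    return np.exp(-np.linalg.norm(f_a - f_b) ** 2 / 2.0)

def fused_similarity(feats_a, feats_b, sal_a, sal_b, w, tau=0.5):
    """Fuse patch similarities, weighting each matched pair by its saliency case.

    feats_*: (N, d) patch features; sal_*: (N,) saliency values in [0, 1];
    w: (4,) weights for the cases (neither salient, only B, only A, both),
       assumed to be learned offline, e.g. by a ranking model.
    """
    score = 0.0
    for fa, fb, sa, sb in zip(feats_a, feats_b, sal_a, sal_b):
        case = (sa >= tau) * 2 + (sb >= tau)   # 0..3: which saliency pattern
        score += w[case] * patch_similarity(fa, fb)
    return score

# Toy usage with random data (10 patches, 32-D features, placeholder weights).
rng = np.random.default_rng(0)
fa, fb = rng.normal(size=(10, 32)), rng.normal(size=(10, 32))
sa, sb = rng.uniform(size=10), rng.uniform(size=10)
w = np.array([0.1, 0.2, 0.2, 0.5])
print(fused_similarity(fa, fb, sa, sb, w))
```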
Authors: Chen Ying, Huo Zhonghua (陈莹, 霍中花)
Source: Journal of Image and Graphics (《中国图象图形学报》, CSCD, Peking University Core Journal), 2015, No. 12, pp. 1674-1683 (10 pages)
Funding: National Natural Science Foundation of China (61104213, 61573168); Natural Science Foundation of Jiangsu Province (BK2011146)
Keywords: person re-identification; metric learning; salience feature; ranking (行人再识别; 度量学习; 显著性特征; 排序)
