
A Visual Object Tracking Algorithm Using Dense Descriptors Correspondences (Cited by: 3)
Abstract: Sparse local invariant descriptors can lead to tracking failure because keypoint detection is unstable and mismatches occur. To overcome this, a visual object tracking algorithm based on dense descriptor correspondences is proposed. The algorithm computes the dense descriptor flow of the target between consecutive frames and, taking the target's spatial distribution and the descriptor weights into account, derives the target's motion vector and thus its estimated position in the current frame. The descriptor weights are then updated according to the relation and the matching degree between each descriptor's motion and the target's motion. Qualitative and quantitative experiments on a large set of challenging benchmark sequences show that the proposed algorithm tracks the target stably under illumination change, occlusion, and pose variation, achieves tracking success rates above 90%, and yields smaller tracking errors than several state-of-the-art methods that rely on constant-brightness or fixed-template assumptions.
Source: 《西安交通大学学报》 (Journal of Xi'an Jiaotong University), indexed in EI, CAS, CSCD, Peking University Core, 2014, No. 9, pp. 13-18 (6 pages)
Funding: National Natural Science Foundation of China (61202339, 61203628); Natural Science Foundation of Shaanxi Province (2012JQ8034)
Keywords: object tracking; dense descriptors; sparse descriptors
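The abstract outlines the core loop of the tracker: compute dense descriptor flows between consecutive frames, estimate the target's motion as a weighted combination of those flows, and then re-weight each descriptor by how well its motion agrees with the target's motion. The record does not spell out the exact flow computation or weight-update rule, so the following is only a minimal Python/NumPy sketch of that weighted-flow idea under illustrative assumptions: the per-descriptor flow vectors are taken as given (e.g., from a dense matching step), the agreement score is a Gaussian of the motion residual, and `sigma`, `lr`, and the synthetic data are hypothetical, not values from the paper.

```python
import numpy as np

def estimate_object_motion(flows, weights):
    """Weighted average of per-descriptor flow vectors -> object motion vector.

    flows:   (N, 2) motion vector of each dense descriptor between two frames
    weights: (N,)   current reliability weight of each descriptor
    """
    w = weights / (weights.sum() + 1e-12)        # normalize weights
    return (w[:, None] * flows).sum(axis=0)      # (2,) weighted mean flow

def update_weights(flows, weights, object_motion, sigma=2.0, lr=0.5):
    """Raise the weight of descriptors whose motion agrees with the object
    motion and lower the rest (Gaussian agreement on the residual; sigma and
    lr are illustrative parameters, not values from the paper)."""
    residual = np.linalg.norm(flows - object_motion, axis=1)    # (N,)
    agreement = np.exp(-(residual ** 2) / (2.0 * sigma ** 2))   # in (0, 1]
    new_w = (1.0 - lr) * weights + lr * agreement               # smoothed update
    return new_w / (new_w.sum() + 1e-12)

# Toy usage: 100 descriptors, most move by about (3, 1) px, the first 10 are mismatches.
rng = np.random.default_rng(0)
flows = rng.normal(loc=(3.0, 1.0), scale=0.3, size=(100, 2))
flows[:10] = rng.normal(loc=(-8.0, 5.0), scale=0.5, size=(10, 2))
weights = np.full(100, 1.0 / 100)

center = np.array([120.0, 80.0])               # object center in the previous frame
motion = estimate_object_motion(flows, weights)
center = center + motion                       # estimated center in the current frame
weights = update_weights(flows, weights, motion)
print(center, weights[:10].sum())              # mismatched descriptors lose weight
```

With an update of this kind, descriptors whose flow disagrees with the aggregate motion (e.g., background or occluded regions) lose weight over successive frames, which is the behaviour the abstract attributes to the weight update.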