
Human Action Description Algorithm Based on Depth Dense Spatio-Temporal Interest Points (Cited by: 4)
Abstract: Action recognition based on depth data has recently attracted much attention; however, no robust and discriminative action description algorithm for depth data is available yet. To address this problem, a human action description algorithm based on depth dense spatio-temporal interest points is proposed. Multi-scale depth dense spatio-temporal interest points are selected and then tracked, and the trajectories of these points are saved. Finally, the trajectory information is used to describe the human action. Evaluations on the DHA, MSR Action 3D and UTKinect depth action datasets show that the proposed algorithm outperforms several state-of-the-art algorithms.
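The select → track → describe pipeline summarized in the abstract can be illustrated with a minimal NumPy sketch. This is a toy version under stated assumptions, not the paper's implementation: it selects dense grid points by a temporal depth-difference response instead of a multi-scale interest-point detector, tracks them with greedy SSD patch matching instead of the paper's tracker, and runs on a synthetic moving depth region; all function names are hypothetical.

```python
import numpy as np

def make_depth_frames(n_frames=8, size=64, step=2):
    """Synthetic depth clip: a raised square region moving right by `step` px/frame."""
    frames = np.zeros((n_frames, size, size), dtype=np.float32)
    for t in range(n_frames):
        x = 10 + step * t
        frames[t, 20:30, x:x + 10] = 1.0
    return frames

def sample_dense_points(frame_a, frame_b, grid=4, thresh=0.1):
    """Dense grid sampling, keeping points where the depth changes over time."""
    diff = np.abs(frame_b - frame_a)
    h, w = frame_a.shape
    pts = []
    for y in range(grid, h - grid, grid):
        for x in range(grid, w - grid, grid):
            if diff[y - 1:y + 2, x - 1:x + 2].mean() > thresh:
                pts.append((y, x))
    return pts

def track_point(frames, y, x, patch=3, search=4):
    """Track one point frame-to-frame by SSD patch matching; returns its trajectory."""
    traj = [(y, x)]
    for t in range(len(frames) - 1):
        ref = frames[t, y - patch:y + patch + 1, x - patch:x + patch + 1]
        best, by, bx = np.inf, y, x
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                ny, nx = y + dy, x + dx
                if (patch <= ny < frames.shape[1] - patch
                        and patch <= nx < frames.shape[2] - patch):
                    cand = frames[t + 1, ny - patch:ny + patch + 1,
                                  nx - patch:nx + patch + 1]
                    ssd = ((ref - cand) ** 2).sum()
                    if ssd < best:
                        best, by, bx = ssd, ny, nx
        y, x = by, bx
        traj.append((y, x))
    return traj

def trajectory_descriptor(traj):
    """Describe a trajectory by its normalized frame-to-frame displacements."""
    d = np.diff(np.asarray(traj, dtype=np.float32), axis=0)
    return (d / (np.abs(d).sum() + 1e-8)).ravel()
```

In the paper's setting the selection would run at multiple depth scales with a proper interest-point detector, and the trajectory descriptors would feed a classifier; the sketch only shows the overall flow of selecting dense points, tracking them, and turning the saved trajectories into a description.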
Source: Pattern Recognition and Artificial Intelligence (EI, CSCD, Peking University Core Journal), 2015, No. 10, pp. 939-945 (7 pages).
Funding: Supported by the National Natural Science Foundation of China (No. 61572357, 61202168, 61201234), the Natural Science Foundation of Tianjin (No. 13JCQNJC0040), the Tianjin Research Program of Application Foundation and Advanced Technology (No. 14JCZDJC31700), and the Science and Technology Development Fund of Tianjin Education Commission (No. 20120802).
Keywords: Depth Data, Dense Spatio-Temporal Interest Point, Human Action Description, Trajectory Tracking


