Journal Article

Human action description algorithm based on depth motion trajectory information (Cited by: 5)
Abstract: Inspired by the dense trajectory feature, this paper proposes an action description algorithm based on depth motion trajectory information. First, a dense optical flow field is used to extract dense interest points from L-frame depth videos and form dense trajectories. Second, the depth information of consecutive interest points along each trajectory is used to compute depth change values, which are incorporated into the dense trajectory and into the computation of the histogram of oriented gradients (HOG) descriptor. Third, the average depth change value of all actions over the entire dataset is computed and used to judge how strongly the depth information of each action class varies. Finally, according to the degree of depth variation, different codebooks are selected to project and classify the video samples. Experiments on two public depth action datasets, DHA-17 and UTKinect, show that the proposed action description algorithm based on depth motion trajectory information has good discriminability and robustness, and its performance is comparable to that of some advanced and representative algorithms.
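The per-trajectory depth change and the codebook selection described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the use of a simple consecutive-frame difference, and the rule of comparing a class's average depth change against the dataset-wide average are all assumptions made for illustration.

```python
# Sketch of the depth-change computation along a dense trajectory and the
# depth-based codebook choice (assumed details, not the paper's exact method).

def trajectory_depth_changes(depths):
    """Depth change between consecutive interest points on one trajectory."""
    return [depths[i + 1] - depths[i] for i in range(len(depths) - 1)]

def average_depth_change(trajectories):
    """Mean absolute depth change over a set of trajectories."""
    changes = [abs(c) for d in trajectories for c in trajectory_depth_changes(d)]
    return sum(changes) / len(changes) if changes else 0.0

def select_codebook(class_avg_change, dataset_avg_change):
    """Pick a codebook by how strongly depth varies for this action class
    relative to the dataset-wide average (assumed threshold rule)."""
    return "depth-sensitive" if class_avg_change > dataset_avg_change else "standard"

# Example: depth values (e.g. in mm) of interest points along two trajectories.
trajs = [[1200, 1180, 1150, 1100], [900, 905, 910, 912]]
dataset_avg = average_depth_change(trajs)
```

A class whose trajectories show large depth changes (e.g. movement toward the camera) would then be projected onto the depth-sensitive codebook, while mostly frontal-plane actions use the standard one.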
Source: Journal of Optoelectronics·Laser (《光电子·激光》), 2017, Issue 1, pp. 100-107 (8 pages). Indexed in EI, CAS, CSCD, and the Peking University Core Journals list.
Funding: National Natural Science Foundation of China (61572357, 61202168); Tianjin Research Program of Application Foundation and Advanced Technology (14JCZDJC31700); Natural Science Foundation of Tianjin (13JCQNJC0040).
Keywords: depth data; dense space-time interest points; depth motion trajectory; human action description; trajectory tracking
