Abstract
Inspired by dense trajectory features, this paper proposes an action description algorithm based on depth motion trajectory information. First, dense interest points are extracted from L frames of depth video according to the dense optical flow field and linked into dense trajectories. Second, the depth information of the interest points at the start and end of each trajectory is used to compute depth change values, which are incorporated into the dense trajectories and into the computation of the histogram of oriented gradients (HOG) descriptor. Third, the average depth change value of all actions over the entire dataset is computed and used to judge how strongly the depth information of each action class varies. Finally, according to the intensity of the depth variation, different codebooks are chosen to project and classify the video samples. Experimental results on two public depth action datasets, DHA-17 and UTKinect, show that the proposed action description algorithm based on depth motion trajectory information is discriminative and robust, and that its performance is comparable to that of several advanced and representative algorithms.
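As a rough illustration of the first two steps described in the abstract (not the authors' code), the sketch below tracks densely sampled grid points through L frames with OpenCV's Farneback dense optical flow and reads the depth values at each trajectory's first and last points to obtain a per-trajectory depth change value. The function name `depth_change_trajectories` and parameters such as `L` and `step` are illustrative assumptions; the HOG descriptor and codebook selection steps are omitted.

```python
# Minimal sketch, assuming grayscale and depth frames are aligned and preloaded.
import numpy as np
import cv2

def depth_change_trajectories(gray_frames, depth_frames, L=15, step=5):
    """gray_frames / depth_frames: lists of aligned HxW arrays, length >= L + 1."""
    h, w = gray_frames[0].shape
    # densely sample interest points on a regular grid in the first frame
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)  # (N, 2) as (x, y)
    trajs = [pts.copy()]
    for t in range(L):
        # dense optical flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(gray_frames[t], gray_frames[t + 1], None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        xi = np.clip(pts[:, 0], 0, w - 1).astype(int)
        yi = np.clip(pts[:, 1], 0, h - 1).astype(int)
        pts = pts + flow[yi, xi]                  # advect each point along the flow
        pts[:, 0] = np.clip(pts[:, 0], 0, w - 1)
        pts[:, 1] = np.clip(pts[:, 1], 0, h - 1)
        trajs.append(pts.copy())

    def depth_at(frame_idx, p):
        return depth_frames[frame_idx][p[:, 1].astype(int), p[:, 0].astype(int)].astype(np.float32)

    # depth change between the last and first interest point of each L-frame trajectory;
    # in the paper this value augments the trajectory and HOG descriptors
    depth_change = depth_at(L, trajs[-1]) - depth_at(0, trajs[0])
    return np.stack(trajs, axis=1), depth_change  # (N, L + 1, 2) trajectories, (N,) depth changes
```

Averaging such depth change values per action class over the whole dataset would then give the quantity the abstract uses to judge how strongly each class varies in depth and, hence, which codebook to select.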
Source
《光电子·激光》
EI
CAS
CSCD
Peking University Core Journals (北大核心)
2017, No. 1, pp. 100-107 (8 pages)
Journal of Optoelectronics·Laser
Funding
National Natural Science Foundation of China (61572357, 61202168)
Tianjin Research Program of Application Foundation and Advanced Technology (14JCZDJC31700)
Natural Science Foundation of Tianjin (13JCQNJC0040)