Abstract

To fuse the depth information in depth map sequences, which is largely insensitive to environmental factors such as illumination, with the rich texture information in RGB video sequences, a human action recognition algorithm based on Depth Motion Maps (DMMs) and dense trajectories is proposed. A Convolutional Neural Network (CNN) is trained on the DMM data and its high-level features are extracted as the static feature representation of an action video, while dense trajectories describe the dynamic motion information of the RGB video sequence. The static and dynamic features are concatenated into the feature representation of the entire video and fed into a linear Support Vector Machine (SVM) for recognition. Experimental results on the public action recognition datasets UTD-MHAD and MSR Daily Activity 3D show that the algorithm effectively exploits both depth and texture information and achieves good recognition performance.
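As a rough illustration of the pipeline summarized above, the following Python sketch shows how Depth Motion Maps can be accumulated from a depth sequence and how the static (CNN-on-DMM) and dynamic (dense-trajectory) descriptors can be concatenated before a linear SVM. The projection scheme, parameter values (max_depth, depth_bins), and the feature placeholders are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only; names and parameters are assumptions,
# not the paper's implementation.
import numpy as np
from sklearn.svm import LinearSVC


def project_views(depth_frame, depth_bins=256, max_depth=4000.0):
    """Project one depth frame onto front (xy), side (yz) and top (xz) planes.
    Side/top views are binary occupancy maps obtained by quantising depth."""
    h, w = depth_frame.shape
    front = depth_frame.astype(np.float32)
    z = np.clip((depth_frame / max_depth * (depth_bins - 1)).astype(int),
                0, depth_bins - 1)
    side = np.zeros((h, depth_bins), np.float32)   # rows = y, cols = depth bin
    top = np.zeros((depth_bins, w), np.float32)    # rows = depth bin, cols = x
    ys, xs = np.nonzero(depth_frame)               # foreground pixels only
    side[ys, z[ys, xs]] = 1.0
    top[z[ys, xs], xs] = 1.0
    return {"front": front, "side": side, "top": top}


def depth_motion_maps(depth_seq):
    """Accumulate DMM_v = sum_i |map_v^(i+1) - map_v^(i)| over the sequence;
    in practice a small threshold is often applied to suppress sensor noise."""
    views = [project_views(f) for f in depth_seq]
    dmms = {v: np.zeros_like(views[0][v]) for v in views[0]}
    for prev, cur in zip(views[:-1], views[1:]):
        for v in dmms:
            dmms[v] += np.abs(cur[v] - prev[v])
    return dmms


# Hypothetical pre-extracted descriptors for each video:
#   x_static  - high-level CNN features computed on the three DMMs
#   x_dynamic - encoded dense-trajectory features of the RGB sequence
def video_descriptor(x_static, x_dynamic):
    """Concatenate static and dynamic features into one video-level vector."""
    return np.concatenate([x_static, x_dynamic])


# Classification with a linear SVM (training data assumed available):
# X = np.stack([video_descriptor(s, d) for s, d in zip(static_feats, dynamic_feats)])
# clf = LinearSVC(C=1.0).fit(X, labels)
# predictions = clf.predict(X_test)
```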
Authors

LI Yuanxiang (李元祥); XIE Linbo (谢林柏)
Engineering Research Center of Internet of Things Applied Technology, Ministry of Education, School of IoT Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
Source

Computer Engineering and Applications (《计算机工程与应用》)
Indexed in CSCD and the Peking University Core Journal list (北大核心)
2020, Issue 3, pp. 194-200 (7 pages)
Funding

China Mobile Research Fund of the Ministry of Education (No. MCM20170204)