Abstract
Owing to their high computational cost, human action recognition (HAR) methods based on hand-crafted features and RGB video have progressed slowly in recent years. Compared with RGB video, depth video captures the geometric structure of moving objects and is insensitive to illumination changes, which makes it more discriminative in vision tasks such as video segmentation and action recognition. Building on the joint motion information in depth video, a simple and effective HAR method is proposed. First, two feature vectors are extracted from the human joint information in the depth video: one encoding the angles between joints and one encoding their relative positions. Then, each feature vector is classified with the LIBLINEAR classifier. Finally, the two classification results are fused to obtain the final recognition result. Because the extracted features contain only the relative positions of and angles between joints, they do not change with the viewpoint, giving the method a degree of view invariance. Experimental results show that the proposed method matches the accuracy of state-of-the-art methods on the UTKinect-Action3D dataset while incurring very low time overhead, making it suitable for real-time use.
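To make the two-stream pipeline concrete, below is a minimal Python sketch under stated assumptions: each sample is a skeleton frame given as an (N, 3) array of 3D joint coordinates, the joint pairs/triples and the score-averaging fusion rule are illustrative choices rather than the paper's exact definitions, and scikit-learn's LinearSVC is used as a stand-in classifier since it is implemented on top of LIBLINEAR.

```python
# Sketch of the two-feature HAR pipeline (assumptions noted above).
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC  # LinearSVC wraps the LIBLINEAR library

def relative_position_feature(joints):
    """Pairwise joint differences; translation-invariant by construction.
    joints: (N, 3) array of 3D joint coordinates for one frame."""
    return np.concatenate([joints[i] - joints[j]
                           for i, j in combinations(range(len(joints)), 2)])

def joint_angle_feature(joints, triples):
    """Angle at joint b between segments b->a and b->c for each (a, b, c).
    The triples would come from the skeleton topology (an assumption here)."""
    feats = []
    for a, b, c in triples:
        u, v = joints[a] - joints[b], joints[c] - joints[b]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.asarray(feats)

def train(X_angle, X_pos, y):
    """Train one LIBLINEAR model per feature type."""
    return LinearSVC().fit(X_angle, y), LinearSVC().fit(X_pos, y)

def predict_fused(clf_a, clf_p, X_angle, X_pos):
    """Late fusion by summing decision scores (the fusion rule is not
    specified in the abstract; averaging scores is one simple choice)."""
    scores = clf_a.decision_function(X_angle) + clf_p.decision_function(X_pos)
    return clf_a.classes_[scores.argmax(axis=1)]
```

Per-frame features would still need to be aggregated over a video (for example by averaging across frames) before classification; the abstract does not specify the temporal aggregation, so that step is left out of the sketch.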
Source
Computer Applications and Software (《计算机应用与软件》)
2017, No. 2, pp. 189-192 and 219 (5 pages in total)
Funding
National Natural Science Foundation of China (61202348)
Science and Technology Research Project of Chongqing Municipal Education Commission (KJ1400926)
Keywords
Deep learning
Human action recognition
Depth video
Skeleton information