Abstract
Aiming at the self-occlusion problem that arises when a depth camera tracks joint motion from a single viewpoint, a human action recognition method based on projection-subspace views is proposed. Without adding any data acquisition equipment, the method projects the three-dimensional (3D) action sequences captured from a single viewpoint into multiple two-dimensional (2D) subspaces and seeks the maximum inter-class distance in those 2D projection spaces, so as to increase as much as possible the inter-class distance of the 3D actions after the multiple subspace views are fused. The recognition rate on the self-built AQNU dataset is 99.69%, which is 1.22% higher than the baseline method; on a subset of the public NTU-RGB+D dataset it is 80.23%, which is 1.98% higher than the baseline method. The experimental results show that the proposed method can alleviate the self-occlusion problem of single-viewpoint datasets to a certain extent, improve the recognition rate and computational efficiency, and achieve recognition performance comparable to that of multi-viewpoint datasets.
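The core idea of the abstract, projecting a single-view 3D skeleton sequence into several 2D subspaces and treating each projection as an additional view, can be illustrated with a minimal sketch. The code below is not the authors' implementation: it simply uses the three fixed coordinate planes (xy, xz, yz) as example subspaces, whereas the paper additionally selects projection directions that maximize the inter-class distance before fusing the views; the array shapes and the 25-joint skeleton are illustrative assumptions.

# Minimal sketch (assumption: fixed coordinate-plane projections stand in
# for the learned projection subspaces described in the abstract).
import numpy as np

def project_to_subspaces(seq):
    """seq: array of shape (T, J, 3) -- T frames, J joints, 3D coordinates.
    Returns a dict of 2D projection views, each of shape (T, J, 2)."""
    planes = {"xy": (0, 1), "xz": (0, 2), "yz": (1, 2)}
    return {name: seq[:, :, list(axes)] for name, axes in planes.items()}

# Example: 60 frames of a 25-joint skeleton (the NTU-RGB+D joint count).
sequence = np.random.rand(60, 25, 3)
views = project_to_subspaces(sequence)
for name, view in views.items():
    print(name, view.shape)   # each 2D view has shape (60, 25, 2)

Each resulting 2D view could then be fed to a separate recognition branch (e.g., a graph convolutional network, as listed in the keywords) and the branch outputs fused for the final classification.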
Authors
Su Benyue; Sun Manzhen; Ma Qing; Sheng Min
(The Key Laboratory of Intelligent Perception and Computing of Anhui Province, Anqing Normal University, Anqing 246133, China; School of Mathematics and Computer, Tongling University, Tongling 244061, China; School of Computer and Information, Anqing Normal University, Anqing 246133, China; School of Mathematics and Physics, Anqing Normal University, Anqing 246133, China)
Source
Journal of System Simulation (《系统仿真学报》)
Indexed in: CAS, CSCD, Peking University Core Journals
2023, Issue 5, pp. 1098-1108 (11 pages)
Funding
Natural Science Foundation of Anhui Province (2108085QF269)
University Leading Talent Team Project of Anhui Province (皖教秘人[2019]16号)
Keywords
action recognition
single view
projection subspace
graph convolutional network