
View-Invariant Human Action Recognition Based on the Fusion of Projection Depth Vector Decomposition and PEMS

Cited by: 1
Abstract: To address the uncertainty of internal camera parameters and the difficulty of selecting a projection plane, this paper proposes a new projection-depth algorithm for view-invariant human action recognition. The algorithm adopts a plane extraction from mirror symmetry (PEMS) strategy, which effectively solves the projection-plane selection problem. First, 3D action poses are observed by a camera group; then the PEMS strategy extracts a plane from the scene; next, the projection depths of body points are estimated relative to the extracted plane; finally, this information is used for action recognition. The core of the algorithm lies in extracting the projection plane and solving for the vector composed of the projection depths. Tested on the CMU MoCap dataset, the TUM dataset, and the multi-view IXMAS dataset, the algorithm achieves accuracies of up to 94%, 91%, and 90%, respectively, and it can still accurately define new actions from only a few action instances. Comparisons show that its action recognition performance is clearly superior to several other recent algorithms.
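The paper's exact PEMS formulation is not reproduced in this record. Purely as an illustration of the projection-depth idea described above, the following is a minimal sketch that assumes the extracted plane is represented by a point and a normal vector, and that the projection depth of each 3D body point is its signed point-to-plane distance; the function and variable names are hypothetical, not the authors' own.

```python
import math

def projection_depths(points, plane_point, plane_normal):
    """Signed distance of each 3D body point to a plane.

    Under the stated assumption, this signed distance plays the role
    of the per-point "projection depth" relative to the extracted
    (e.g. mirror-symmetry) plane; the resulting list is the feature
    vector fed to a downstream classifier.
    """
    nx, ny, nz = plane_normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)  # normalize the normal
    px, py, pz = plane_point
    depths = []
    for x, y, z in points:
        # Dot product of (point - plane_point) with the unit normal.
        d = ((x - px) * nx + (y - py) * ny + (z - pz) * nz) / norm
        depths.append(d)
    return depths

# Toy example: a sagittal symmetry plane x = 0 (normal along the x-axis)
# and three hypothetical joint positions.
joints = [(0.3, 1.2, 0.1), (-0.3, 1.2, 0.1), (0.0, 1.5, 0.0)]
depths = projection_depths(joints, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

Because the depths are measured relative to a plane recovered from the scene itself rather than to any one camera, the resulting feature vector does not depend on the viewpoint, which is the property the abstract's view-invariance claim rests on.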
Source: Application Research of Computers (《计算机应用研究》), CSCD, Peking University Core Journal, 2016, No. 3, pp. 940-944
Funding: National Natural Science Foundation of China (61273241); Scientific Research Project of the Sichuan Provincial Department of Education (13ZAO125)
Keywords: view invariance; human action recognition; projection depth; mirror symmetry; eigenvector decomposition
