
A Review of Action Recognition Using Joints Based on Deep Learning（基于深度学习的关节点行为识别综述）

Cited by: 24
Abstract: Action recognition using joints has attracted the attention of scholars at home and abroad because it is not easily affected by appearance and can better avoid the impact of noise; however, there are few systematic reviews in this field. This paper surveys recent deep-learning-based methods for action recognition using joints. According to the main body of the network, the methods are divided into Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), graph convolution network, and hybrid network approaches. The joint-data representations that CNNs, RNNs, and graph convolution networks handle best are, respectively, pseudo-images, vector sequences, and topological graphs. The paper also summarizes the joint-based action recognition datasets commonly used at home and abroad, and discusses the challenges and future research directions of the field: fast action recognition under the premise of high accuracy, together with practical deployment, still needs further progress.
Authors: LIU Yun; XUE Panpan; LI Hui; WANG Chuanxu (College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China)
Source: Journal of Electronics &amp; Information Technology（《电子与信息学报》, EI, CSCD, Peking University Core）, 2021, Issue 6, pp. 1789-1802 (14 pages)
Funding: National Natural Science Foundation of China (61702295, 61472196).
Keywords: Deep learning; Action recognition using joints; Convolutional Neural Network (CNN); Recurrent Neural Network (RNN); Graph convolution
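The three joint-data representations named in the abstract can be made concrete with a small sketch. The following NumPy illustration is an assumption-laden toy, not code from the surveyed paper: the clip size, the 4-joint chain skeleton, and the normalization step are all chosen purely for illustration.

```python
import numpy as np

# Hypothetical skeleton clip: T frames, J joints, 3D coordinates per joint.
T, J = 30, 25                            # e.g. 25 joints, Kinect-style
rng = np.random.default_rng(0)
clip = rng.random((T, J, 3))             # shape (T, J, 3)

# 1) Pseudo-image for a CNN: treat frames x joints as spatial axes
#    and the 3 coordinates as channels, giving a (C, H, W)-like tensor.
pseudo_image = clip.transpose(2, 0, 1)   # shape (3, T, J)

# 2) Vector sequence for an RNN: flatten each frame's joints into
#    one feature vector per time step.
vector_seq = clip.reshape(T, J * 3)      # shape (T, 75)

# 3) Topological graph for graph convolution: joints are nodes and
#    bones are edges in an adjacency matrix (toy 4-joint chain here).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:    # toy bone list
    A[i, j] = A[j, i] = 1.0
A_hat = A + np.eye(4)                    # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
X = clip[0, :4, :]                       # features of 4 joints at frame 0
gcn_out = D_inv @ A_hat @ X              # one normalized propagation step
```

Each representation matches the network that the survey pairs it with: the pseudo-image exposes local joint/frame patterns to 2D convolutions, the vector sequence feeds recurrent time steps, and the adjacency matrix lets graph convolution aggregate features along the skeleton's bones.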
