
An Activity Recognition Scheme Based on a Two-Level SVM and HMM Model (cited by 4)

Human Activity Recognition Based on Combined SVM&HMM
Abstract: The ability to recognize human activities is essential for intelligent applications such as personal assistive robotics and smart homes. This paper performs human activity recognition in daily living environments with a combined SVM&HMM classification model. First, a Microsoft Kinect (an RGBD sensor) is used as the input sensor, and a fused feature set is extracted, comprising motion features, body-structure features, and joint polar-coordinate features. Second, a two-level SVM&HMM model is proposed that exploits the respective strengths of SVM and HMM: the SVM captures the differences among samples, while the HMM is well suited to modelling continuous activities. This two-level model overcomes the shortcomings in accuracy, robustness, and computational efficiency of a single SVM model or a traditional HMM model when modelling complex and similar human activities. Extensive experiments show that the SVM&HMM two-level model achieves a high recognition rate for indoor daily activities, with good discrimination and robustness.
Source: Computer and Modernization (《计算机与现代化》), 2015, No. 5, pp. 1-8, 12 (9 pages)
Keywords: activity recognition, fusion features, Kinect, SVM, HMM
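
The abstract above outlines a two-stage pipeline: frame-wise fusion features (motion, body-structure, and joint polar-coordinate features from Kinect skeleton data) are classified by an SVM, and an HMM then smooths the frame-wise decisions over a continuous activity sequence. The sketch below is only an illustration of that coupling under stated assumptions, not the paper's implementation: it uses scikit-learn's SVC with probability estimates as the first level and a hand-written Viterbi decoder as the second level, and the class name SvmHmmClassifier, the self_transition parameter, and the convention that labels are integers 0..n_classes-1 are all hypothetical choices.

```python
# Minimal sketch of a two-level SVM + HMM activity classifier.
# Assumptions (not from the paper): scikit-learn SVC as the frame-wise
# classifier, a fixed self-transition-biased HMM transition matrix, and
# integer class labels 0..n_classes-1 matching the SVC column order.
import numpy as np
from sklearn.svm import SVC


class SvmHmmClassifier:
    def __init__(self, n_classes, self_transition=0.9):
        self.n_classes = n_classes
        # Level 1: an SVM that outputs per-frame class posteriors.
        self.svm = SVC(kernel="rbf", probability=True)
        # Level 2: an HMM transition matrix that favours staying in the
        # same activity, smoothing the per-frame SVM decisions over time.
        off = (1.0 - self_transition) / (n_classes - 1)
        self.trans = np.full((n_classes, n_classes), off)
        np.fill_diagonal(self.trans, self_transition)

    def fit(self, frame_features, frame_labels):
        # frame_features: (n_frames, n_features); frame_labels: (n_frames,)
        self.svm.fit(frame_features, frame_labels)
        return self

    def predict_sequence(self, frame_features):
        # SVM posteriors serve as HMM emission probabilities per frame.
        emis = self.svm.predict_proba(frame_features)   # (T, n_classes)
        T = emis.shape[0]
        log_emis = np.log(emis + 1e-12)
        log_trans = np.log(self.trans)
        # Viterbi decoding of the most likely activity-label sequence.
        delta = np.zeros((T, self.n_classes))
        back = np.zeros((T, self.n_classes), dtype=int)
        delta[0] = np.log(1.0 / self.n_classes) + log_emis[0]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_trans  # [prev, cur]
            back[t] = np.argmax(scores, axis=0)
            delta[t] = scores[back[t], np.arange(self.n_classes)] + log_emis[t]
        path = np.zeros(T, dtype=int)
        path[-1] = np.argmax(delta[-1])
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        return path
```

Usage would look like labels = SvmHmmClassifier(n_classes=8).fit(X_train, y_train).predict_sequence(X_test). Feeding classifier posteriors into an HMM as emission scores is a common hybrid strategy for continuous activity recognition; the paper's exact feature extraction, HMM parameterization, and training procedure may differ from this sketch.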
