
Pointing User Recognition in Human-Computer Interaction with Cluttered Scene (cited by: 8)
Abstract: Pointing gestures allow human-computer interaction (HCI) to exploit everyday human skills and free users from conventional input devices. A key problem in natural pointing-based HCI is how to reliably extract the pointing user from a cluttered interaction scene. This paper proposes a new pointing-user recognition method based on spatio-temporal motion features. Exploiting the excellent localization properties of the multi-scale wavelet transform (MWT) in both the spatial and temporal domains, foreground moving objects are extracted from the cluttered scene, overcoming restrictions on environmental conditions, dynamic environment changes, and a priori assumptions. An MWT-based gradient integral image is then used to obtain stable and reliable HOG features of the pointing hand; these feature vectors are classified with machine learning, and the pointing user is recognized from the spatial association between the pointing hand and its user. Comparative experiments show that the proposed method is effective and feasible.
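The abstract's key efficiency idea is computing HOG features via a gradient integral image. As a rough illustration only (not the paper's implementation: the bin count, normalization, and the omission of the wavelet stage are my assumptions), one integral image per orientation bin lets any rectangular cell's orientation histogram be read off in constant time:

```python
import numpy as np

def gradient_integral_images(img, n_bins=9):
    """Build one integral image per orientation bin of the gradient
    magnitude, so any rectangular cell's HOG-style histogram can be
    read off in O(1) per bin (a sketch of the gradient-integral-image
    idea; parameters here are illustrative)."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    integrals = np.zeros((n_bins, img.shape[0] + 1, img.shape[1] + 1))
    for b in range(n_bins):
        channel = np.where(bins == b, mag, 0.0)      # magnitude falling in bin b
        integrals[b, 1:, 1:] = channel.cumsum(0).cumsum(1)
    return integrals

def cell_histogram(integrals, y0, x0, y1, x1):
    """L2-normalized orientation histogram over the cell [y0, y1) x [x0, x1),
    obtained with four integral-image lookups per bin."""
    h = (integrals[:, y1, x1] - integrals[:, y0, x1]
         - integrals[:, y1, x0] + integrals[:, y0, x0])
    return h / (np.linalg.norm(h) + 1e-9)
```

Once the per-bin integral images are built, sliding a detection window over the frame costs only a handful of lookups per cell instead of re-summing gradients, which is what makes dense hand-region scanning tractable.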
Author: 管业鹏 (Guan Yepeng)
Source: Acta Electronica Sinica (电子学报), 2014, No. 11, pp. 2135-2141 (7 pages). Indexed in EI, CAS, CSCD; Peking University core journal.
Funding: National Natural Science Foundation of China (Nos. 11176016, 60872117); Specialized Research Fund for the Doctoral Program of Higher Education (No. 20123108110014)
Keywords: human-computer interaction; pattern recognition; spatio-temporal feature; object segmentation; feature extraction

References (32)

  • 1 Xu Yihua, Li Shanqing, Jia Yunde. A vision-based method for finger-screen interaction [J]. Acta Electronica Sinica, 2007, 35(11): 2236-2240. (cited by 23)
  • 2 Ma Cuixia, Ren Lei, Teng Dongxing, Wang Hong'an, Dai Guozhong. Pervasive human-computer interaction techniques in a cloud manufacturing environment [J]. Computer Integrated Manufacturing Systems, 2011, 17(3): 504-510. (cited by 46)
  • 3 A Schmidt, B Pfleging, F Alt, A Sahami, G Fitzpatrick. Interacting with 21st-century computers [J]. IEEE Pervasive Computing, 2012, 11(1): 22-31.
  • 4 N H Dardas, N D Georganas. Real-time hand gesture detection and recognition using bag-of-features and support vector machine techniques [J]. IEEE Transactions on Instrumentation and Measurement, 2011, 60(11): 3592-3607.
  • 5 C-Y Tsai, Y-H Lee. The parameters effect on performance in ANN for hand gesture recognition system [J]. Expert Systems with Applications, 2011, 38(7): 7980-7983.
  • 6 P Matikainen, P Pillai, L Mummert, R Sukthankar, M Hebert. Prop-free pointing detection in dynamic cluttered environments [A]. Proceedings of International Conference on Automatic Face and Gesture Recognition [C]. Santa Barbara, United States: IEEE Computer Society, 2011: 374-381.
  • 7 M J Reale, S Canavan, Y Lijun, H Kaoning, T Hung. A multi-gesture interaction system using a 3-D iris disk model for gaze estimation and an active appearance model for 3-D hand pointing [J]. IEEE Transactions on Multimedia, 2011, 13(3): 474-486.
  • 8 N J Enfield, S Kita, J P de Ruiter. Primary and secondary pragmatic functions of pointing gestures [J]. Journal of Pragmatics, 2007, 39(10): 1722-1741.
  • 9 R Kehl, L V Gool. Real-time pointing gesture recognition for an immersive environment [A]. Proceedings of International Conference on Automatic Face and Gesture Recognition [C]. Seoul, South Korea: IEEE Computer Society, 2004: 577-582.
  • 10 K Nickel, R Stiefelhagen. Visual recognition of pointing gestures for human-robot interaction [J]. Image and Vision Computing, 2007, 25(12): 1875-1884.

Secondary References (79)

  • 1 Wang Dian, Cheng Yongmei, Yang Tao, Pan Quan, Zhao Chunhui. A moving-shadow suppression algorithm based on Gaussian mixture models [J]. Journal of Computer Applications, 2006, 26(5): 1021-1023. (cited by 20)
  • 2 Du Youtian, Chen Feng, Xu Wenli, Li Yongbin. A survey of vision-based human motion recognition [J]. Acta Electronica Sinica, 2007, 35(1): 84-90. (cited by 79)
  • 3 L W Campbell, D A Becker, A Azarbayejani, A F Bobick, A Pentland. Invariant features for 3D gesture recognition [A]. Proceedings of International Conference on Automatic Face and Gesture Recognition [C]. Vermont, USA: IEEE, 1996: 157-162.
  • 4 N Jin, F Mokhtarian. Image-based shape model for view-invariant human motion recognition [A]. Proceedings of Conference on Advanced Video and Signal Based Surveillance [C]. London: IEEE, 2007: 336-341.
  • 5 A S Ogale, A Karapurkar, Y Aloimonos. View-invariant modeling and recognition of human actions using grammars [A]. International Conference on Computer Vision, Workshop on Dynamical Vision [C]. Beijing, China: Springer Verlag, 2005.
  • 6 C Rao, A Yilmaz, M Shah. View-invariant representation and recognition of actions [J]. International Journal of Computer Vision, 2002, 50(2): 203-226.
  • 7 V Parameswaran, R Chellappa. Using 2D projective invariance for human action recognition [J]. International Journal of Computer Vision, 2006, 66(1): 83-101.
  • 8 P C Chung, C D Liu. A daily behavior enabled hidden Markov model for human behavior understanding [J]. Pattern Recognition, 2008, 41(5): 1572-1580.
  • 9 Y Wang, K Huang, T N Tan. Abnormal activity recognition in office based on R transform [A]. Proceedings of IEEE Conference on Image Processing [C]. San Antonio, Texas: IEEE, 2007: 341-344.
  • 10 N T Nguyen, D Q Phung, S Venkatesh, H Bui. Learning and detecting activities from movement trajectories using the hierarchical hidden Markov model [A]. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition [C]. San Diego, CA, USA: IEEE, 2005: 955-960.

Co-cited Literature: 112

Co-citation Literature: 80

Citing Literature: 8

Secondary Citing Literature: 106
