
Online behavior recognition based on latent-dynamic conditional neural fields
Cited by: 6
Abstract: To address the problem of recognizing continuous, unsegmented human actions in video, an online behavior recognition method based on the latent-dynamic conditional neural field (LDCNF) model is proposed. The LDCNF model contains two hidden layers: building on the latent-dynamic conditional random field (LDCRF), it adds a neural network layer, the gate layer, to capture nonlinear relationships between the input data and the output labels, and a new regularization term is introduced into training to encourage discrimination among the hidden states of action sequences. In simulation experiments on ten kinds of continuous actions, the recognition performance of the proposed algorithm is compared with the conditional random field (CRF), HCRF and LDCRF models. The results show that, for online processing of behavior sequences, the proposed algorithm achieves a better recognition rate than CRF, HCRF and LDCRF.
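As background (not taken from the paper), the sketch below shows one common way a gate layer enters an LDCRF-style model: hidden-state potentials are computed from sigmoid gate outputs rather than raw features, which is how nonlinear input-label relationships are typically captured. The notation (sigmoid gate \sigma, parameters \lambda, \mu, \mathbf{w}_g) is illustrative and not the authors' exact formulation.

% Hedged sketch of an LDCNF-style conditional distribution (illustrative notation).
% State potentials use gate outputs sigma(w_g^T x_t) instead of raw features x_t.
\begin{align}
  P(\mathbf{h} \mid \mathbf{x}) &=
    \frac{1}{Z(\mathbf{x})}
    \exp\!\Bigg(
      \sum_{t}\sum_{g} \lambda_{h_t, g}\,\sigma\!\big(\mathbf{w}_g^{\top}\mathbf{x}_t\big)
      + \sum_{t} \mu_{h_{t-1}, h_t}
    \Bigg), \\
  P(\mathbf{y} \mid \mathbf{x}) &=
    \sum_{\mathbf{h}\,:\,h_t \in \mathcal{H}_{y_t}\ \forall t} P(\mathbf{h} \mid \mathbf{x}),
\end{align}

where, as in LDCRF, each label y_t owns a disjoint set of hidden states \mathcal{H}_{y_t} and the label posterior marginalizes over the compatible hidden-state sequences. The new regularization term described in the abstract would be added to the training objective on top of this likelihood.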
Source: Computer Engineering and Design (《计算机工程与设计》, Peking University core journal), 2016, No. 6, pp. 1632-1635, 1653 (5 pages).
Funding: Jiangsu Province "Qinglan Project" for Young and Middle-aged Academic Leaders (2012); National Natural Science Foundation of China (51205185); Jiangsu Province Postgraduate Research and Innovation Program (KYLX15_0784, 2015).
Keywords: latent-dynamic conditional neural field (LDCNF); behavior recognition; latent-dynamic conditional random field (LDCRF); nonlinear relationship; new regularization term