

Learning and Recognition of Video Events Based on Key Atomic Actions
Abstract  This paper presents a method for learning and recognizing video events based on key atomic actions. An and-or graph represents the hierarchical structure among events, sub-events, and atomic actions, as well as the temporal relations between sub-events and atomic actions; the graph structure is learned from training data under the minimum description length (MDL) principle. On this basis, a method is proposed for learning the key atomic actions of an event: each atomic action is weighted according to its importance, and the atomic action with the maximum weight is regarded as the key atomic action. The weights can be used for real-time event parsing, which improves the event recognition rate. A recognizability degree is then defined for each event from the atomic-action weights and the number of missed detections; it reduces the number of candidate events that must be considered and thereby improves the efficiency of the recognition algorithm. Experimental results in a variety of scenes show that the proposed method is effective for event recognition.
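The weighting and recognizability ideas in the abstract can be sketched as follows. This is an illustrative sketch only: the paper does not publish code, so the frequency-based weight normalization, the `recognizability` formula, and the example action names (`approach`, `pick_up`, `leave`) are all assumptions for demonstration, not the authors' actual definitions.

```python
# Hedged sketch of key-atomic-action weighting and event recognizability.
# All formulas below are illustrative assumptions, not the paper's exact method.

def atomic_action_weights(counts):
    """Assign each atomic action a weight proportional to an assumed
    importance measure (here: its frequency in training data)."""
    total = sum(counts.values())
    return {action: c / total for action, c in counts.items()}

def key_atomic_action(weights):
    """Per the abstract, the atomic action with the maximum weight
    is regarded as the key atomic action."""
    return max(weights, key=weights.get)

def recognizability(weights, observed):
    """Assumed recognizability degree: the total weight of observed
    atomic actions, so missed detections lower the score."""
    return sum(w for action, w in weights.items() if action in observed)

# Hypothetical training counts for one event's atomic actions.
counts = {"approach": 4, "pick_up": 9, "leave": 2}
w = atomic_action_weights(counts)
print(key_atomic_action(w))                                # pick_up
print(round(recognizability(w, {"approach", "pick_up"}), 2))  # 0.87
```

A low recognizability score would let a parser prune this event from the candidate set early, which is how the abstract's efficiency gain would arise under these assumptions.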
Source  Transactions of Beijing Institute of Technology (《北京理工大学学报》; indexed in EI, CAS, CSCD; Peking University core journal), 2013, No. 3, pp. 290-295 (6 pages).
Funding  National Natural Science Foundation of China (60805028); Fundamental Research Funds for the Central Universities (2011B11114, 2012B07314); Research Innovation Team Support Program of Shandong University of Science and Technology (2010KYTD101); Shandong Provincial Natural Science Foundation (ZR2010FM027); China Postdoctoral Science Foundation (2012M521336).
Keywords  and-or graph; event rule learning; event recognition; event parsing; temporal relationship
