
Spatio-temporal Correlation Description and Analysis Method for Video Events Based on Case Frames
Abstract: Given the massive volume of video resources, how to describe and represent video event content is one of the hot issues in current multimedia information processing. Building on case grammar theory for natural language understanding, this paper introduces a semantic frame structure, designs a case frame structure for describing the relationships among the sub-events of a complex event, and defines the frame relations among sub-events of a composite video event. Within the sub-event reference association relation (Ref_Asso), the temporal and spatial correlations among sub-events are analyzed and reasoned about. A Case Semantic Frame Net (CSFN) is then used to describe typical events in a real surveillance video set and to perform spatio-temporal correlation analysis; users' retrieval results are compared after the video test set is described with CSFN and with traditional case grammar, respectively. Experiments show that the case frame net describes and understands complex events more accurately and effectively improves the precision and recall of video event retrieval.
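The abstract does not spell out the CSFN structures themselves, so the following Python sketch is a rough illustration only: it models a sub-event as a frame with case-grammar-style slots (agent, action, location, time interval) and checks temporal and spatial correlation between two sub-events. All names, slots, and thresholds here are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class SubEvent:
    # Case-grammar-style slots (illustrative): agent, action, location, time interval
    agent: str
    action: str
    location: tuple   # (x, y) image/ground-plane position, hypothetical
    t_start: float
    t_end: float

def temporal_relation(a: SubEvent, b: SubEvent) -> str:
    """Classify the temporal relation between two sub-events
    (a small subset of Allen-style interval relations)."""
    if a.t_end < b.t_start:
        return "before"
    if b.t_end < a.t_start:
        return "after"
    if a.t_start == b.t_start and a.t_end == b.t_end:
        return "equal"
    return "overlaps"

def spatially_near(a: SubEvent, b: SubEvent, radius: float) -> bool:
    """Treat two sub-events as spatially correlated when their
    locations fall within a given radius (hypothetical criterion)."""
    dx = a.location[0] - b.location[0]
    dy = a.location[1] - b.location[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

# A composite event would be a set of such frames plus the relations between them.
enter = SubEvent("person_1", "enter",    (2.0, 3.0), 0.0, 4.0)
drop  = SubEvent("person_1", "drop_bag", (2.5, 3.5), 3.0, 5.0)
```

For instance, `temporal_relation(enter, drop)` yields `"overlaps"` and `spatially_near(enter, drop, 1.0)` holds, so the two sub-events would be candidates for a spatio-temporal association within one composite event.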
Source: Journal of System Simulation (CAS, CSCD, PKU Core), 2015, No. 4, pp. 770-778 (9 pages).
Funding: National Natural Science Foundation of China (61170126); NSFC Young Scientists Fund (61203244); Senior Talent Research Start-up Fund of Jiangsu University (13JDG126).
Keywords: case grammar; case frame; complex events; sub-events; referential integrity (Ref_Asso)


