
Behavior Recognition Algorithm Based on Edge-Aware Learning Network in Complex Surveillance Background (Cited by: 1)
Abstract: Due to the influence of complex backgrounds and multi-view changes, accurately recognizing and analyzing human behaviors in real scenes remains a challenging problem. To improve the accuracy of pedestrian detection and behavior recognition, this paper proposes a novel edge-aware deep network. It uses an edge-aware fusion module to improve the accuracy of pedestrian contours, and a multi-scale pyramid pooling layer to capture the spatio-temporal features of the video sequence. The complementary edge-aware features effectively preserve the clear boundaries of pedestrian targets, while the combination of the auxiliary side outputs and the pyramid pooling layer outputs extracts rich global spatio-temporal context information. Extensive qualitative and quantitative experiments show that the proposed model can effectively improve the performance of existing pedestrian detection and behavior recognition networks, achieving a pedestrian behavior recognition accuracy of 90.55% on the UCF101 dataset.
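The abstract gives no implementation details of the multi-scale pyramid pooling it mentions. As a rough illustration only, the sketch below shows the general pyramid pooling idea on a single 2-D feature map: the map is pooled over grids of several resolutions and the results are concatenated into one fixed-length descriptor. The function name `pyramid_pool`, the bin sizes, and the choice of max pooling are assumptions for illustration, not the paper's actual design, which operates on spatio-temporal deep features.

```python
import numpy as np

def pyramid_pool(feature_map, bin_sizes=(1, 2, 4)):
    """Pool a 2-D feature map over several grid resolutions and
    concatenate the per-cell maxima into one fixed-length vector."""
    h, w = feature_map.shape
    pooled = []
    for bins in bin_sizes:
        # Split the map into a bins x bins grid and max-pool each cell.
        row_edges = np.linspace(0, h, bins + 1, dtype=int)
        col_edges = np.linspace(0, w, bins + 1, dtype=int)
        for i in range(bins):
            for j in range(bins):
                cell = feature_map[row_edges[i]:row_edges[i + 1],
                                   col_edges[j]:col_edges[j + 1]]
                pooled.append(cell.max())
    return np.array(pooled)

fmap = np.arange(64, dtype=float).reshape(8, 8)
descriptor = pyramid_pool(fmap)
print(descriptor.shape)  # (21,): 1 + 4 + 16 grid cells
```

Because each pyramid level covers the whole map at a different granularity, the descriptor length depends only on the bin sizes, not on the input resolution, which is what lets such a layer aggregate context at multiple scales.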
Authors: Nie Wei, Cao Yue, Zhu Dongxue, Zhu Yixuan, Huang Linyi (Electric Power Research Institute, State Grid Tianjin Electric Power Company, Tianjin 300384, China; National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610064, Sichuan, China)
Source: Computer Applications and Software (《计算机应用与软件》), PKU Core Journal, 2020, No. 8, pp. 227-232 (6 pages)
Funding: National Natural Science Foundation of China (61703077, 51777196); National Key Scientific Instrument and Equipment Development Project (2013YQ490879); Science and Technology Project of State Grid Tianjin Electric Power Company (520312170002).
Keywords: Behavior recognition; Edge-aware; Deep learning; Pyramid pooling; Spatio-temporal context
Related literature

References: 1

Secondary references: 112


Co-citing literature: 46

Co-cited literature: 13

Citing literature: 1

Secondary citing literature: 1
