
Non-local Attention Based Generative Adversarial Network for Video Abnormal Event Detection
Cited by: 3
Abstract: Owing to the uncertainty of abnormal events, a future-frame-prediction approach is adopted to detect abnormal events in video. The prediction model is trained only on normal samples, so it can accurately predict future frames that contain no abnormal events, whereas frames containing unknown events cannot be predicted well. A generative adversarial network, combined with appearance and motion constraints, is used to train the generator that performs the prediction. To reduce the loss of relevant target features, a non-local attention U-Net generator (NA-UnetG) model is proposed, which improves the prediction accuracy of the generator and, in turn, the accuracy of video abnormal event detection. Experiments on the public datasets CUHK Avenue and UCSD Ped2 validate the proposed method; the results show that its AUC surpasses that of the compared methods, reaching 83.4% and 96.3%, respectively.
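The abstract names a non-local attention U-Net generator (NA-UnetG) but gives no architectural details. Below is a minimal PyTorch sketch of a standard embedded-Gaussian non-local (self-attention) block of the kind that could be inserted into a U-Net generator; the channel sizes, placement, and the class name NonLocalBlock2D are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a 2D non-local (self-attention) block; placement inside the
# U-Net generator and all sizes here are assumptions, not the NA-UnetG design.
import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    """Embedded-Gaussian non-local block for 2D feature maps."""
    def __init__(self, in_channels: int):
        super().__init__()
        inter = max(in_channels // 2, 1)
        self.theta = nn.Conv2d(in_channels, inter, kernel_size=1)  # query
        self.phi = nn.Conv2d(in_channels, inter, kernel_size=1)    # key
        self.g = nn.Conv2d(in_channels, inter, kernel_size=1)      # value
        self.out = nn.Conv2d(inter, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # attention over all spatial positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection preserves U-Net features

# Example: drop the block into the bottleneck of a U-Net-style generator.
block = NonLocalBlock2D(in_channels=256)
feats = torch.randn(1, 256, 32, 32)
out = block(feats)  # same shape as the input
```

At test time, prediction-error-based scoring (for example, PSNR between the predicted and the actual frame) is the usual way such a predictor is turned into a frame-level anomaly score; the abstract does not spell out the exact scoring function, so that detail should likewise be read as an assumption.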
Authors: 孙奇 (SUN Qi), 吉根林 (JI Gen-lin), 张杰 (ZHANG Jie) (School of Computer and Electronic Information / School of Artificial Intelligence, Nanjing Normal University, Nanjing 210023, China)
Source: Computer Science (《计算机科学》, CSCD, Peking University Core Journal), 2022, Issue 8, pp. 172-177 (6 pages)
Funding: National Natural Science Foundation of China (41971343).
Keywords: Video anomaly event detection; Generative adversarial network; Video prediction; Non-local attention mechanism; Deep learning