Abstract
To address the problem that traditional video anomaly detection methods extract multi-scale features incompletely across different scenes, two methods are proposed: a UNet3+-based generative adversarial network method (U3P²) for simple scenes and a UNet++-based generative adversarial network method (UP³) for complex scenes. Both methods generate predictions for consecutive input video frames, incorporating multiple loss functions and an optical flow model to learn the appearance and motion information of the frames, and performance is evaluated with the area under the curve (AUC). With only 6.3 M parameters, U3P² improves the AUC on the Ped2 dataset by about 0.6%, while UP³ improves the AUC on the Avenue dataset by about 0.8%, verifying that the two methods can effectively handle anomaly detection tasks in different scenes.
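Below is a minimal sketch of the prediction-based pipeline the abstract describes: a U-Net-style generator predicts the next frame from several consecutive input frames, appearance losses are computed on the prediction, and frame-level PSNR scores are compared with ground-truth labels to obtain the AUC. The generator UNetLikeGenerator, the loss terms, and the PSNR-based scoring are illustrative assumptions and are not the paper's actual U3P²/UP³ architectures; the optical-flow (motion) loss and the adversarial loss are only indicated in comments.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score


class UNetLikeGenerator(nn.Module):
    """Toy encoder-decoder standing in for the U3P^2 / UP^3 generators (assumption)."""
    def __init__(self, in_frames=4, channels=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_frames * channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):              # x: (B, T*C, H, W), T stacked input frames
        return self.dec(self.enc(x))   # predicted (t+1)-th frame, (B, C, H, W)


def psnr_score(pred, target, eps=1e-8):
    """Frame-level normality score: higher PSNR means closer to a normal frame."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(1.0 / (mse + eps))


if __name__ == "__main__":
    gen = UNetLikeGenerator()
    clip = torch.rand(1, 4 * 3, 64, 64)     # 4 consecutive RGB frames, stacked
    target = torch.rand(1, 3, 64, 64)       # ground-truth (t+1)-th frame

    # Appearance losses on the prediction (intensity + image-gradient terms).
    # The optical-flow (motion) loss and the adversarial loss of the full
    # method would be added to this objective during GAN training.
    pred = gen(clip)
    l_int = F.mse_loss(pred, target)
    l_grad = F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                       target[..., :, 1:] - target[..., :, :-1])
    loss = l_int + l_grad

    # At test time, per-frame PSNR scores over the whole test set are compared
    # with frame-level labels (0 = normal, 1 = anomalous) to compute the AUC.
    # Toy two-frame example, only to show the call signature:
    scores = [psnr_score(pred, target).item(), psnr_score(target, target).item()]
    labels = [1, 0]                          # lower PSNR is treated as anomalous
    print("loss:", loss.item())
    print("toy AUC:", roc_auc_score(labels, [-s for s in scores]))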
Authors
CHEN Jing-xia, LIN Wen-tao, LONG Min-xiang, ZHANG Peng-wei
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China
Source
Computer Engineering and Design (《计算机工程与设计》), Peking University Core Journal
2024, No. 3, pp. 777-784 (8 pages)
Funding
National Natural Science Foundation of China (61806118)
Scientific Research Start-up Fund of Shaanxi University of Science and Technology (2020BJ-30).
Keywords
generative adversarial networks
video anomaly detection
U-Net
full-scale skip connection
dense skip connection
optical flow models
multi-scale feature extraction