Funding: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RG23148).
Abstract: In video surveillance, anomaly detection requires training machine learning models on spatio-temporal video sequences. However, video-only data is sometimes insufficient to detect all abnormal activities accurately. We therefore propose a novel audio-visual spatio-temporal autoencoder designed specifically to detect anomalies in video surveillance by utilizing audio data alongside video data. This paper presents a competitive multi-modal recurrent neural network approach to anomaly detection that combines separate spatial and temporal autoencoders to leverage both spatial and temporal features in audio-visual data. The proposed model is trained to produce low reconstruction error for normal data and high reconstruction error for abnormal data, effectively distinguishing between the two and assigning an anomaly score. Training is conducted on normal data only, while testing is performed on both normal and anomalous data. The anomaly scores from the individual models are combined using a late fusion technique, and a deep dense-layer model is trained to produce a decisive score indicating whether a sequence is normal or anomalous. The model's performance is evaluated on the University of California, San Diego Pedestrian 2 (UCSD PED 2), University of Minnesota (UMN), and Tampere University of Technology (TUT) Rare Sound Events datasets using six evaluation metrics. Compared with state-of-the-art methods, it achieves a high Area Under the Curve (AUC) and a low Equal Error Rate (EER): an AUC of 93.1 and an EER of 8.1 on the UCSD PED 2 dataset, and an AUC of 94.9 and an EER of 5.9 on the UMN dataset. The evaluations demonstrate that the joint results of the combined audio-visual model outperform those of the separate models, highlighting the competitive advantage of the proposed multi-modal approach.
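To make the two mechanisms named in the abstract concrete, the PyTorch sketch below illustrates (1) reconstruction-error anomaly scoring with a spatio-temporal autoencoder and (2) late fusion of per-modality scores by a small dense model. It is a minimal illustration under stated assumptions, not the paper's architecture: the single grayscale video branch, the 64x64 frame size, the LSTM bottleneck, and all layer widths are choices made for this sketch.

```python
import torch
import torch.nn as nn

class STAutoencoder(nn.Module):
    """Per-frame convolutional encoder/decoder with an LSTM bottleneck over
    time. All layer widths and the 64x64 input size are assumptions made for
    this sketch, not values taken from the paper."""
    def __init__(self, channels=1, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(                      # 64x64 -> 16x16
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(32 * 16 * 16, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 32 * 16 * 16)
        self.dec = nn.Sequential(                      # 16x16 -> 64x64
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (B, T, C, 64, 64)
        b, t = x.shape[:2]
        z = self.enc(x.flatten(0, 1)).view(b, t, -1)   # spatial features
        h, _ = self.lstm(z)                            # temporal modelling
        r = self.dec(self.proj(h).flatten(0, 1))       # reconstruct frames
        return r.view(b, t, *x.shape[2:])

def anomaly_score(model, clip):
    """Mean reconstruction error of a clip: low for normal sequences, high
    for anomalous ones, following the scoring idea in the abstract."""
    with torch.no_grad():
        return torch.mean((model(clip) - clip) ** 2).item()

class LateFusion(nn.Module):
    """Small dense network fusing the video- and audio-branch anomaly scores
    into one decisive score; the widths here are illustrative."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, scores):                         # scores: (B, 2)
        return self.net(scores)

if __name__ == "__main__":
    ae = STAutoencoder()
    clip = torch.rand(1, 8, 1, 64, 64)                 # one 8-frame clip
    s_video = anomaly_score(ae, clip)                  # video-branch score
    s_audio = 0.05                                     # placeholder audio score
    fused = LateFusion()(torch.tensor([[s_video, s_audio]]))
    print(f"video score {s_video:.4f}, fused score {fused.item():.4f}")
```

In a training setup matching the abstract, the autoencoder branches would be fit on normal clips only (minimizing reconstruction loss), and the fusion network would then be trained on the resulting pairs of branch scores to emit the final normal-versus-anomalous decision.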