Journal Articles
2 articles found
1. Facilitating Condition for E-learning Adoption--Case of Ugandan Universities
Authors: Kasse John Paul, Moya Musa, Annette K. Nansubuga. 《通讯和计算机(中英文版)》, 2015, Issue 5, pp. 244-249 (6 pages)
Keywords: e-learning, Uganda, universities, educational process, case study, facilitating conditions
2. A Recurrent Neural Network for Multimodal Anomaly Detection by Using Spatio-Temporal Audio-Visual Data
Authors: Sameema Tariq, Ata-Ur-Rehman, Maria Abubakar, Waseem Iqbal, Hatoon S. Alsagri, Yousef A. Alduraywish, Haya Abdullah AAlhakbani. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 11, pp. 2493-2515 (23 pages)
Abstract: In video surveillance, anomaly detection requires training machine learning models on spatio-temporal video sequences. However, video-only data is sometimes insufficient to accurately detect all abnormal activities. We therefore propose a novel audio-visual spatio-temporal autoencoder designed specifically to detect anomalies in video surveillance by utilizing audio data along with video data. This paper presents a competitive multi-modal recurrent neural network approach to anomaly detection that combines separate spatial and temporal autoencoders to leverage both spatial and temporal features in audio-visual data. The proposed model is trained to produce low reconstruction error for normal data and high error for abnormal data, effectively distinguishing between the two and assigning an anomaly score. Training is conducted on normal datasets, while testing is performed on both normal and anomalous datasets. The anomaly scores from the models are combined using a late fusion technique, and a deep dense-layer model is trained to produce decisive scores indicating whether a sequence is normal or anomalous. The model's performance is evaluated on the University of California, San Diego Pedestrian 2 (UCSD PED 2), University of Minnesota (UMN), and Tampere University of Technology (TUT) Rare Sound Events datasets using six evaluation metrics. Compared with state-of-the-art methods, it achieves a high Area Under Curve (AUC) and a low Equal Error Rate (EER): an AUC of 93.1 and an EER of 8.1 on the UCSD PED 2 dataset, and an AUC of 94.9 and an EER of 5.9 on the UMN dataset. The evaluations demonstrate that the joint results from the combined audio-visual model outperform those from the separate models, highlighting the competitive advantage of the proposed multi-modal approach.
Keywords: acoustic-visual anomaly detection, sequence-to-sequence autoencoder, reconstruction error, late fusion, regularity score
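The scoring pipeline described in the abstract (per-modality reconstruction errors mapped to regularity scores, then combined by late fusion) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the min-max regularity mapping is a common convention in autoencoder-based surveillance work, the weighted average stands in for the paper's learned dense-layer fusion, and the 0.5 threshold and all error values are invented for the example.

```python
import numpy as np

def regularity_score(errors):
    """Map per-frame reconstruction errors to [0, 1] regularity scores.

    High reconstruction error -> low regularity (frame is likely anomalous).
    Uses min-max normalization over the sequence.
    """
    e = np.asarray(errors, dtype=float)
    e_min, e_max = e.min(), e.max()
    return 1.0 - (e - e_min) / (e_max - e_min)

def late_fusion(video_scores, audio_scores, w_video=0.5):
    """Combine per-modality regularity scores.

    A weighted average stands in here for the paper's trained
    dense-layer fusion model (an assumption for illustration).
    """
    v = np.asarray(video_scores, dtype=float)
    a = np.asarray(audio_scores, dtype=float)
    return w_video * v + (1.0 - w_video) * a

# Toy per-frame reconstruction errors from two separately trained
# autoencoders; frame 2 reconstructs poorly in both modalities.
video_err = [0.10, 0.12, 0.95, 0.11]
audio_err = [0.20, 0.18, 0.80, 0.22]

fused = late_fusion(regularity_score(video_err), regularity_score(audio_err))
is_anomaly = fused < 0.5  # simple threshold on the fused regularity score
```

Because both autoencoders are trained only on normal data, anomalous frames yield high reconstruction error in at least one modality; fusing the two scores lets a strong audio cue compensate for an ambiguous visual one, which is the advantage the abstract claims for the joint model.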