
A Feature Fusion Deep Learning Framework for Video-Based Crash Detection Systems

Cited by: 2
Abstract: Instantaneous crash detection is of great importance for saving lives and improving the level of traffic incident management. Current mainstream video-based crash detection algorithms struggle to satisfy high accuracy and low computational cost simultaneously, which limits their engineering application. With the goal of achieving a balance between detection speed and accuracy under limited computing resources, this paper proposes a feature fusion deep learning framework for video-based crash detection systems. The feature fusion is realized in two steps. First, a residual neural network (ResNet50) with a novel crash attention module is used to extract crash-related appearance features from complex traffic scenes. Then, the extracted appearance features are transmitted to a feature fusion model, Conv-LSTM (convolutional long short-term memory), to fine-tune the appearance features and capture motion features of crashes. The trained model achieves an accuracy of 88.89% on the video test set and an acceptable detection speed (frames per second (FPS) > 30). The crash attention module improves the model's ability to capture localized, crash-related appearance features compared with conventional convolutional neural networks, and the Conv-LSTM module retains appearance information better than a conventional LSTM when extracting motion features, so the proposed model achieves higher accuracy than traditional motion-based detection methods. Furthermore, it runs faster than other feature fusion-based models (e.g., the C3D model) owing to its lower computational complexity. The results demonstrate that the proposed model can achieve a crucial balance between the accuracy and speed of crash detection under limited computing resources, and it is expected to be applied in practice.
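The abstract does not detail the internals of the crash attention module. As an illustration only, the sketch below shows a common squeeze-and-excite style of channel attention in NumPy: a feature map from the backbone is globally pooled, passed through a small bottleneck, and the resulting per-channel weights rescale the map before it would be handed to the Conv-LSTM. The function names, the SE-style design, and the bottleneck ratio are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Illustrative SE-style channel attention (an assumption, not the
    paper's exact module): squeeze by global average pooling, excite
    through a two-layer bottleneck, then rescale each channel.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)   # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)       # ReLU bottleneck
    scale = sigmoid(w2 @ hidden)                 # (C,) weights in (0, 1)
    return feat * scale[:, None, None]           # reweight channels

# Toy feature map standing in for a ResNet50 activation.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the gate values lie strictly in (0, 1), attended channels are attenuated rather than amplified; in the paper's pipeline, a sequence of such reweighted maps would then be fed frame by frame into the Conv-LSTM.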
Authors: WANG Chen; ZHOU Wei; ZHANG Shi-xiang (School of Transportation, Southeast University, Nanjing 211189, China; China Design Group Co., LTD, Nanjing 210014, China)
Source: Journal of Transportation Engineering and Information, 2022, No. 1, pp. 31-38
Funding: International Cooperation Project of the National Key R&D Program, Ministry of Science and Technology (2018YFE0102700); Jiangsu Provincial Transportation Science Research Program (2019Z02); National Natural Science Foundation of China (71971061).
Keywords: intelligent transportation; video-based crash detection algorithm; ResNet; crash attention module; Conv-LSTM