
Research on Pedestrian Detection in Complex Scenarios Based on YOLOv7
Abstract: YOLOv7 is currently the fastest and most accurate of the mainstream object detection models. However, when it is applied to pedestrian targets in complex scenes, the extracted features contain a large amount of redundant background information and fail to focus on the pedestrian regions, so false and missed detections still occur. To address this problem, an improved model based on YOLOv7 is proposed, in which a pyramid feature fusion strategy, Adaptively Spatial Feature Fusion (ASFF), is added to the convolutional layers. By filtering conflicting information in the spatial domain to suppress inconsistent features, the network's ability to fuse features of targets at different scales is improved. The improved model is trained and tested on the Human Crowd dataset. Experimental results show that the improved YOLOv7 algorithm achieves an average precision of 73.5%, an improvement of 10.6% over the original YOLOv7, and the detection speed is increased by 26.14%.
Authors: ZHANG Ziyi, DING Xuewen, LIU Wenyan, CAI Xinnan (School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin 300222, China; Tianjin Yunzhitong Technology Co., Ltd., Tianjin 300350, China)
Source: Computer & Network (《计算机与网络》), 2023, Issue 18, pp. 68-72 (5 pages)
Funding: Tianjin Science and Technology Commission Science and Technology Commissioner Project (20YDTPJC01110); Tianjin Higher Education Science and Technology Development Fund Program (20110710).
Keywords: YOLOv7; pedestrian detection; feature fusion network; average precision
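
Note: This record does not include source code. The following is a minimal, illustrative sketch (in PyTorch) of an ASFF-style fusion block of the kind described in the abstract: per-pixel fusion weights are predicted for each pyramid level and normalized with a softmax across levels, so that conflicting features from other levels can be suppressed spatially before the weighted sum. The class name ASFFBlock, the weight_dim parameter, and the assumption that all three input levels share the same channel count are hypothetical choices for this sketch, not details taken from the paper (the published ASFF design additionally rescales channels per level).

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFBlock(nn.Module):
    """Illustrative ASFF-style fusion of three pyramid levels (sketch, not the paper's code)."""

    def __init__(self, channels: int, weight_dim: int = 16):
        super().__init__()
        # One 1x1 convolution per level compresses its features into a small
        # "weight feature" used only for predicting the fusion weights.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, weight_dim, kernel_size=1) for _ in range(3)]
        )
        # Maps the concatenated weight features to one logit per level at every
        # spatial position; a softmax across levels yields the adaptive weights.
        self.weight_levels = nn.Conv2d(weight_dim * 3, 3, kernel_size=1)
        self.out_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: three maps [B, C, Hi, Wi]; fusion happens at the resolution of
        # the first map, so coarser levels are upsampled to match it.
        target_size = feats[0].shape[-2:]
        resized = [
            f if f.shape[-2:] == target_size
            else F.interpolate(f, size=target_size, mode="nearest")
            for f in feats
        ]
        weight_feats = torch.cat(
            [conv(f) for conv, f in zip(self.weight_convs, resized)], dim=1
        )
        weights = torch.softmax(self.weight_levels(weight_feats), dim=1)  # [B, 3, H, W]
        # Spatially adaptive weighted sum: positions where a level contributes
        # conflicting (e.g. background) information receive a low weight there.
        fused = sum(resized[i] * weights[:, i:i + 1] for i in range(3))
        return self.out_conv(fused)

# Example with made-up channel count and feature-map sizes:
feats = [torch.randn(1, 256, s, s) for s in (80, 40, 20)]
print(ASFFBlock(256)(feats).shape)  # torch.Size([1, 256, 80, 80])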