
Improved algorithm of GDT-YOLOV3 image target detection (cited by: 10)
Abstract  To address the low accuracy and slow speed of object detection in video images, this paper proposes an improved detection method based on YOLOV3. GIoU Loss is introduced to handle the non-overlapping regions that the original IoU cannot optimize directly. Drawing on the idea of densely connected networks, the three residual blocks in YOLOV3 are replaced with three dense blocks, and max pooling is combined with them to strengthen feature transfer between the dense blocks; after the IoU term and the original network's connection structure are replaced and the new network structure is designed, the number of parameters is reduced and feature reuse and fusion are enhanced, yielding better results than the original method. Experimental results show that, compared with the original algorithm, the improved GDT-YOLOV3 algorithm performs well in both simple and complex traffic scenes: the proposed algorithm reaches an average detection accuracy of 92.77% at 25.3 f/s, which basically meets real-time requirements. In terms of detection accuracy, the improved GDT-YOLOV3 also outperforms SSD512, YOLOV2, and YOLOV3.
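The GIoU Loss mentioned in the abstract replaces the plain IoU term so that box pairs with no overlap still produce a useful gradient: the penalty grows with the empty area of the smallest box enclosing both boxes. A minimal sketch of the computation is given below; it follows the standard GIoU definition rather than the paper's exact implementation, and the (x1, y1, x2, y2) box format is an assumption made for illustration.

def giou_loss(box_a, box_b):
    # GIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2).
    # Standard definition: GIoU = IoU - (|C| - |A union B|) / |C|, where C is
    # the smallest box enclosing both; the loss is 1 - GIoU.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union of the two boxes.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest enclosing box C; its unused area penalizes non-overlapping pairs.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    giou = (iou - (c_area - union) / c_area) if c_area > 0 else iou
    return 1.0 - giou

# Two disjoint boxes: plain IoU is 0 in both cases, but the GIoU loss still
# distinguishes a near miss from a far one.
print(giou_loss((0, 0, 2, 2), (3, 3, 5, 5)))      # about 1.68
print(giou_loss((0, 0, 2, 2), (10, 10, 12, 12)))  # about 1.94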
Authors  TANG Yue; WU Ge; PIAO Yan (School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China)
Source  Chinese Journal of Liquid Crystals and Displays (《液晶与显示》), 2020, No. 8, pp. 852-860 (9 pages); indexed in CAS, CSCD, and the Peking University Core Journals list
Funding  National Natural Science Foundation of China (No. 60977011, No. 20180623039TC, No. 20180201091GX)
Keywords  target detection; convolutional neural network; YOLOV3; densely connected network (DenseNet); K-means
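The densely connected network keyword points at the structural change described in the abstract: YOLOV3's residual blocks are replaced by dense blocks, with max pooling strengthening feature transfer between them. Below is a minimal PyTorch-style sketch of such a block, assuming a standard DenseNet-style layer; the growth rate, number of layers, and use of LeakyReLU are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    # One BN -> LeakyReLU -> 3x3 conv step whose output is concatenated with
    # its input, so every later layer sees the features of all earlier ones.
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.LeakyReLU(0.1),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.cat([x, self.conv(x)], dim=1)  # feature reuse via concatenation

class DenseBlock(nn.Module):
    # A stack of dense layers followed by a max-pooling transition that
    # downsamples before the next block, echoing the abstract's use of
    # max pooling between dense blocks.
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.transition = nn.MaxPool2d(kernel_size=2, stride=2)
        self.out_channels = channels

    def forward(self, x):
        return self.transition(self.block(x))

# Example: a 64-channel feature map grows to 64 + 4*32 = 192 channels and is
# downsampled from 52x52 to 26x26 by the transition.
x = torch.randn(1, 64, 52, 52)
print(DenseBlock(64)(x).shape)  # torch.Size([1, 192, 26, 26])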
