
Dense target detection based on deep information fusion (Cited by: 1)
Abstract: To address the low accuracy of dense pedestrian detection, YOLOv4-SD, a dense target detection method based on deep information fusion, is proposed. The method fuses single scale Retinex (SSR) with the target detection algorithm to enhance the quality of the input image and bring out more of the informative elements it contains. The feature fusion layer of the YOLOv4 algorithm is also improved: its network structure is deeply optimized so that the original image features are used more fully. Comparative experiments on VOC 2012 and other datasets show that, while maintaining detection speed, the algorithm improves the average detection precision by 7.7% and the intersection over union (IoU) by 5.2. For pedestrian targets in the datasets that have few pixels near the image edges or that overlap heavily, the YOLOv4-SD algorithm can detect the specific locations of these difficult targets fairly accurately.
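
The abstract names single scale Retinex (SSR) as the enhancement step fused with the detector but gives no implementation details. The following is a minimal Python/OpenCV sketch of classic SSR preprocessing, assuming the enhanced frame is then passed on to the detector; the function name, the sigma value, and the file paths are illustrative and not taken from the paper.

    import cv2
    import numpy as np

    def single_scale_retinex(image, sigma=80.0):
        """Classic single scale Retinex (SSR).

        The reflectance estimate is R = log(I) - log(I * G_sigma), where
        G_sigma is a Gaussian surround function and * denotes convolution.
        """
        img = image.astype(np.float32) + 1.0                  # avoid log(0)
        illumination = cv2.GaussianBlur(img, (0, 0), sigma)   # smooth illumination estimate
        reflectance = np.log(img) - np.log(illumination)      # log-domain reflectance
        # Stretch the result back to the displayable 0-255 range before detection.
        reflectance = cv2.normalize(reflectance, None, 0, 255, cv2.NORM_MINMAX)
        return reflectance.astype(np.uint8)

    if __name__ == "__main__":
        # Hypothetical paths; replace with real images from the dataset.
        frame = cv2.imread("crowd.jpg")
        enhanced = single_scale_retinex(frame)
        cv2.imwrite("crowd_ssr.jpg", enhanced)
        # 'enhanced' would then be resized and fed to the YOLOv4-based detector.

A larger sigma preserves more of the global contrast, while a smaller one emphasizes local detail; the paper's metadata does not state which surround scale the authors use.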
Authors: WANG Jianghao (王建浩), HU Ziyu (呼子宇), ZHANG Hexiang (张翮翔), DAI Yan (代言), HAO Ruoxin (郝若欣), GAO Zehang (高泽航) (School of Electrical Engineering, Yanshan University, Qinhuangdao 066004)
Source: Chinese High Technology Letters (《高技术通讯》), CAS, 2022, Issue 9, pp. 914-921 (8 pages)
Funding: National Natural Science Foundation of China (62003296); Natural Science Foundation of Hebei Province (F2016203249)
Keywords: information fusion; image processing; target detection; deep learning; data clustering