
YOLOv5 object detection algorithm with visible-infrared feature interaction and fusion
Abstract: Object detection is a key technology in autonomous driving systems, but object detection algorithms based on RGB images often perform poorly in scenarios such as nighttime and severe weather. Object detection algorithms that fuse visible and infrared information have therefore attracted considerable research attention. However, existing methods usually have complex fusion structures and ignore the importance of information exchange between modalities. In this paper, we take YOLOv5 as the basic framework and propose an object detection algorithm with visible-infrared feature interaction and fusion. It uses a new backbone network, CSPDarknet53-F, which employs a dual-branch structure to extract visible and infrared features separately, and then reconstructs the information components and proportions of each modality through feature interaction modules, improving information exchange between modalities so that visible and infrared features can be fused more fully. Extensive experiments on the FLIR-aligned and M3FD datasets show that the CSPDarknet53-F used in our algorithm is superior at synergistically utilizing visible and infrared information, improving the model's detection accuracy while remaining robust to sudden changes in illumination.
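The abstract describes the interaction step only at a high level (each modality's "information components and proportions" are reconstructed before fusion); the module's actual design is not given here. As a rough illustration of that idea, the following NumPy sketch reweights each modality's channels using channel statistics pooled from the other modality and then fuses the two streams by addition. All function names and the specific reweighting scheme are assumptions for illustration, not the authors' design:

```python
import numpy as np

def global_avg_pool(x):
    # x: (C, H, W) feature map -> (C,) per-channel descriptor
    return x.mean(axis=(1, 2))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def interact_and_fuse(feat_vis, feat_ir):
    """Hypothetical cross-modal interaction: each branch's channel
    proportions are rescaled by weights derived from the OTHER modality's
    channel statistics, then the streams are fused element-wise."""
    w_vis = softmax(global_avg_pool(feat_ir))  # infrared guides visible
    w_ir = softmax(global_avg_pool(feat_vis))  # visible guides infrared
    c = feat_vis.shape[0]
    # scale weights by C so the mean channel weight stays at 1
    vis_rw = feat_vis * (c * w_vis)[:, None, None]
    ir_rw = feat_ir * (c * w_ir)[:, None, None]
    return vis_rw + ir_rw

# toy features standing in for one stage of a dual-branch backbone
rng = np.random.default_rng(0)
vis = rng.standard_normal((8, 16, 16))
ir = rng.standard_normal((8, 16, 16))
fused = interact_and_fuse(vis, ir)
print(fused.shape)  # (8, 16, 16)
```

In the paper, such an exchange would presumably happen inside the CSPDarknet53-F backbone at each stage before the fused features feed the YOLOv5 neck and head; the sketch above only conveys the cross-modal reweighting intuition.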
Authors: XIE Yu-min; ZHANG Lang-wen; YU Xiao-yuan; XIE Wei (College of Automation Science and Technology, South China University of Technology, Guangzhou, Guangdong 510640, China; Yueyang Goaland Energy Conservation Equipment Manufacturing Co., Ltd., Yueyang, Hunan 414000, China; College of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, Guangdong 510006, China)
Source: Control Theory & Applications (《控制理论与应用》), 2024, No. 5, pp. 914-922 (9 pages). Indexed in EI, CAS, CSCD, and the Peking University Core Journals list.
Funding: Supported by the National Natural Science Foundation of China (61803161), the Natural Science Foundation of Guangdong Province (2022A1515011887, 2023A1515030119), the Qingyuan Science and Technology Program (2023DZX006), the Foshan Key-Area Science and Technology Research Program (2020001006812), the Shunde District Core Research Program (2030218000174), the Guangzhou Science and Technology Program (202102020379), and the Jiangmen Basic and Applied Basic Research Program (2020030103080008999).
Keywords: visible images; infrared images; feature fusion; interaction; YOLOv5
