
Improved YOLOv7 Algorithm for Multi-object Detection Method of Human-robot Collaboration Assembly
Abstract: Human-robot collaborative assembly environments are complex and changeable, assembly parts differ greatly in scale, and some parts are highly similar to one another. To ensure that the robot grasps assembly parts accurately during human-robot collaborative assembly, an improved YOLOv7 model is proposed to improve multi-part object detection in assembly scenes. First, ODConv (omni-dimensional dynamic convolution) replaces the convolutional layers in the YOLOv7 backbone network, allowing the convolution kernel weights to be adjusted adaptively so that features of assembly parts of different shapes and sizes can be extracted. Second, the SimAM (simple, parameter-free attention module) is introduced into the YOLOv7 backbone to reduce the influence of the complex and variable assembly background on part detection accuracy. Finally, Efficient-IoU replaces the original Complete-IoU loss to accelerate convergence and to reduce the impact of high similarity among some assembly parts on detection accuracy. Experimental results show that the model achieves an average precision of 93.4%, and the improved network outperforms the original network and other object detection algorithms. The improved YOLOv7 algorithm maintains high accuracy while delivering a high FPS with relatively low parameter count and computational load, meeting the real-time object detection requirements of dynamic human-robot collaborative assembly scenarios.
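The abstract's final modification swaps Complete-IoU for Efficient-IoU, which replaces CIoU's aspect-ratio penalty with direct width and height penalties so the regression converges faster. A minimal sketch of such an EIoU loss for a single box pair is given below; the function name, box format (x1, y1, x2, y2), and `eps` stabilizer are illustrative assumptions, not the authors' implementation.

```python
def eiou_loss(box_a, box_b, eps=1e-7):
    """Hedged sketch of an Efficient-IoU loss for two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union -> IoU
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)

    # Smallest enclosing box: width, height, squared diagonal
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch

    # Normalized squared distance between box centers (as in DIoU/CIoU)
    dx = (ax1 + ax2) / 2 - (bx1 + bx2) / 2
    dy = (ay1 + ay2) / 2 - (by1 + by2) / 2
    dist = (dx * dx + dy * dy) / (c2 + eps)

    # EIoU's direct width/height penalties, replacing CIoU's
    # aspect-ratio term
    dw = ((ax2 - ax1) - (bx2 - bx1)) ** 2 / (cw * cw + eps)
    dh = ((ay2 - ay1) - (by2 - by1)) ** 2 / (ch * ch + eps)

    return 1.0 - iou + dist + dw + dh
```

For identical boxes the loss is essentially zero; for disjoint boxes the IoU term alone contributes 1 and the distance term pushes the loss higher, which is what drives separated predictions toward the target.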
Authors: HUI Jizhuang; WANG Jinhao; ZHOU Tao; ZHANG Yaqian; DING Kai (School of Construction Machinery, Chang'an University, Xi'an 710064, China)
Source: Mechanical Science and Technology for Aerospace Engineering (CSCD, Peking University Core Journal), 2024, No. 8, pp. 1418-1426 (9 pages)
Funding: China Postdoctoral Science Foundation (2022T150073); Shaanxi Qinchuangyuan "Scientist + Engineer" Team Construction Project (2022KXJ-150)
Keywords: human-robot collaborative assembly; YOLOv7; attention mechanism; E-IoU; assembly part detection; multi-object detection