Abstract
Current robot grasp detection methods predict the grasp angle in overly coarse, discrete steps, so the executed grasp may deviate from the intended angle by a large margin, reducing detection accuracy and even causing the grasp to fail. To address this problem, an improved real-time robot grasp detection method based on the YOLOv5 neural network model is proposed. Building on the single-stage object detection model YOLOv5, the method extracts the grasp rectangle coordinates and the grasp angle. The grasp angle is divided into finer classes, and a circular smooth label is introduced to accommodate the periodicity of the angle and to establish relationships between adjacent angle classes; in addition, the YOLOv5 detection head is decoupled and the loss function is optimized to improve detection accuracy. Experiments on the Cornell grasping dataset show that, compared with classical grasp detection methods, the proposed algorithm predicts the grasp angle more accurately and improves grasp detection accuracy, reaching 97.5% accuracy at a detection speed of 71 FPS.
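The circular smooth label mentioned above replaces the one-hot angle class with a window function wrapped around the angle axis, so that neighbouring bins, including those that wrap across the period boundary, receive partial credit. The following is a minimal Python sketch of that idea, assuming 180 one-degree angle bins and a Gaussian window; the bin count, window radius, and window shape are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def circular_smooth_label(angle_bin, num_bins=180, radius=6):
    """Build a circular smooth label (CSL) for one ground-truth angle bin.

    A Gaussian window is centred on the true bin and wrapped around the
    circular angle axis, so bins near 0 and num_bins - 1 are treated as
    neighbours instead of opposite extremes.
    (Illustrative sketch: num_bins, radius and the Gaussian window are
    assumptions, not the settings used in the paper.)
    """
    bins = np.arange(num_bins)
    # circular distance between every bin and the ground-truth bin
    diff = np.abs(bins - angle_bin)
    circ_dist = np.minimum(diff, num_bins - diff)
    # Gaussian window; bins outside the window radius get label 0
    label = np.exp(-(circ_dist ** 2) / (2 * (radius / 3.0) ** 2))
    label[circ_dist > radius] = 0.0
    return label

# example: for ground-truth bin 2, bins 176..179 wrap around and still
# receive non-zero soft labels, encoding the periodicity of the angle
csl = circular_smooth_label(2)
print(np.round(csl[:6], 3))    # bins 0..5 near the ground truth
print(np.round(csl[176:], 3))  # wrapped neighbours stay non-zero
```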
Authors
CHEN Chunchao; SUN Donghong (School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo, Henan 454000, China)
Source
Computer Engineering and Applications (《计算机工程与应用》)
CSCD
Peking University Core Journal
2024, No. 6, pp. 172-179 (8 pages)
Funding
Key Scientific Research Project of Higher Education Institutions of Henan Province (19A4600004)
Doctoral Fund of Henan Polytechnic University (B2019-48)