
A visual detection and grasping method based on deep learning

Cited by: 4
Abstract To address the problems of existing robotic grasping systems, including high hardware requirements, difficulty in adapting to different objects, and large harmful torques generated during grasping, a deep learning based visual detection and grasping method is proposed. A channel attention mechanism is used to improve YOLO-V3, enhancing the network's ability to extract image features and improving target detection in complex environments; the average recognition rate increases by 0.32% compared with the original network. To address the discreteness of estimated pose angles, a minimum area bounding rectangle (MABR) algorithm embedded in a Visual Geometry Group-16 (VGG-16) backbone network is proposed for grasp pose estimation and angle optimization. The average error between the optimized grasping angle and the actual angle of the target is less than 2.47°, which greatly reduces the additional harmful torque applied to the object by a two-finger gripper during grasping. A visual grasping system is built with a UR5 robotic arm, a pneumatic two-finger gripper, a Realsense D435 camera, and an ATI-Mini45 six-axis force/torque sensor. Experiments show that the proposed method can effectively grasp and classify different objects with low hardware requirements, and reduces the harmful torque by about 75%, thereby lessening damage to the grasped objects and showing good application prospects.
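The abstract describes two components that can be illustrated with short sketches. First, a channel attention block of the kind commonly inserted into YOLO-V3 feature extractors; the paper's exact module structure is not given here, so the PyTorch module below (the name ChannelAttention, the squeeze-and-excitation form, and the reduction ratio 16 are all assumptions) is a minimal sketch rather than the authors' implementation.

```python
# Hypothetical squeeze-and-excitation style channel attention block, shown as
# one plausible form of the attention added to YOLO-V3 feature maps.
# Module name and reduction ratio are assumptions, not the paper's code.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight feature channels

# Example: reweight a 256-channel YOLO-V3-sized feature map
feat = torch.randn(1, 256, 52, 52)
print(ChannelAttention(256)(feat).shape)         # torch.Size([1, 256, 52, 52])
```

Second, the MABR idea of replacing discrete angle bins with a continuous grasp angle can be approximated with OpenCV's minAreaRect applied to a detected object mask; the mask source, function name, and angle heuristic below are assumptions, not the paper's pipeline.

```python
# Hypothetical sketch of minimum area bounding rectangle (MABR) based grasp
# angle refinement: given a binary object mask from an upstream detection or
# segmentation stage, derive a continuous grasp angle instead of a discrete bin.
import cv2
import numpy as np

def mabr_grasp_pose(mask: np.ndarray):
    """Return (center, angle_deg) of the minimum area rectangle of a mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    # Simplified heuristic: grasp across the shorter side. Note that OpenCV's
    # angle convention differs across versions, so this is only illustrative.
    if w > h:
        angle += 90.0
    return (cx, cy), angle

# Example: a synthetic tilted rectangle as a stand-in object mask
mask = np.zeros((200, 200), np.uint8)
box = cv2.boxPoints(((100, 100), (120, 40), 30.0)).astype(np.int32)
cv2.fillPoly(mask, [box], 1)
print(mabr_grasp_pose(mask))   # center near (100, 100), angle near the tilt
```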
Authors SUN Xiantao; CHENG Wei; CHEN Wenjie; FANG Xiaohan; CHEN Weihai; YANG Yinming (School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China)
Source Journal of Beijing University of Aeronautics and Astronautics (EI, CAS, CSCD, Peking University Core), 2023, Issue 10: 2635-2644 (10 pages)
Fund National Natural Science Foundation of China (52005001).
Keywords deep learning; neural network; object detection; pose estimation; robotic grasping