Abstract
Object texture features tend to degrade the accuracy of grasp detection, the original default anchor boxes do not match the scale of the targets to be grasped, and detection of small objects is poor. To address these problems, this paper makes the following improvements. First, the original object images are preprocessed so that only the contour information of the objects is retained; K-Means clustering is then applied to obtain anchor box scales suited to the detected targets, and an improved grasp detection model is built on the YOLOv4 network. Finally, the 19×19 detection head of YOLOv4 used for large objects, together with its adjacent convolution and pooling layers, is removed to lower system complexity and reduce the number of parameters, and the features of layer 11 and layer 109 are fused to obtain a 152×152 feature map that better extracts small-object features. The original images and the contour-only images are fed into the original network and several improved variants to compare detection performance. Experimental results show that the improved network achieves an average grasp detection success rate of 81.5%, which is 4.3% higher than the original YOLOv4, effectively improving grasp detection accuracy and strengthening the ability to detect small objects.
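To make the anchor-clustering step concrete, the following is a minimal sketch (not the authors' code) of K-Means anchor clustering as commonly used for YOLO-style detectors: ground-truth box widths and heights are clustered with the distance 1 − IoU, and the resulting centroids serve as the anchor scales. The array names, the IoU-based distance, and the random example data are assumptions for illustration; the actual widths/heights would come from the grasp-dataset annotations scaled to the network input resolution.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes and centroids, comparing (w, h) pairs aligned at the origin."""
    inter_w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs with distance 1 - IoU; return centroids sorted by area."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new_centroids = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids[np.argsort(centroids[:, 0] * centroids[:, 1])]

# Hypothetical example with random box sizes standing in for dataset annotations.
boxes = np.abs(np.random.default_rng(1).normal(loc=[60, 40],
                                               scale=[30, 20],
                                               size=(500, 2))) + 1
print(kmeans_anchors(boxes, k=6))
```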
Authors
ZHOU Hai-ming; LEI Zhi-yong (Electronic Information Engineering, Xi’an Technological University, Xi’an 710021, China)
Source
《自动化与仪表》
2022, No. 2, pp. 59-63, 69 (6 pages in total)
Automation & Instrumentation