Abstract
Oblong holes are used in industrial production to improve assembly fault tolerance and increase the available assembly adjustment, but their lack of a simple analytical description significantly raises the difficulty of vision-based detection and location algorithms in industrial applications, limiting the use of oblong holes in automated assembly structures. This paper develops a robust, high-precision vision segmentation and location algorithm for oblong holes. The geometric features of the oblong hole are first analyzed, and it is characterized as a symmetric closed figure without a simple analytical expression. A vision segmentation and location algorithm based on a semantic segmentation network and oblong-hole region degeneration is then proposed. Combining deep learning with conventional image processing, the algorithm first builds a cascaded YOLO and fully convolutional network architecture that sequentially performs target detection, region-of-interest extraction, and semantic segmentation; it then uses a medial-axis-transform skeleton extraction method to complete the region-degeneration location of the oblong hole, effectively avoiding the influence of shape errors in the semantic segmentation output on location accuracy and ultimately yielding subpixel-accurate detection results. Experimental validation on a robotic automatic assembly system shows that the proposed algorithm is highly accurate and robust and therefore has broad prospects for industrial application.
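As a rough illustration of the cascaded architecture described above, the sketch below chains a YOLO detector with a fully convolutional segmentation network to obtain a binary mask of the oblong hole. The weight files (`oblong_hole_yolo.pt`, `oblong_hole_fcn.pt`), the model variants, and the preprocessing are illustrative assumptions rather than details given in the paper; both networks are presumed to have been fine-tuned on an oblong-hole dataset beforehand.

```python
# Hedged sketch of the two-stage detection/segmentation cascade; model choices,
# weight files, and preprocessing are assumptions, not values from the paper.
import numpy as np
import torch
from torchvision.models.segmentation import fcn_resnet50
from ultralytics import YOLO  # an off-the-shelf YOLO implementation (assumed)


def segment_oblong_hole(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of the oblong hole inside the detected ROI."""
    # Stage 1: YOLO proposes the region of interest containing the oblong hole.
    detector = YOLO("oblong_hole_yolo.pt")               # hypothetical fine-tuned weights
    boxes = detector(image_bgr)[0].boxes
    x1, y1, x2, y2 = boxes.xyxy[boxes.conf.argmax()].int().tolist()
    roi = image_bgr[y1:y2, x1:x2]

    # Stage 2: a fully convolutional network segments the hole inside the ROI.
    segmenter = fcn_resnet50(num_classes=2)              # background / oblong hole
    segmenter.load_state_dict(torch.load("oblong_hole_fcn.pt"))  # hypothetical weights
    segmenter.eval()
    # Simple scaling only; the paper's actual normalization is not specified here.
    tensor = torch.from_numpy(roi).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = segmenter(tensor)["out"]
    return logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)
```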
[Objective] Oblong holes are widely used across industries to improve assembly fault tolerance and adjustment capability. However, their complex geometric characteristics pose significant challenges for vision detection and location algorithms in industrial applications, limiting their use in automatic assembly processes.
[Methods] This research develops a high-precision, robust vision segmentation and location algorithm tailored to oblong holes. First, the geometric features of oblong holes, which are symmetric but lack a simple analytical description, are analyzed; this complexity renders traditional image-based methods ineffective for accurate localization. Detection and segmentation of oblong-hole features are then performed by a novel vision location algorithm that integrates deep learning with conventional image processing. Specifically, the algorithm employs a sequential framework of YOLO and fully convolutional networks: the YOLO network rapidly detects the region of interest in which the oblong hole is prominently featured, and a fully convolutional network subsequently performs semantic segmentation within that region. Afterward, a skeleton extraction method based on the medial axis transform is applied to precisely locate the oblong hole. This step effectively reduces the impact of shape errors from the semantic segmentation output and achieves subpixel accuracy. However, the medial axis transform may produce redundant branches owing to image artifacts, potentially leading to inaccuracies. To address this issue, principal component analysis is employed to approximate the center of the oblong hole and thereby minimize such errors. For further precision, a Hough-transform ellipse detection method identifies the central skeleton of the oblong hole, which can be interpreted both as a line segment and as a special (degenerate) ellipse; the center of this skeleton is the center of the oblong hole.
[Results] Experimental validation on a robotic automatic assembly system confirms the effectiveness of the proposed algorithm. Its robustness is further demonstrated by sampling images with camera hardware different from that used for the training dataset. In addition, the influence of surface features and oblong-hole shapes on detection performance is analyzed. The experiments show that the algorithm performs best on objects with nonreflective surfaces and that the shape of the oblong hole has minimal effect on accuracy. Although hardware variations can deform the segmentation output, the region-degeneration location algorithm based on the medial axis transform still locates the center accurately. The final location error is 1.05 pixels, which surpasses the accuracy obtained by directly computing the center of gravity of the segmented region. These results underscore the benefits of the algorithm under varying hardware and object conditions, demonstrating high accuracy and strong robustness.
[Conclusions] By merging deep learning with traditional image processing, the proposed approach effectively solves location tasks for diverse objects. Extracting highly nonlinear features through deep learning and then processing them with traditional image methods that incorporate prior geometric knowledge enhance the robustness and accuracy of the algorithm, making it suitable for practical production applications.
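The region-degeneration step described in [Methods] could look roughly like the following sketch: the segmented mask is reduced to its medial-axis skeleton, principal component analysis over the skeleton pixels gives a coarse center and is used here to prune spurious branches (an assumed heuristic, as the paper does not spell out the pruning rule), and a Hough ellipse fit on the pruned skeleton refines the center. All parameter values (the 1.5-pixel tolerance and the `hough_ellipse` arguments) are illustrative assumptions.

```python
# Hedged sketch of the medial-axis / PCA / Hough-ellipse center location step.
import numpy as np
from skimage.morphology import medial_axis
from skimage.transform import hough_ellipse


def locate_center(mask: np.ndarray) -> tuple[float, float]:
    """Estimate the oblong-hole center (x, y) from a binary segmentation mask."""
    # 1. Degenerate the segmented region to its medial-axis skeleton.
    skeleton = medial_axis(mask.astype(bool))

    # 2. Coarse estimate via PCA over the skeleton pixels: the coordinate mean is
    #    the approximate center; the second principal direction (across the hole)
    #    is used to prune spurious branches -- an assumed heuristic.
    ys, xs = np.nonzero(skeleton)
    pts = np.column_stack([xs, ys]).astype(float)
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    offset = np.abs((pts - center) @ vt[1])          # distance from the long axis
    central = pts[offset < 1.5]                      # 1.5 px tolerance (assumed)
    if len(central) == 0:
        return float(center[0]), float(center[1])
    center = central.mean(axis=0)

    # 3. Refinement: treat the central skeleton as a degenerate ellipse and fit it
    #    with a Hough ellipse transform; its center is taken as the hole center.
    pruned = np.zeros_like(skeleton, dtype=bool)
    pruned[central[:, 1].astype(int), central[:, 0].astype(int)] = True
    fits = hough_ellipse(pruned, accuracy=2, threshold=10, min_size=5)
    if len(fits) == 0:
        return float(center[0]), float(center[1])    # fall back to the PCA estimate
    fits.sort(order="accumulator")                   # strongest candidate last
    best = fits[-1]
    return float(best["xc"]), float(best["yc"])
```

The fallback to the PCA centroid when the Hough fit yields no candidate is a design choice of this sketch, not something stated in the abstract.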
Authors
蒋潇
王松
吴丹
JIANG Xiao; WANG Song; WU Dan (Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China)
Source
《清华大学学报(自然科学版)》
EI
CAS
CSCD
Peking University Core Journals (北大核心)
2024, No. 10, pp. 1677-1685 (9 pages)
Journal of Tsinghua University(Science and Technology)
Funding
General Program of the National Natural Science Foundation of China (52375019).
Keywords
vision detection
oblong hole
automatic assembly
deep learning