
Self-adaptive segmentation method of cotton in natural scene by combining improved Otsu with ELM algorithm

Cited by: 13
Abstract: To address the segmentation of cotton bolls in natural cotton-field images captured while moving, an improved adaptive optimized segmentation method is proposed. First, an improved Otsu segmentation algorithm locates the cotton boll regions, and the RGB values of boll and background pixels are sampled separately. The samples are then used to train an ELM (extreme learning machine) classification model; image segmentation is recast as a pixel classification problem, and the classification model classifies the pixels of cotton images to achieve segmentation. The algorithm was validated on natural cotton-field images taken on sunny and cloudy days: it correctly segmented the bolls and located their positions, achieving unsupervised sampling, segmentation, and localization of cotton bolls under non-structured lighting. The average segmentation time was 0.58 s per image, and the average recognition rates on sunny and cloudy days reached 94.18% and 97.56%, respectively. The algorithm was compared with the classical SVM (support vector machine) and BP classifiers, both with added texture features and with RGB features only, and the reasons for its clear advantages in segmentation speed and recognition rate are analyzed. The experiments show that the algorithm offers good real-time performance, accuracy, and adaptability for cotton boll segmentation, and can serve as a reference for boll recognition algorithms of intelligent cotton pickers.

In order to segment cotton from the background accurately and quickly in an unstructured natural environment, an improved scene-adaptive optimized segmentation algorithm was proposed. Firstly, the traditional Otsu algorithm was applied iteratively by analyzing the gray-histogram distribution of the acquired images: after each segmentation of the cotton image, the background pixels were set to zero and the gray-histogram distribution of the remaining target pixels was analyzed. The iteration stopped when the histogram became a steep unimodal distribution, and the final segmentation threshold was taken as the optimized threshold. Some background pixels still remained after this first step, so, secondly, a threshold selection rule based on a statistical analysis of the RGB (red, green, blue) values was used to separate cotton from the background, and cotton pixels were extracted after morphological processing. Thirdly, cotton pixels with their RGB values were labeled as positive samples, and an equal number of background pixels were labeled as negative samples; feature vectors shared by the positive and negative samples were removed to improve accuracy. A large set of training samples was thus obtained automatically. In the fourth step, the samples were used to train the ELM (extreme learning machine) classification model for cotton segmentation.
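The iterative Otsu procedure in the first step can be sketched roughly as follows (a minimal Python/NumPy illustration; the steep-unimodality test below, a mass-fraction window around the dominant histogram peak, is an assumed stand-in for the paper's unstated stopping criterion, and all names are illustrative):

```python
import numpy as np

def otsu_threshold(values):
    """Classic Otsu: pick the 8-bit threshold maximizing between-class variance."""
    hist = np.bincount(values.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                       # cumulative pixel counts
    cum_mu = np.cumsum(hist * np.arange(256))     # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0 = cum_w[t] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mu[t] / cum_w[t]
        mu1 = (cum_mu[-1] - cum_mu[t]) / (total - cum_w[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def iterative_otsu(gray, max_iter=10):
    """Re-apply Otsu on the remaining target pixels, zeroing the background each
    round, and stop when the target histogram looks steeply unimodal."""
    img = gray.copy()
    t = 0
    for _ in range(max_iter):
        fg = img[img > 0]
        if fg.size == 0:
            break
        t = otsu_threshold(fg)
        img[img <= t] = 0                         # background pixels set to zero
        fg = img[img > 0]
        if fg.size == 0:
            break
        hist = np.bincount(fg, minlength=256).astype(float)
        peak = int(hist.argmax())
        lo, hi = max(0, peak - 20), min(256, peak + 21)
        if hist[lo:hi].sum() / hist.sum() > 0.9:  # heuristic "steep unimodal" test
            break
    return t, img

# Tiny synthetic demo: dark background (20-59) vs. bright "cotton" (210-230).
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.integers(20, 60, 900), rng.integers(210, 231, 100)])
gray = rng.permutation(pixels).reshape(25, 40)
t, seg = iterative_otsu(gray)
```

On this toy image one round suffices, since the surviving bright pixels already form a single narrow mode; on real field images the loop would typically run several rounds before the target histogram collapses to one peak.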
In the fifth step, the RGB values of all pixels in the test images were normalized and passed to the classification model; the output was 1 for pixels belonging to cotton and 0 for pixels belonging to the background. Another morphological procedure removed noise segments from the output. In the final step, the centroids of the connected regions with a roundness greater than 0.3 were extracted; if the distance between two centroids was less than 40 pixels, their midpoint was returned as a new centroid, so that boll parts separated by leaves were reconnected. Unsupervised sampling and segmentation of cotton under unstructured lighting were thus achieved, and the coordinates of the cotton bolls were derived.

The proposed algorithm was verified by a cotton segmentation experiment. The original datasets were collected on both sunny and cloudy days at the cotton breeding station in Sanya, Hainan, China. The training dataset consisted of labeled samples generated from 20 images, 10 from a sunny day and 10 from a cloudy day; another 60 images, 30 from a sunny day and 30 from a cloudy day, constituted the test dataset. For each test image, accuracy was determined by comparing the number of identified cotton bolls with the number of actual bolls. With the proposed method, the average processing time was 0.58 s, and the average recognition rates for sunny-day and cloudy-day images were 94.18% and 97.56%, respectively. A comparison with popular methods was conducted under 64-bit Windows 10 with an Intel Core i7-7400 CPU at 3.00 GHz, 12 GB RAM, and MATLAB R2016a.
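The ELM training and pixel-classification steps can be sketched as below (a minimal illustration, not the authors' implementation: the hidden-layer size, sigmoid activation, random seeds, and all names are assumptions; an ELM fixes random input weights and solves the output weights in closed form by least squares):

```python
import numpy as np

class ELMPixelClassifier:
    """Minimal ELM: random input weights and biases, sigmoid hidden layer,
    output weights solved in one shot via least squares."""
    def __init__(self, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(3, n_hidden))   # RGB -> hidden projection
        self.b = rng.normal(size=n_hidden)
        self.beta = None                          # hidden -> output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # Closed-form training: least-squares fit of hidden activations to labels.
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
        return self

    def predict(self, X):
        # 1 = cotton pixel, 0 = background pixel
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

# Synthetic normalized-RGB samples: whitish "cotton" vs. greenish background.
rng = np.random.default_rng(1)
cotton = np.clip(rng.normal(0.9, 0.05, (200, 3)), 0.0, 1.0)
backgr = np.clip(rng.normal([0.2, 0.5, 0.2], 0.1, (200, 3)), 0.0, 1.0)
X = np.vstack([cotton, backgr])
y = np.concatenate([np.ones(200), np.zeros(200)])

clf = ELMPixelClassifier().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

Because training reduces to a single pseudo-inverse and prediction to two matrix products over raw RGB triples, this style of classifier is consistent with the speed advantage the paper reports over SVM and BP, which require per-sample optimization or iterative weight updates.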
For the sunny-day images, the accuracies of the proposed method, SVM (support vector machine), and BP (back propagation) trained with RGB features only were 94.57%, 77.00%, and 83.35%, respectively, with segmentation times of 0.58, 177.31, and 6.52 s. With Tamura texture plus RGB features as training samples, the accuracies were 75.68%, 80.54%, and 86.17%, respectively, with segmentation times of 467.87, 678.35, and 551.01 s. The results for cloudy-day images were similar to those for sunny days, with slightly higher accuracies. The results indicate that the proposed method outperforms popular methods in both segmentation speed and accuracy; moreover, it avoids texture-feature extraction for each pixel, which guarantees better real-time performance.
Authors: Wang Jian; Zhou Qin; Yin Aijun (The State Key Laboratory of Mechanical Transmission, Chongqing University, Chongqing 400044, China; College of Mechanical Engineering, Chongqing University, Chongqing 400044, China)
Source: Transactions of the Chinese Society of Agricultural Engineering (《农业工程学报》), 2018, No. 14, pp. 173-180 (8 pages); indexed by EI, CAS, CSCD, and the Peking University Core Journals list
Funding: National Natural Science Foundation of China (51675064); Chongqing Science and Technology Program, research on key technologies of intelligent cotton pickers based on multi-camera stereo vision (cstc2016shmszx1245)
Keywords: image segmentation; crops; image recognition; improved Otsu algorithm; extreme learning machine; self-adaptation