Journal Articles
3 articles found
Integrating semantic features in fruit recognition based on perceptual color and semantic template
1
Authors: Ema Rachmawati, Iping Supriana, Masayu Leylia Khodra, Fauzan Firdaus. Information Processing in Agriculture (EI), 2022, Issue 2, pp. 316-334 (19 pages)
Many fruit recognition works have applied statistical approaches to establish an exact correlation between low-level visual feature information and high-level semantic concepts given by predefined text captions or keywords. Two common fruit recognition models are bag-of-features (BoF) and the convolutional neural network (ConvNet), both of which achieve high-performance results. In most cases, however, the overfitting problem is unavoidable. This problem makes it difficult to generalize to new instances with only a slightly different appearance, even though they belong to the same category. This article proposes a new fruit recognition model that associates an object's low-level features in an image with a high-level concept. We define a perceptual color for each fruit species to construct a relationship between fruit color and semantic color name. Furthermore, we develop our model by integrating the perceptual color and the semantic template concept to address the overfitting problem. The semantic template, a mapping between the high-level concept and the low-level visual feature, is adopted in this model. The experiment was conducted on three different fruit image datasets, with one dataset as training data and the other two as test data. The experimental results demonstrate that the proposed model, called perceptual color on semantic template (PCoST), is significantly better than the BoF and ConvNet models at reducing the overfitting problem.
Keywords: multi-class fruit recognition; perceptual color; semantic template; bag-of-features
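The abstract's central idea is mapping a low-level color feature to a semantic color name. A minimal nearest-palette sketch of that mapping is shown below; the reference palette and the squared-Euclidean distance are illustrative assumptions, not the paper's actual semantic template.

```python
# Hypothetical sketch: map an RGB color to a semantic color name by
# nearest reference color. The palette values are assumptions made up
# for illustration, not taken from the PCoST paper.
PALETTE = {
    "red": (200, 30, 30),
    "green": (40, 160, 60),
    "yellow": (230, 210, 40),
    "orange": (240, 140, 20),
    "purple": (110, 40, 130),
}

def semantic_color(rgb):
    """Return the semantic color name whose reference RGB is closest
    (squared Euclidean distance in RGB space)."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(rgb, ref))
    return min(PALETTE, key=lambda name: dist(PALETTE[name]))
```

For example, a dominant fruit color of (210, 40, 35) would be labeled "red", linking the pixel-level feature to a high-level concept.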
Method for the fruit tree recognition and navigation in complex environment of an agricultural robot
2
Authors: Xiaolin Xie, Yuchao Li, Lijun Zhao, Xin Jin, Shengsheng Wang, Xiaobing Han. International Journal of Agricultural and Biological Engineering (SCIE), 2024, Issue 2, pp. 221-229 (9 pages)
To realize the visual navigation of agricultural robots in the complex environment of orchards, this study proposed a method for fruit tree recognition and navigation based on YOLOv5. The YOLOv5s model was selected and trained to identify the trunks of the left and right rows of fruit trees. A quadratic curve was fitted to the bottom centers of the fruit tree recognition boxes, and the identified trees were divided into left and right rows using the extreme value point of the quadratic curve. The straight-line equations of the left and right tree rows were then solved, the median line of the two lines was taken as the robot's expected navigation path, and a path tracking navigation experiment was carried out using an improved LQR control algorithm. The experimental results show that, under the guidance of the machine vision system and the improved LQR control algorithm, the lateral error and heading error converge quickly to the desired navigation path from the four initial states [0 m, −0.34 rad], [0.10 m, 0.34 rad], [0.15 m, 0 rad], and [0.20 m, −0.34 rad]. At an initial speed of 0.5 m/s, the average lateral error was 0.059 m and the average heading error was 0.2787 rad across the navigation trials in the four initial states. The robot drove an average of 5.3 m before reaching steady state; the average steady-state lateral error was 0.0102 m, the average steady-state heading error was 0.0253 rad, and the average relative error of the robot driving along the desired navigation path was 4.6%. The results indicate that the proposed navigation algorithm is robust, meets the operational requirements of autonomous robot navigation in orchard environments, and improves the reliability of robot driving in orchards.
Keywords: fruit tree recognition; visual navigation; YOLOv5; complex environments; orchards
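The row-splitting and path-extraction geometry described in the abstract can be sketched as follows. This is an illustrative reconstruction under stated assumptions (image coordinates for trunk-box bottom centers, least-squares fits, coefficient-wise averaging for the median line), not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of the geometry in the abstract: fit a quadratic to
# the bottom-center points of detected trunk boxes, split trunks into
# left/right rows at the quadratic's extreme point, fit a line to each
# row, and take the median line as the expected navigation path.
def navigation_line(points):
    """points: (x, y) bottom centers of trunk boxes in image coordinates.
    Returns (slope, intercept) of the median line between the two rows."""
    pts = np.asarray(points, dtype=float)
    a, b, _ = np.polyfit(pts[:, 0], pts[:, 1], 2)     # quadratic fit
    x_ext = -b / (2 * a)                              # extreme value point
    left = pts[pts[:, 0] < x_ext]
    right = pts[pts[:, 0] >= x_ext]
    kl, cl = np.polyfit(left[:, 0], left[:, 1], 1)    # left-row line
    kr, cr = np.polyfit(right[:, 0], right[:, 1], 1)  # right-row line
    # Median line: average the two row lines coefficient-wise.
    return (kl + kr) / 2, (cl + cr) / 2
```

For two rows of trunks converging toward the image center, the returned median line runs between them, which is the path the robot is steered along by the controller.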
Fast and accurate detection of kiwifruits in the natural environment using improved YOLOv4
3
Authors: Jinpeng Wang, Lei Xu, Song Mei, Haoruo Hu, Jialiang Zhou, Qing Chen. International Journal of Agricultural and Biological Engineering (SCIE), 2024, Issue 5, pp. 222-230, I0001 (10 pages)
Real-time detection of kiwifruits in natural environments is essential for automated kiwifruit harvesting. In this study, a lightweight convolutional neural network called YOLOv4-GS was proposed for kiwifruit detection. The backbone network CSPDarknet-53 of YOLOv4 was replaced with GhostNet to improve accuracy and reduce network computation. To improve the detection accuracy of small targets, upsampled feature map fusion was performed for network layers 151 and 154, and the spatial pyramid pooling network was removed to reduce redundant computation. A total of 2766 kiwifruit images from different environments were used as the dataset for training and testing. The experimental results showed that the F1-score, average accuracy, and Intersection over Union (IoU) of YOLOv4-GS were 98.00%, 99.22%, and 88.92%, respectively. The average time taken to detect a 416×416 kiwifruit image was 11.95 ms, and the model's weight file was 28.8 MB. The average detection time with GhostNet was 31.44 ms less than with CSPDarknet-53, and the model weight was 227.2 MB smaller. YOLOv4-GS improved detection accuracy by 8.39% over Faster R-CNN and 8.36% over SSD-300, and its detection speed was 11.3 times and 2.6 times that of Faster R-CNN and SSD-300, respectively. In the indoor and orchard picking experiments, YOLOv4-GS processed video at an average of 28.4 fps, with recognition accuracy above 90%. The average time spent on recognition and positioning was 6.09 s, accounting for about 29.03% of the total picking time. The overall results showed that the proposed YOLOv4-GS can be applied to kiwifruit detection in natural environments because it improves detection speed without compromising detection accuracy.
Keywords: kiwifruits; fruit recognition; natural environments; YOLOv4
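The abstract reports Intersection over Union (IoU) as one of its evaluation metrics. For reference, a minimal IoU computation for axis-aligned boxes is sketched below; the (x1, y1, x2, y2) corner format is an assumption, not necessarily the paper's convention.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A reported IoU of 88.92% thus means predicted boxes overlap ground-truth boxes by roughly 89% of their combined area on average.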