Funding: We want to express our sincere thanks to the Ministry of Research, Technology, and Higher Education of the Republic of Indonesia (Kementerian Riset Teknologi dan Pendidikan Tinggi Republik Indonesia) for supporting this doctoral dissertation research through a research grant (contract number: 1603/K4/KM/2017).
Abstract: Many fruit recognition works have applied statistical approaches to establish an exact correlation between low-level visual feature information and high-level semantic concepts given by predefined text captions or keywords. Two common fruit recognition models are the bag-of-features (BoF) model and the convolutional neural network (ConvNet), both of which achieve high-performance results. In most cases, however, the overfitting problem is unavoidable. This problem makes it difficult to generalize to new instances that have only a slightly different appearance, although they belong to the same category. This article proposes a new fruit recognition model that associates an object's low-level features in an image with a high-level concept. We define a perceptual color for each fruit species to construct a relationship between fruit color and a semantic color name. Furthermore, we develop our model by integrating the perceptual color and the semantic template concept to solve the overfitting problem. The semantic template concept, as a mapping between the high-level concept and the low-level visual feature, is adopted in this model. The experiment was conducted on three different fruit image datasets, with one dataset as training data and the other two as test data. The experimental results demonstrate that the proposed model, called perceptual color on semantic template (PCoST), is significantly better than the BoF and ConvNet models at reducing the overfitting problem.
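To make the idea of a perceptual color concrete, the mapping from low-level pixel values to a semantic color name can be sketched as follows. This is an illustrative assumption, not the PCoST implementation: the hue boundaries and the `color_name` helper are hypothetical, chosen only to show how an HSV hue can be binned into a color word.

```python
import colorsys

# Hypothetical hue bins (upper bound in degrees, name); these thresholds are
# illustrative assumptions, not the ones used by the PCoST model.
HUE_NAMES = [
    (20, "red"), (45, "orange"), (70, "yellow"),
    (160, "green"), (250, "blue"), (330, "purple"), (360, "red"),
]

def color_name(r, g, b):
    """Map an 8-bit RGB triple to a coarse semantic color name via HSV."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.15:                      # very dark pixel
        return "black"
    if s < 0.15:                      # desaturated pixel
        return "white" if v > 0.8 else "gray"
    hue_deg = h * 360                 # colorsys returns hue in [0, 1)
    for upper, name in HUE_NAMES:
        if hue_deg < upper:
            return name
    return "red"
```

Aggregating such per-pixel names over a segmented fruit region would yield the dominant perceptual color for a species, which the semantic template can then relate to the high-level concept.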
Funding: This work was funded by the National Key Research and Development Program of China (Grant No. 2021YFD2000700), the National Natural Science Funds for Young Scholars of China (Grant No. 51905154), and the Luoyang Public Welfare Special Project (Grant No. 2302031A).
Abstract: To realize the visual navigation of agricultural robots in the complex environment of orchards, this study proposed a method for fruit tree recognition and navigation based on YOLOv5. The YOLOv5s model was selected and trained to identify the trunks of the left and right rows of fruit trees. A quadratic curve was fitted to the bottom centers of the fruit tree recognition boxes, and the identified fruit trees were divided into left and right rows using the extreme point of the quadratic curve. The straight-line equation of each fruit tree row was then solved, the median line of the two straight lines was taken as the expected navigation path of the robot, and a path-tracking navigation experiment was carried out using an improved LQR control algorithm. The experimental results show that, under the guidance of the machine vision system and the improved LQR control algorithm, the lateral error and heading error converge quickly to the desired navigation path from the four initial states [0 m, −0.34 rad], [0.10 m, 0.34 rad], [0.15 m, 0 rad], and [0.20 m, −0.34 rad]. When the initial speed was 0.5 m/s, the average lateral error was 0.059 m and the average heading error was 0.2787 rad for the navigation trials in the four different initial states. The robot travelled an average of 5.3 m before reaching steady state; the average steady-state lateral error was 0.0102 m, the average steady-state heading error was 0.0253 rad, and the average relative error of the robot driving along the desired navigation path was 4.6%. The results indicate that the navigation algorithm proposed in this study has good robustness, meets the operational requirements of autonomous robot navigation in the orchard environment, and improves the reliability of robot driving in orchards.
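The geometric steps described above can be sketched in a few lines. The trunk positions below are made-up example data (not the paper's), and the exact fitting conventions are assumptions; the sketch only shows the pipeline of quadratic fit, extremum-based row split, per-row line fit, and median line.

```python
import numpy as np

# Bottom-center points (x, y) of hypothetical trunk detection boxes.
# Nearer trunks sit lower (larger y) and toward the image edges.
pts = np.array([[50, 400], [80, 300], [120, 220],      # left-row trunks
                [520, 210], [560, 290], [600, 390]],   # right-row trunks
               dtype=float)
x, y = pts[:, 0], pts[:, 1]

a, b, c = np.polyfit(x, y, 2)          # quadratic y = a*x^2 + b*x + c
x_vertex = -b / (2 * a)                # extreme point separates the rows
left, right = pts[x < x_vertex], pts[x >= x_vertex]

# Rows are near-vertical in image coordinates, so fit x as a function of y.
m_l, k_l = np.polyfit(left[:, 1], left[:, 0], 1)
m_r, k_r = np.polyfit(right[:, 1], right[:, 0], 1)

# Median line between the two row lines: the expected navigation path.
m_mid, k_mid = (m_l + m_r) / 2, (k_l + k_r) / 2
```

The median line (`m_mid`, `k_mid`) would then be handed to the LQR path-tracking controller as the reference path.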
Funding: This work was funded by the Jiangsu Province Agricultural Science and Technology Independent Innovation Project (CX(22)3099), the Emergency Science and Technology Project of the National Forestry and Grassland Administration (202202-3), the Key R&D Program of the Jiangsu Modern Agricultural Machinery Equipment and Technology Promotion Project (Grant NJ2021-18), the Key R&D Plan of Jiangsu Province (Grant BE2021016-2), and the 2021 Self-made Experimental Teaching Instrument Project of Nanjing Forestry University (Grant nlzzyq202406).
Abstract: Real-time detection of kiwifruits in natural environments is essential for automated kiwifruit harvesting. In this study, a lightweight convolutional neural network, the YOLOv4-GS algorithm, was proposed for kiwifruit detection. The backbone network CSPDarknet-53 of YOLOv4 was replaced with GhostNet to improve accuracy and reduce network computation. To improve the detection accuracy of small targets, upsampling of feature map fusion was performed for network layers 151 and 154, and the spatial pyramid pooling network was removed to reduce redundant computation. A total of 2766 kiwifruit images from different environments were used as the dataset for training and testing. The experimental results showed that the F1-score, average accuracy, and Intersection over Union (IoU) of YOLOv4-GS were 98.00%, 99.22%, and 88.92%, respectively. The average time taken to detect a 416×416 kiwifruit image was 11.95 ms, and the model weight was 28.8 MB. The average detection time of GhostNet was 31.44 ms less than that of CSPDarknet-53, and the model weight of GhostNet was 227.2 MB less than that of CSPDarknet-53. YOLOv4-GS improved the detection accuracy by 8.39% over Faster R-CNN and by 8.36% over SSD-300, and its detection speed was 11.3 times and 2.6 times that of Faster R-CNN and SSD-300, respectively. In the indoor and orchard picking experiments, the average video-processing speed of YOLOv4-GS was 28.4 fps, and the recognition accuracy was above 90%. The average time spent on recognition and positioning was 6.09 s, accounting for about 29.03% of the total picking time. The overall results showed that the YOLOv4-GS proposed in this study can be applied to kiwifruit detection in natural environments because it improves the detection speed without compromising detection accuracy.
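The 88.92% IoU figure measures the overlap between predicted and ground-truth boxes. For reference, the standard IoU metric can be implemented as below; the corner-coordinate box format `(x1, y1, x2, y2)` is an assumption for illustration, not necessarily the format used in the paper's evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A per-image IoU score averaged over the test set, as reported here, then summarizes how tightly the detector's boxes localize the fruit.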