Orchards usually have rough terrain, dense tree canopies, and weeds. GNSS is difficult to use for autonomous navigation in orchards because of signal occlusion, multipath effects, and radio-frequency interference. To achieve autonomous navigation in orchards, this paper proposes a visual navigation method based on multiple images taken at different shooting angles. A dynamic image-capturing device is designed for camera installation so that multiple images can be shot at different angles. First, the captured orchard images are classified into a sky detection stage and a soil detection stage. Each image is transformed to HSV space and initially segmented into sky, canopy, and soil regions by median filtering and morphological processing. Second, the sky and soil regions are extracted by a maximum-connected-region algorithm, and the region edges are detected and filtered with the Canny operator. Third, the navigation line in the current frame is extracted by fitting the region coordinate points. A dynamic weighted filtering algorithm then extracts the navigation lines for the soil and sky detection stages, respectively, and the navigation line from the sky detection stage is mirrored into the soil region. Finally, a Kalman filter fuses the two lines to extract the final navigation path. Tests on 200 images show that the accuracy of visual navigation path fitting is 95.5% and that processing a single frame takes 60 ms, which meets the real-time and robustness requirements of navigation. Visual navigation experiments in a Camellia oleifera orchard show that, at a driving speed of 0.6 m/s, the maximum tracking offsets of visual navigation in weed-free and weedy environments are 0.14 m and 0.24 m, and the RMSEs are 30 mm and 55 mm, respectively.
Funding: National Key Research and Development Program of China (2022YFD2202103); National Natural Science Foundation of China (31971798); Zhejiang Provincial Key Research & Development Plan (2023C02049, 2023C02053); SNJF Science and Technology Collaborative Program of Zhejiang Province (2022SNJF017); Hangzhou Agricultural and Social Development Research Project (202203A03).