Orchards usually have rough terrain, dense tree canopies, and weeds. It is hard to use GNSS for autonomous navigation in orchards due to signal occlusion, multipath effects, and radio-frequency interference. To achieve autonomous navigation in the orchard, a visual navigation method based on multiple images at different shooting angles is proposed in this paper. A dynamic image capturing device is designed for camera installation, so that multiple images can be shot at different angles. Firstly, the obtained orchard images are classified into a sky detection stage and a soil detection stage. Each image is transformed to HSV space and initially segmented into sky, canopy, and soil regions by median filtering and morphological processing. Secondly, the sky and soil regions are extracted by the maximum connected region algorithm, and the region edges are detected and filtered by the Canny operator. Thirdly, the navigation line in the current frame is extracted by fitting the region coordinate points. Then a dynamic weighted filtering algorithm is used to extract the navigation line for the soil and sky detection stages, respectively, and the navigation line of the sky detection stage is mirrored to the soil region. Finally, the Kalman filter algorithm is used to fuse the two lines and extract the final navigation path. Test results on 200 images show that the accuracy of visual navigation path fitting is 95.5% and single-frame image processing costs 60 ms, which meets the real-time and robustness requirements of navigation. Visual navigation experiments in a Camellia oleifera orchard show that at a driving speed of 0.6 m/s, the maximum tracking offset of visual navigation in weed-free and weedy environments is 0.14 m and 0.24 m, respectively, and the RMSE is 30 mm and 55 mm, respectively.
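The final fusion step, combining the soil-stage and sky-stage navigation lines with a Kalman filter, can be sketched as a minimal one-dimensional Kalman update. The angle parameterization and the variance values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_fuse(x_prior, p_prior, z, r):
    """One Kalman update: fuse a prior estimate (x_prior, p_prior)
    with a measurement z of variance r."""
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # corrected estimate
    p_post = (1.0 - k) * p_prior          # reduced uncertainty
    return x_post, p_post

# Navigation-line heading (deg) from the soil stage acts as the prior;
# the mirrored sky-stage line provides the measurement.
soil_angle, soil_var = 2.0, 1.0
sky_angle, sky_var = 4.0, 1.0
fused, fused_var = kalman_fuse(soil_angle, soil_var, sky_angle, sky_var)
print(fused, fused_var)  # equal variances -> midpoint 3.0, variance halved
```

With equal variances the update reduces to averaging; unequal variances weight the more reliable stage more heavily, which is the point of fusing the two detections.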
In robot-assisted minimally invasive surgery, high-quality images are a key factor in the visual navigation system. In this paper, the authors study the image processing in the visual system. Based on an analysis of numerous denoising methods, they propose a new method (S-AM-W) that combines an adaptive median filter and a Wiener filter to remove the main noise types (salt-and-pepper noise and Gaussian noise). Simulation results show that it is simple, performs well in real time, and achieves a high peak signal-to-noise ratio (PSNR). The new method was found to be effective and efficient in removing background noise from medical images.
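The abstract does not specify the S-AM-W filter beyond combining an adaptive median filter with a Wiener filter, so the sketch below substitutes a plain 3×3 median stage for the salt-and-pepper noise followed by a simplified local Wiener stage for the Gaussian noise; the window sizes and noise-variance estimate are illustrative assumptions.

```python
import numpy as np

def median3(img):
    """3x3 median filter (edges handled by reflection padding)."""
    p = np.pad(img, 1, mode="reflect")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def wiener3(img, noise_var):
    """Simplified local Wiener filter over 3x3 windows: shrink each
    pixel toward the local mean by the estimated signal fraction."""
    p = np.pad(img, 1, mode="reflect")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    mean, var = stack.mean(axis=0), stack.var(axis=0)
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = clean + rng.normal(0, 5, clean.shape)   # Gaussian noise
noisy[rng.random(clean.shape) < 0.05] = 255.0   # salt noise
restored = wiener3(median3(noisy), noise_var=25.0)
mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(noisy, clean), mse(restored, clean))  # restored MSE is far lower
```

Running the median stage first matters: the Wiener filter assumes zero-mean noise, so the impulsive salt-and-pepper outliers must be removed before the local shrinkage step.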
In aerial robots' visual navigation, it is essential yet very difficult to detect the attitude and position of the robots in real time. By introducing a new parametric model, the problem can be reduced from almost unmanageable to partly solved, though not fully, as required. In this parametric approach, a multi-scale least square method is formulated first. By propagating and refining the parameters from layer to layer of the image pyramid, a new global feature line can be detected that parameterizes the attitude of the robots. Furthermore, this approach paves the way for segmenting the image into distinct parts, which can be realized by deploying a Bayesian classifier at the picture-cell level. Comparison with the Hough-transform-based method in terms of robustness and precision shows that this multi-scale least square algorithm is considerably more robust to noise. Some discussions are also given.
There is a high demand for unmanned aerial vehicle (UAV) flight stability when using vision as a detection method for navigation control. To meet this demand, a new path planning method for controlling multiple UAVs to reach multiple waypoints simultaneously is studied from the perspective of visual navigation technology. A model based on the stable-shortest Pythagorean-hodograph (PH) curve is established, which not only satisfies the demands of visual navigation and the control law, but is also easy to compute. Based on the model, a planning algorithm to guide multiple UAVs to reach multiple waypoints at the same time without collisions is developed. Simulation results show that the paths have shorter distance and smaller curvature than those of traditional methods, which helps to avoid collisions.
In this paper, we propose a novel robot motion planning model based on visual navigation and fuzzy control. A robot operating system can be viewed as a mechanical energy converter from the joint space to the global operation space, and the flexibility of the robot system reflects the global transformation ability of the whole system. Fuzzy control technology is an interdisciplinary field spanning fuzzy science, artificial intelligence, knowledge engineering, and other disciplines; the theory underlying this technology is known as fuzzy control theory. In addition, this paper integrates the visual navigation system to construct a more robust methodology.
To realize automatic harvesting of jujubes, a jujube harvester was designed and manufactured. To achieve jujube harvester autopilot, a novel algorithm for visual navigation path detection was proposed. The centerline between the tree row lines was taken as the navigation path. The method includes four main parts: image preprocessing, image segmentation, tree row line extraction, and navigation path extraction. Threshold segmentation, noise removal, and border smoothing were applied to the image in Lab color space for image segmentation. The least square method was employed to fit the tree row lines, and the centerline was obtained as the navigation path. Experimental results indicated that the average false detection rate was 3.98% and the average detection speed was 41 fps. The algorithm meets the requirements of the jujube harvester autopilot in terms of accuracy and speed, and can lay the foundation for accomplishing vision-based jujube harvester autopilot.
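The row-fitting step above (least-squares tree row lines, centerline as the navigation path) can be sketched in a few lines; the synthetic trunk-edge points are illustrative, not data from the paper.

```python
import numpy as np

# Synthetic edge points of the left and right tree rows, modelled as
# x = f(y) because tree rows are close to vertical in the image.
y = np.arange(0.0, 100.0, 10.0)
left_x = 0.10 * y + 20.0
right_x = -0.10 * y + 80.0

# Least-squares fit x = a*y + b for each row line.
a_l, b_l = np.polyfit(y, left_x, 1)
a_r, b_r = np.polyfit(y, right_x, 1)

# Navigation path = centerline, i.e. the mean of the two fitted lines.
a_c, b_c = (a_l + a_r) / 2.0, (b_l + b_r) / 2.0
print(a_c, b_c)  # a_c ~ 0, b_c ~ 50 for this symmetric layout
```

Fitting x as a function of y (rather than y of x) avoids the infinite-slope problem for near-vertical rows, which is why crop-row work usually parameterizes lines this way.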
Visual navigation is imperative for successful asteroid exploration missions. In this study, an integrated visual navigation system was proposed based on angles-only measurements to robustly and accurately determine the pose of the lander during the final landing phase. The system used the lander's global pose information provided by an orbiter deployed in space in advance, together with the lander's relative motion information in adjacent images, to jointly estimate its optimal state. First, the landmarks on the asteroid surface and the markers on the lander were identified from the images acquired by the orbiter. Next, an angles-only measurement model concerning the landmarks and markers was constructed to estimate the orbiter's position and the lander's pose. Subsequently, a method based on the epipolar constraint was proposed to estimate the lander's inter-frame motion. Then, the absolute pose and relative motion of the lander were fused using an extended Kalman filter. Additionally, the observability criterion and the covariance of the state error were provided. Finally, synthetic image sequences were generated to validate the proposed navigation system, and numerical results demonstrated its advantages in terms of robustness and accuracy.
Autonomous navigation for intelligent mobile robots has gained significant attention, with a focus on enabling robots to generate reliable policies based on the maintenance of spatial memory. In this paper, we propose a learning-based visual navigation pipeline that uses topological maps as memory configurations. We introduce a unique online topology construction approach that fuses odometry pose estimation and perceptual similarity estimation. This tackles the issues of topological node redundancy and incorrect edge connections, which stem from the distribution gap between the spatial and perceptual domains. Furthermore, we propose a differentiable graph extraction structure, the topology multi-factor transformer (TMFT). This structure utilizes graph neural networks to integrate global memory and incorporates a multi-factor attention mechanism to underscore elements closely related to relevant target cues for policy generation. Results from photorealistic simulations on image-goal navigation tasks highlight the superior navigation performance of our proposed pipeline compared to existing memory structures. Comprehensive validation through behavior visualization, interpretability tests, and real-world deployment further underscores the adaptability and efficacy of our method.
To meet the actual operational demands of visual navigation during the cotton field management period, an image detection algorithm for the visual navigation route during this period was investigated in this research. Firstly, for operation images captured in the natural environment, a color component difference approach suited to cotton field management was adopted to extract the target characteristics of different regions inside and outside the cotton field. Secondly, median filtering was employed to eliminate noise and smooth the images. Then, according to the regional vertical cumulative distribution graph of the images, the boundary characteristics of the cotton seedling region were obtained and the central position of the cotton seedling row was determined. Finally, the candidate point cluster was detected, and the navigation route was extracted by a Hough transformation passing through a known point. Testing results showed that the algorithm could rapidly and accurately detect the navigation route during the cotton field management period, with average processing times per frame of 41.43 ms, 67.83 ms, 68.80 ms, and 74.05 ms at the emergence, strong seedling, budding, and blooming stages, respectively. The detection has high accuracy, strong robustness, and fast speed, and is relatively insensitive to interference from the external environment, which satisfies the practical operational requirements of cotton field management machinery.
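Constraining the Hough transform to lines through a known point, as in the final step above, collapses the usual two-parameter (rho, theta) vote into a one-dimensional vote over orientation only. The sketch below is an illustrative implementation of that idea with synthetic candidate points; the bin count and point coordinates are assumptions.

```python
import numpy as np

def hough_through_point(points, known, n_angles=180):
    """1-D Hough vote: lines through a fixed known point differ only
    in orientation, so each candidate point votes for the angle of
    the ray joining it to the known point (folded into [0, pi))."""
    votes = np.zeros(n_angles, dtype=int)
    for x, y in points:
        theta = np.arctan2(y - known[1], x - known[0]) % np.pi
        votes[int(theta / np.pi * n_angles) % n_angles] += 1
    return np.argmax(votes) * np.pi / n_angles  # winning angle (rad)

# Candidate centre points of a seedling row through (50, 0) at 45 deg,
# plus a few stray outliers.
row = [(50 + t, t) for t in range(1, 20)]
noise = [(10, 70), (80, 5), (30, 40)]
angle = hough_through_point(row + noise, known=(50, 0))
print(np.degrees(angle))  # close to 45 degrees despite the outliers
```

Reducing the accumulator to one dimension is what makes this variant fast enough for per-frame use: voting is O(points), not O(points × angles).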
To realize visual navigation of agricultural robots in the complex environment of orchards, this study proposed a method for fruit tree recognition and navigation based on YOLOv5. The YOLOv5s model was selected and trained to identify the trunks of the left and right rows of fruit trees. A quadratic curve was fitted to the bottom centers of the fruit tree recognition boxes, and the identified fruit trees were divided into left and right columns using the extreme point of the quadratic curve, yielding the left and right rows of fruit trees. The straight-line equations of the left and right fruit tree rows were then solved, the median line of the two lines was taken as the expected navigation path of the robot, and a path tracking navigation experiment was carried out using an improved LQR control algorithm. The experimental results show that, under the guidance of the machine vision system and the improved LQR control algorithm, the lateral error and heading error converge quickly to the desired navigation path from the four initial states [0 m, −0.34 rad], [0.10 m, 0.34 rad], [0.15 m, 0 rad], and [0.20 m, −0.34 rad]. At an initial speed of 0.5 m/s, the average lateral error was 0.059 m and the average heading error was 0.2787 rad for the navigation trials in the four different initial states. On average the robot drove 5.3 m before reaching steady state; the average steady-state lateral error was 0.0102 m, the average steady-state heading error was 0.0253 rad, and the average relative error of the robot driving along the desired navigation path was 4.6%. The results indicate that the navigation algorithm proposed in this study has good robustness, meets the operational requirements of autonomous robot navigation in the orchard environment, and improves the reliability of robot driving in orchards.
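The abstract does not detail the improved LQR design, but the underlying machinery, computing a state-feedback gain for a lateral-error/heading-error model, can be sketched with a plain discrete-time Riccati iteration. The A, B, Q, R matrices below are chosen purely for illustration and are not the paper's model.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via fixed-point iteration of the
    discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K

# Toy lateral-error / heading-error kinematics at v = 0.5 m/s, dt = 0.1 s:
# e_{k+1} = e_k + v*dt*psi_k, psi_{k+1} = psi_k + v*dt*u_k (illustrative).
v, dt = 0.5, 0.1
A = np.array([[1.0, v * dt], [0.0, 1.0]])
B = np.array([[0.0], [v * dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.eye(1))

x = np.array([[0.20], [-0.34]])  # one of the paper's initial states
for _ in range(300):
    x = (A - B @ K) @ x          # closed-loop step
print(float(np.linalg.norm(x)))  # errors converge toward zero
```

The closed-loop matrix A − BK is guaranteed stable for this controllable pair with positive-definite Q and R, which is what drives both the lateral and heading errors to zero from any of the listed initial states.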
Lycium barbarum, commonly known as wolfberry or goji, is considered an important ingredient in Japanese, Korean, Vietnamese, and Chinese food and medicine. It is cultivated extensively in these countries and is usually harvested manually, which is a labor-intensive and tedious task. To improve harvesting efficiency and reduce manual labor, automatic harvesting technology has been investigated by many researchers in recent years. In this paper, an autonomous navigation algorithm using visual cues and fuzzy control is proposed for wolfberry orchards. First, we propose a new weighting (2.4B − 0.9G − R) to convert a color image into a grayscale image for better identification of the trunk of Lycium barbarum, and the minimum positive circumscribed rectangle is used to describe the contours. Then, using the contour points, the least square method is used to fit the navigation line, and a region of interest (ROI) is computed that improves the real-time accuracy of the system. Finally, a set of fuzzy controllers for the pneumatic steering system is designed to achieve real-time autonomous navigation in the wolfberry orchard. Static image experiments show that the average accuracy rate of the algorithm is above 90% and the average time consumption is approximately 162 ms, with good robustness and real-time performance. The experimental results show that at a speed of 1 km/h, the maximum lateral deviation is less than 6.2 cm and the average lateral deviation is 2.9 cm, which meets the requirements of automatic picking by a wolfberry picking robot in real-world environments.
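The 2.4B − 0.9G − R weighting from the abstract is a per-pixel linear combination of the color channels; a minimal sketch, with clipping to the 8-bit range and two made-up example pixels, is:

```python
import numpy as np

def trunk_gray(img_rgb):
    """Grayscale conversion with the 2.4B - 0.9G - R weighting: bluish/
    dark trunk pixels score high while green foliage is suppressed.
    Result is clipped to the valid 8-bit range."""
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    return np.clip(2.4 * b - 0.9 * g - r, 0, 255).astype(np.uint8)

# A bluish "trunk" pixel scores high; a green "leaf" pixel clips to 0.
pix = np.array([[[60, 50, 120]], [[40, 150, 30]]], dtype=np.uint8)
print(trunk_gray(pix).ravel())  # trunk -> 183, leaf -> 0
```

The negative weights on G and R are what turn foliage and soil dark in the resulting grayscale image, leaving the trunk as the bright structure for subsequent contour extraction.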
Autonomous landing has become a core technology of unmanned aerial vehicle (UAV) guidance, navigation, and control systems in recent years. This paper discusses vision-based relative position and attitude estimation between a fixed-wing UAV and the runway, which is a key issue in autonomous landing. Images taken by an airborne camera were used, and a runway detection method based on long-line features and gradient projection is proposed, which solves the problems that the traditional Hough transform requires much calculation time and easily detects endpoints by mistake. Under the premise of known runway width and length, the position and attitude estimation algorithm uses the image processing results and adopts an estimation method based on orthogonal iteration. The method takes the object-space error as the error function and effectively improves the accuracy of the linear algorithm through iteration. The experimental results verify the effectiveness of the proposed algorithms.
Determining the navigation line is critical for the automatic navigation of agricultural robots in farmland. In this research, considering a wheat field as the typical scenario, a novel navigation line extraction algorithm based on semantic segmentation is proposed. Data containing horizontal parallax, height, and grayscale information (HHG) is constructed by combining re-encoded depth data and red-green-blue (RGB) data. The HHG, RGB, and depth data are used to achieve scene recognition and navigation line extraction for a wheat field. The method includes two main steps. First, semantic segmentation of the wheat, ground, and background is performed using a fully convolutional network (FCN). Second, the navigation line is fitted in the camera coordinate system on the basis of the semantic segmentation result and the principle of camera pinhole imaging. Our segmentation model is trained using 508 randomly selected images from a data set, and the model is tested on 199 images. When labelled data are used as the reference benchmark, the mean intersection over union (mIoU) of the HHG data is greater than 95%, the highest among the three types of data. The semantic segmentation methods based on the RGB and HHG data show higher navigation line extraction accuracy rates (with the absolute value of the angle deviation less than 5°) than the compared methods. The mean and standard deviation of the angle deviation of the two methods are within 0.1° and 2.0°, while the mean and standard deviation of the distance deviation are less than 30 mm and 60 mm, respectively. These values meet the basic requirements of agricultural machinery field navigation. The novelty of this work is the proposal of a navigation line extraction algorithm based on semantic segmentation in wheat fields. The method is high in accuracy and robust to interference from crop occlusion.
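The second step, mapping a segmented navigation point back into the camera coordinate system via the pinhole model, amounts to inverting the projection equations. A minimal round-trip sketch follows; the intrinsic parameters are illustrative, not the paper's calibration.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole model inversion: recover the camera-frame point
    (X, Y, Z) from pixel (u, v) and its depth z."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Illustrative intrinsics (assumed, not from the paper's camera).
fx = fy = 600.0
cx, cy = 320.0, 240.0

# Round trip: project a 3-D point, then back-project it.
X = np.array([0.5, -0.2, 3.0])
u = fx * X[0] / X[2] + cx
v = fy * X[1] / X[2] + cy
print(backproject(u, v, X[2], fx, fy, cx, cy))  # recovers [0.5 -0.2 3.0]
```

Fitting the navigation line in camera coordinates rather than pixel coordinates is what lets the angle and distance deviations be reported in physical units (degrees and millimetres).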
Autonomous navigation in farmland is one of the key technologies for achieving autonomous management of maize fields. Among the various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting their scalability. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. By modelling the strip-like structure of the crop row's central area, this strategy effectively avoids interference from the lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, through the row anchor segmentation technique, we accurately locate the positions of the different crop row instances and perform line fitting. Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.
Inertial navigation system/visual navigation system (INS/VNS) integrated navigation is a commonly used autonomous navigation method for planetary rovers. Since visual measurements are related to the previous and current state vectors (position and attitude) of a planetary rover, the performance of the Kalman filter (KF) is challenged by this time-correlation problem. A state augmentation method, which appends the previous state value to the state vector, is commonly used to deal with this problem; however, augmenting the state dimensions increases the computational load. In this paper, a state-dimension-reduced INS/VNS integrated navigation method based on the coordinates of feature points is presented, which utilizes the information obtained through INS/VNS integrated navigation at the previous moment to overcome the time-correlation problem and reduce the dimensions of the state vector. Extended Kalman filter (EKF) equations are used to demonstrate the equivalence of the calculated results between the proposed method and the traditional state-augmented method. Results of simulations and experiments indicate that this method has a lower computational load but similar accuracy compared with traditional methods.
The objective of this study was to develop a visual navigation system capable of guiding an unmanned ground vehicle (UGV) travelling between tree rows in an outdoor orchard. Thus far, most research has developed algorithms that deal with ground structures in the orchard; this study instead focused on the background of canopy plus sky to eliminate interference factors such as inconsistent lighting, shadows, and color similarities among features. Since the traditional Hough transform and the least square method are difficult to apply under outdoor conditions, an algorithm combining the Hough matrix and random sample consensus (RANSAC) was proposed to extract the navigation path. In the image segmentation stage, the H component was adopted to extract the target path of the canopy plus sky. Then, after denoising and smoothing the image by morphological operations, line scanning was used to determine the midpoints of the target path. For navigation path extraction, feature points were extracted through the Hough matrix to eliminate redundant points, and RANSAC was used to reduce the impact of noise points caused by differing canopy shapes and to fit the navigation path. The path acquisition experiment proved that the accuracy of the Hough matrix and RANSAC method was 90.36%-96.81% and the program's time consumption was within 0.55 s under different sunlight intensities. This method was superior to the traditional Hough transform in real-time performance and accuracy, and had higher accuracy but slightly worse real-time performance than the least square method. Furthermore, an OpenMV camera was used to capture the ground information of the orchard; experiments proved that its recognition rate for turning information was 100%, with a program running time of 0.17-0.19 s. Field experiments showed that the UGV could autonomously navigate the rows with a maximum lateral error of 0.118 m and realize automatic turning. The algorithm satisfies the practical operational requirements of autonomous vehicles in the orchard, so the UGV has the potential to guide multipurpose agricultural vehicles in outdoor orchards in the future.
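The RANSAC stage described above, fitting the navigation line while rejecting noise points from irregular canopy shapes, can be sketched as a standard two-point-sample consensus loop. The synthetic midpoints, tolerance, and iteration count below are illustrative assumptions.

```python
import numpy as np

def ransac_line(pts, iters=200, tol=2.0, seed=1):
    """Fit x = a*y + b by RANSAC: repeatedly fit a line to two random
    points and keep the model with the largest inlier consensus, then
    refine with least squares on the inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if y1 == y2:
            continue  # degenerate sample
        a = (x1 - x2) / (y1 - y2)
        b = x1 - a * y1
        inliers = np.abs(pts[:, 0] - (a * pts[:, 1] + b)) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(pts[best_inliers, 1], pts[best_inliers, 0], 1)
    return a, b

# Midpoints of the canopy-plus-sky path (x = 0.2*y + 30) plus outliers
# standing in for irregular canopy shapes.
y = np.arange(0.0, 100.0, 5.0)
pts = np.column_stack([0.2 * y + 30.0, y])
pts = np.vstack([pts, [[90.0, 10.0], [5.0, 80.0], [95.0, 60.0]]])
a, b = ransac_line(pts)
print(a, b)  # close to 0.2 and 30, despite the outliers
```

Unlike a plain least-squares fit, the outliers here get zero weight rather than dragging the line, which is exactly the robustness the abstract credits to the RANSAC stage.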
In an asteroid sample-return mission, accurate estimation of the spacecraft's position relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on sequences of asteroid images acquired and sent by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation, so that the time and cost of human intervention may be reduced. In this paper, we propose a method for estimating the relative position of the spacecraft and asteroid during the descent phase for touchdown from an image sequence, using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply this method to real Ryugu images taken by Hayabusa2 from altitudes of 20 km down to 500 m. It is demonstrated that the method has practical relevance for altitudes within the range of 5 km to 1 km. This result indicates that our method could improve the efficiency of ground operations in global mapping and navigation during the touchdown sequence, whereas full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
Currently, small payload and short endurance are the main problems of single UAVs in agricultural applications, especially over large-scale farmland. Improving operational efficiency through multi-UAV cooperative navigation is one important way to solve these problems. This study proposed a laser-tracking leader-follower automatic cooperative navigation system for multiple UAVs. The leader in the cluster fires a laser beam to irradiate the follower, and the follower performs a visual tracking flight according to the position of the light spot on its laser tracking device. Based on the existing kernel correlation filter (KCF) tracking algorithm, an improved KCF real-time spot tracking method was proposed. Compared with the traditional KCF tracking algorithm, the recognition and tracking rate of the optimized algorithm increased from 70% to 95% in an indoor environment and from 20% to 90% outdoors. The navigation control method was studied from two aspects: a distance coordinate transformation model based on a micro-gyroscope, and the navigation control strategy. By correcting the deviation distance of the spot at different angles through a coordinate correction algorithm, the spot position error was reduced from a maximum of (3.12, −3.66) cm to (0.14, 0.12) cm. An image coordinate conversion model was established for the complementary metal-oxide-semiconductor (CMOS) camera and the laser receiving device at different mounting distances. The laser receiving device was divided into four regions, S0-S3, and the speed in the four regions was calculated using a discrete Kalman filter without a control input. Outdoor flight experiments with two UAVs were carried out using this system. The results show that the average flight error of the two UAVs on the X-axis is 5.2 cm, with a coefficient of variation of 0.0181, and the average flight error on the Z-axis is 7.3 cm, with a coefficient of variation of 0.0414. This study demonstrated the feasibility and adaptability of the developed system for multi-UAV cooperative navigation.
Road visual navigation relies on accurate road models. This study aimed to propose an improved scale-invariant feature transform (SIFT) algorithm for recovering depth information from farmland road images, which would provide a reliable path for visual navigation. The mean image of the pixel values in five channels (R, G, B, S, and V) was treated as the inspected image, and the feature points of the inspected image were extracted by the Canny algorithm to achieve precise location of the feature points and ensure their uniformity and density. The mean value of the pixels in the 5×5 neighborhood around each feature point, sampled at 45° intervals in eight directions, was then treated as the feature vector, and the differences between the feature vectors were calculated for preliminary matching of the left and right image feature points. To obtain the depth information of farmland road images, the energy method of feature points was used to eliminate mismatched points. Experiments with a binocular stereo vision system were conducted, and the results showed that the matching accuracy and time consumption for depth recovery using the improved SIFT algorithm were 96.48% and 5.6 s, respectively, with a depth recovery accuracy of −7.17% to 2.97% within a certain sight distance. The mean uniformity, time consumption, and matching accuracy for all 60 images under various climates and road conditions were 50%-70%, 5.0-6.5 s, and higher than 88%, respectively, indicating that the improved SIFT algorithm's performance in feature point extraction (e.g., uniformity, matching accuracy, and real-time behavior) was superior to that of the conventional SIFT algorithm. This study provides an important reference for machine-vision-based navigation technology for agricultural equipment.
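The abstract stops at matching, but for a rectified binocular rig the matched left/right feature pairs yield depth via the standard triangulation Z = f·B/d. This relation is generic stereo geometry rather than the paper's specific pipeline, and the focal length and baseline below are illustrative assumptions.

```python
import numpy as np

def disparity_to_depth(x_left, x_right, f, baseline):
    """Rectified stereo triangulation: depth Z = f * B / d, where the
    disparity d = x_left - x_right is in pixels, f is the focal length
    in pixels, and B is the baseline in metres."""
    d = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return f * baseline / d

# Illustrative rig: f = 700 px, baseline = 0.12 m.
z = disparity_to_depth([420.0, 350.0], [390.0, 340.0], f=700.0, baseline=0.12)
print(z)  # disparities of 30 px and 10 px -> depths 2.8 m and 8.4 m
```

The inverse relation between disparity and depth is also why matching errors matter more for distant points: a one-pixel error at 10 px disparity shifts the depth far more than the same error at 30 px.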
This paper presents a new approach to outdoor road scene understanding using omni-view images and backpropagation networks. Both the road directions used for vehicle heading and the road categories used for vehicle localization are determined by the integrated system. There are three main features of this work. First, an omni-view image sensor is used to extract image samples, and the original image is preprocessed so that the input to the network is rotation-invariant and simple. Second, the problem of the network size, especially the number of hidden units, is decided by analysis of systematic experimental results. Finally, the internal representation, which reveals the properties of the neural network, is analyzed from the viewpoint of visual signal processing. Experimental results with real scene images are encouraging.
Funding: National Key Research and Development Program of China (2022YFD2202103); National Natural Science Foundation of China (31971798); Zhejiang Provincial Key Research & Development Plan (2023C02049, 2023C02053); SNJF Science and Technology Collaborative Program of Zhejiang Province (2022SNJF017); Hangzhou Agricultural and Social Development Research Project (202203A03).
Funding: Supported by the Henan Province Innovation and Technology Fund for Outstanding Scholarship (0421000500), the Key Scientific Research Projects of Henan University of Technology (09XZD008), and the Chinese National Programs for Hi-Tech R&D (2007AA704339).
Abstract: In robot-assisted invasive surgery, high-quality images are a key factor in the visual navigation system. In this paper, the authors study image processing for such a visual system. Based on an analysis of numerous denoising methods, they propose a new method (S-AM-W) that combines an adaptive median filter and a Wiener filter to remove the main noise types (salt-and-pepper noise and Gaussian noise). Simulation results show that the method is simple, runs in real time, and achieves a high peak signal-to-noise ratio (PSNR). The new method was found to be effective and efficient for medical images with background noise.
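A minimal sketch of the S-AM-W idea: a simplified adaptive median filter removes impulse (salt-and-pepper) noise, then SciPy's Wiener filter smooths Gaussian noise. The window sizes and the impulse-decision rule follow the standard adaptive median scheme and are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.signal import wiener

def adaptive_median(img, max_size=7):
    """Simplified adaptive median: grow the window until its median is not an
    impulse, then replace the center pixel only if it looks like an impulse."""
    out = img.astype(float).copy()
    pad = max_size // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for k in range(3, max_size + 1, 2):
                r = k // 2
                win = padded[y + pad - r:y + pad + r + 1,
                             x + pad - r:x + pad + r + 1]
                med, mn, mx = np.median(win), win.min(), win.max()
                if mn < med < mx:                  # window median is reliable
                    if not (mn < img[y, x] < mx):  # center is an impulse
                        out[y, x] = med
                    break
            else:
                out[y, x] = med                    # fall back to the median
    return out

def s_am_w(img):
    """Assumed S-AM-W pipeline: adaptive median first, then a 3x3 Wiener filter."""
    return wiener(adaptive_median(img), 3)

noisy = np.full((9, 9), 100.0)
noisy[4, 4] = 255.0    # salt impulse
noisy[2, 6] = 0.0      # pepper impulse
clean = s_am_w(noisy)
```

Running the adaptive median first keeps the impulses from biasing the local mean and variance estimates that the Wiener filter relies on.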
Abstract: In aerial robots' visual navigation, it is essential yet very difficult to detect the attitude and position of robots operating in real time. By introducing a new parametric model, the problem can be reduced from almost unmanageable to one that is partly, though not fully, solved as required. In this parametric approach, a multi-scale least-squares method is formulated first. By propagating and refining the parameters from layer to layer of the image pyramid, a new global feature line can then be detected that parameterizes the attitude of the robot. Furthermore, this approach paves the way for segmenting the image into distinct parts, realized by deploying a Bayesian classifier at the picture-cell level. Comparison with a Hough-transform-based method in terms of robustness and precision shows that the multi-scale least-squares algorithm is considerably more robust to noise. Some discussion is also given.
Abstract: There is a high demand for unmanned aerial vehicle (UAV) flight stability when vision is used as the detection method for navigation control. To meet this demand, a new path planning method for controlling multiple UAVs so that they reach multiple waypoints simultaneously is studied from the viewpoint of visual navigation technology. A model based on the stable shortest Pythagorean-hodograph (PH) curve is established, which not only satisfies the demands of visual navigation and the control law but is also easy to compute. Based on the model, a planning algorithm is developed that guides multiple UAVs to reach multiple waypoints at the same time without collisions. Simulation results show that the paths have shorter length and smaller curvature than those of traditional methods, which helps avoid collisions.
Abstract: In this paper, we propose a novel robot motion planning model based on visual navigation and fuzzy control. A robot operating system can be viewed as a mechanical energy converter from the joint space to the global operation space, and the flexibility of the robot system reflects the global transformation ability of the whole system. Fuzzy control technology lies at the intersection of fuzzy science, artificial intelligence, knowledge engineering, and other disciplines, and the theory that underpins it is known as fuzzy control theory. In addition, this paper integrates a visual navigation system to construct a more robust methodology.
Funding: Supported by the National Key R&D Program of China (No. 2016YFD0701504).
Abstract: To realize automatic harvesting of jujubes, a jujube harvester was designed and manufactured. To achieve harvester autopilot, a novel algorithm for visual navigation path detection was proposed, taking the centerline between the tree row lines as the navigation path. The method comprises four main parts: image preprocessing, image segmentation, tree row line extraction, and navigation path extraction. Threshold segmentation, noise removal, and border smoothing were applied to the image in Lab color space for image segmentation. The least-squares method was employed to fit the tree row lines, and their centerline was obtained as the navigation path. Experimental results indicated an average false detection rate of 3.98% and an average detection speed of 41 fps. The algorithm meets the accuracy and speed requirements of the jujube harvester autopilot and lays a foundation for vision-based autopilot of the jujube harvester.
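The row-fitting step above can be sketched as follows: fit each tree row by least squares and average the two lines to obtain the centerline navigation path. The coordinate convention (column x as a function of image row y) and the toy points are assumptions for illustration.

```python
import numpy as np

def fit_row(points):
    """Least-squares line x = a*y + b for one tree row; points are (y, x)."""
    ys, xs = points[:, 0], points[:, 1]
    return np.polyfit(ys, xs, 1)

def centerline(left_pts, right_pts):
    """Navigation path: midline of the fitted left/right tree-row lines."""
    aL, bL = fit_row(left_pts)
    aR, bR = fit_row(right_pts)
    return (aL + aR) / 2.0, (bL + bR) / 2.0

# symmetric toy rows converging slightly toward the top of the image
left = np.array([[y, 2.0 + 0.1 * y] for y in range(10)])
right = np.array([[y, 8.0 - 0.1 * y] for y in range(10)])
a, b = centerline(left, right)   # symmetric rows -> vertical centerline at x = 5
```

Averaging the two line parameters is equivalent to taking the locus of midpoints between the rows at every image row, which is the centerline the abstract describes.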
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61673057 and 61803028).
Abstract: Visual navigation is imperative for successful asteroid exploration missions. In this study, an integrated visual navigation system based on angles-only measurements was proposed to robustly and accurately determine the pose of the lander during the final landing phase. The system used the lander's global pose information, provided by an orbiter deployed in space in advance, together with the lander's relative motion information from adjacent images to jointly estimate its optimal state. First, landmarks on the asteroid surface and markers on the lander were identified in the images acquired by the orbiter. An angles-only measurement model over the landmarks and markers was then constructed to estimate the orbiter's position and the lander's pose. Next, a method based on the epipolar constraint was proposed to estimate the lander's inter-frame motion. The absolute pose and relative motion of the lander were then fused using an extended Kalman filter. Additionally, the observability criterion and the covariance of the state error were provided. Finally, synthetic image sequences were generated to validate the proposed navigation system, and numerical results demonstrated its advantages in robustness and accuracy.
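The epipolar-constraint step can be illustrated numerically: for the essential matrix E = [t]× R, corresponding normalized image points of two views satisfy x₂ᵀ E x₁ = 0. The rotation, translation, and 3-D point below are arbitrary test values, not mission data.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = [t]_x R relating normalized points of two views."""
    return skew(t) @ R

def project(X, R, t):
    """Normalized image coordinates of 3-D point X seen from pose (R, t)."""
    Xc = R @ X + t
    return Xc / Xc[2]

# arbitrary inter-frame motion: small rotation about the optical axis + translation
th = 0.1
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.0, 0.1])
X = np.array([1.0, 2.0, 8.0])      # a surface landmark in the first frame
x1 = X / X[2]                      # first camera at the identity pose
x2 = project(X, R, t)
residual = x2 @ essential(R, t) @ x1   # epipolar residual, zero for true motion
```

In practice the constraint is used in reverse: given matched points x₁, x₂ across frames, E (and hence the inter-frame R, t) is estimated by driving these residuals to zero.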
Funding: Supported in part by the National Natural Science Foundation of China (62225309, 62073222, U21A20480, 62361166632).
Abstract: Autonomous navigation for intelligent mobile robots has gained significant attention, with a focus on enabling robots to generate reliable policies based on the maintenance of spatial memory. In this paper, we propose a learning-based visual navigation pipeline that uses topological maps as memory configurations. We introduce a unique online topology construction approach that fuses odometry pose estimation and perceptual similarity estimation. This tackles the issues of topological node redundancy and incorrect edge connections, which stem from the distribution gap between the spatial and perceptual domains. Furthermore, we propose a differentiable graph extraction structure, the topology multi-factor transformer (TMFT). This structure utilizes graph neural networks to integrate global memory and incorporates a multi-factor attention mechanism to underscore elements closely related to relevant target cues for policy generation. Results from photorealistic simulations on image-goal navigation tasks highlight the superior navigation performance of our proposed pipeline compared with existing memory structures. Comprehensive validation through behavior visualization, interpretability tests, and real-world deployment further underscores the adaptability and efficacy of our method.
Funding: This work was financially supported by the National Natural Science Foundation of China (Grant No. 31071329) and the Team Construction of Young and Middle-aged Talents in Science and Technology Innovation of Xinjiang Corps (Grant No. 2016BC001).
Abstract: To meet the actual operation demands of visual navigation during the cotton field management period, an image detection algorithm for the visual navigation route during this period was investigated. First, for operation images captured under natural conditions, a color component difference approach suited to cotton field management was adopted to extract the target characteristics of different regions inside and outside the cotton field. Second, median filtering was employed to eliminate noise and smooth the images. Then, according to the regional vertical cumulative distribution graph of the images, the boundary characteristics of the cotton seedling region were obtained and the central position of the cotton seedling row was determined. Finally, the candidate point clusters were detected, and the navigation route was extracted by a Hough transform passing through a known point. Test results showed that the algorithm could rapidly and accurately detect the navigation route during the cotton field management period, with average per-frame processing times at the emergence, strong seedling, budding, and blooming stages of 41.43 ms, 67.83 ms, 68.80 ms, and 74.05 ms, respectively. The detection offers high accuracy, strong robustness, and high speed, is relatively insensitive to interference from the external environment, and satisfies the practical operation requirements of cotton field management machinery.
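The vertical cumulative distribution step can be sketched simply: sum the crop mask column by column and take the peak of the profile as the center of the seedling row. The toy mask below stands in for the real segmented cotton image.

```python
import numpy as np

def row_center_from_projection(mask):
    """Column-wise (vertical) accumulation of a binary crop mask; the peak of
    the resulting profile marks the center column of the seedling row."""
    profile = mask.sum(axis=0)
    peak_cols = np.flatnonzero(profile == profile.max())
    return peak_cols.mean()   # average in case several columns tie at the peak

# toy segmented image: one seedling row spanning columns 12..16
mask = np.zeros((20, 30), int)
mask[:, 12:17] = 1
center = row_center_from_projection(mask)   # -> 14.0
```

The detected center column (one per image strip) would then serve as a candidate point for the Hough-transform line fit described in the abstract.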
Funding: Funded by the National Key Research and Development Program of China (Grant No. 2021YFD2000700), the National Natural Science Funds for Young Scholars of China (Grant No. 51905154), and the Luoyang Public Welfare Special Project (Grant No. 2302031A).
Abstract: To realize visual navigation of agricultural robots in the complex environment of orchards, this study proposed a method for fruit tree recognition and navigation based on YOLOv5. The YOLOv5s model was selected and trained to identify the trunks of the left and right rows of fruit trees. A quadratic curve was fitted to the bottom centers of the fruit tree recognition boxes, and the extreme point of the quadratic curve was used to divide the identified fruit trees into left and right rows. The straight-line equations of the left and right tree rows were then solved, and the median line of the two lines was taken as the robot's expected navigation path. Path tracking navigation experiments were carried out using an improved LQR control algorithm. The experimental results show that, under the guidance of the machine vision system and the improved LQR controller, the lateral error and heading error converge quickly to the desired navigation path from the four initial states [0 m, −0.34 rad], [0.10 m, 0.34 rad], [0.15 m, 0 rad], and [0.20 m, −0.34 rad]. At an initial speed of 0.5 m/s, the average lateral error over the four initial states was 0.059 m and the average heading error was 0.2787 rad. The robot travelled 5.3 m on average before reaching steady state, with an average steady-state lateral error of 0.0102 m, an average steady-state heading error of 0.0253 rad, and an average relative error of 4.6% when driving along the desired navigation path. The results indicate that the proposed navigation algorithm is robust, meets the operational requirements of autonomous robot navigation in orchard environments, and improves the reliability of robot driving in orchards.
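The row-splitting geometry can be sketched as: fit a quadratic y = ax² + bx + c to the trunk-box bottom centers, split the detections at the extreme point x* = −b/(2a), fit a line to each side, and average the two lines into the expected path. The synthetic points below stand in for YOLOv5 detections.

```python
import numpy as np

def split_by_quadratic(pts):
    """Fit y = a*x^2 + b*x + c to trunk-box bottom centers; the extreme point
    x* = -b/(2a) separates the left and right tree rows."""
    a, b, _ = np.polyfit(pts[:, 0], pts[:, 1], 2)
    x_star = -b / (2.0 * a)
    return pts[pts[:, 0] < x_star], pts[pts[:, 0] >= x_star]

def row_midline(left, right):
    """Median line of the two least-squares row lines y = m*x + k."""
    mL, kL = np.polyfit(left[:, 0], left[:, 1], 1)
    mR, kR = np.polyfit(right[:, 0], right[:, 1], 1)
    return (mL + mR) / 2.0, (kL + kR) / 2.0

# synthetic trunk detections lying on y = (x - 5)^2, rows on either side of x = 5
pts = np.array([[x, (x - 5.0) ** 2] for x in range(11) if x != 5])
left, right = split_by_quadratic(pts)
m, k = row_midline(left, right)   # symmetric rows -> midline slope m of 0
```

With symmetric left/right rows the midline slope cancels to zero, which is the expected straight-ahead path for a robot centered between the rows.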
Funding: This work was supported by the National Key Research and Development Program of China (2016YFD0701501), the Fundamental Research Funds for the Central Universities, and the National Science Foundation Program of China (51975574).
Abstract: Lycium barbarum, commonly known as wolfberry or goji, is considered an important ingredient in Japanese, Korean, Vietnamese, and Chinese food and medicine. It is cultivated extensively in these countries and is usually harvested manually, which is a labor-intensive and tedious task. To improve harvesting efficiency and reduce manual labor, automatic harvesting technology has been investigated by many researchers in recent years. In this paper, an autonomous navigation algorithm using visual cues and fuzzy control is proposed for wolfberry orchards. First, a new weightage (2.4B − 0.9G − R) is proposed to convert a color image into a grayscale image for better identification of the trunk of Lycium barbarum, and the minimum circumscribed rectangle is used to describe the contours. Then, using the contour points, the least-squares method fits the navigation line, and a region of interest (ROI) is computed that improves the real-time accuracy of the system. Finally, a set of fuzzy controllers for the pneumatic steering system is designed to achieve real-time autonomous navigation in the wolfberry orchard. Static image experiments show that the average accuracy rate of the algorithm is above 90% and the average time consumption is approximately 162 ms, with good robustness and real-time performance. The field results show that at a speed of 1 km/h, the maximum lateral deviation is less than 6.2 cm and the average lateral deviation is 2.9 cm, which meets the requirements of a wolfberry picking robot in real-world environments.
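The proposed grayscale weightage can be written directly; clipping to the 8-bit range [0, 255] and the sample pixel values are assumptions for illustration.

```python
import numpy as np

def trunk_gray(rgb):
    """Convert an RGB image with the paper's weightage 2.4*B - 0.9*G - R,
    clipped to the 8-bit range, to emphasize trunk pixels over foliage."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return np.clip(2.4 * b - 0.9 * g - r, 0.0, 255.0)

# a bright-green foliage pixel vs. a darker, bluer trunk-like pixel
px = np.array([[[60, 180, 60], [40, 30, 90]]], dtype=np.uint8)
gray = trunk_gray(px)   # foliage is suppressed to 0, the trunk stays bright
```

The large positive weight on blue and the negative weights on green and red suppress foliage (green-dominant) while keeping bark-colored pixels bright, which is what makes contour extraction of the trunk tractable.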
Funding: Sponsored by the Fundamental Research Funds for the Central Universities (Grant No. NP2019105) and the Funds from the Postgraduate Creative Base in Nanjing University of Aeronautics and Astronautics (Grant No. kfjj20190716).
Abstract: Autonomous landing has become a core technology of unmanned aerial vehicle (UAV) guidance, navigation, and control systems in recent years. This paper discusses vision-based relative position and attitude estimation between a fixed-wing UAV and the runway, a key issue in autonomous landing. Using images taken by an airborne camera, a runway detection method based on long-line features and gradient projection is proposed, which solves the problems that the traditional Hough transform requires much computation time and easily mis-detects endpoints. Given the known width and length of the runway, the position and attitude estimation algorithm uses the image processing results and adopts an estimation scheme based on orthogonal iteration. The method takes the object-space error as the error function and effectively improves the accuracy of the linear algorithm through iteration. Experimental results verified the effectiveness of the proposed algorithms.
Funding: Supported by the National Natural Science Foundation of China (No. 61503363).
Abstract: Determining the navigation line is critical for the automatic navigation of agricultural robots in farmland. In this research, taking a wheat field as the typical scenario, a novel navigation line extraction algorithm based on semantic segmentation is proposed. Data containing horizontal parallax, height, and grayscale information (HHG) are constructed by combining re-encoded depth data and red-green-blue (RGB) data. The HHG, RGB, and depth data are used to achieve scene recognition and navigation line extraction for a wheat field. The method includes two main steps. First, semantic segmentation of the wheat, ground, and background is performed using a fully convolutional network (FCN). Second, the navigation line is fitted in the camera coordinate system on the basis of the semantic segmentation result and the pinhole camera imaging principle. Our segmentation model is trained on 508 randomly selected images from a data set and tested on 199 images. With labelled data as the reference benchmark, the mean intersection over union (mIoU) of the HHG data is greater than 95%, the highest among the three types of data. The semantic segmentation methods based on the RGB and HHG data show higher navigation line extraction accuracy rates (with the absolute value of the angle deviation less than 5°) than the compared methods. The mean and standard deviation of the angle deviation of the two methods are within 0.1° and 2.0°, while the mean and standard deviation of the distance deviation are less than 30 mm and 60 mm, respectively. These values meet the basic requirements of agricultural machinery field navigation. The novelty of this work is a navigation line extraction algorithm based on semantic segmentation in wheat fields, which is highly accurate and robust to interference from crop occlusion.
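The second step, back-projecting segmented pixels into the camera coordinate system via the pinhole model, can be sketched as below; the intrinsics fx, fy, cx, cy and the depth values are placeholders, not the paper's calibration.

```python
import numpy as np

def pixel_to_camera(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth z into camera
    coordinates: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# placeholder intrinsics; the principal-point pixel maps onto the optical axis
fx = fy = 500.0
cx, cy = 320.0, 240.0
p0 = pixel_to_camera(320.0, 240.0, 2.0, fx, fy, cx, cy)   # -> [0, 0, 2]
p1 = pixel_to_camera(820.0, 240.0, 2.0, fx, fy, cx, cy)   # -> [2, 0, 2]
```

Back-projecting the centerline pixels of the segmented ground class with their depths yields 3-D points in the camera frame, through which the metric navigation line is then fitted.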
Funding: The authors thank the Anhui Provincial University Research Program (2023AH040138) and the National Natural Science Foundation of China (32271998, 52075092) for providing financial support for the research.
Abstract: Autonomous navigation in farmland is one of the key technologies for achieving autonomous management of maize fields. Among various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting their scalability. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. By modelling the strip-like structure of the crop row's central area, this strategy effectively avoids interference from the lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, the row anchor segmentation technique accurately locates the positions of the different crop row instances and performs line fitting. Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61233005 and 61503013), the National Basic Research Program of China (No. 2014CB744202), the Beijing Youth Talent Program, the Fundamental Science on Novel Inertial Instrument & Navigation System Technology Laboratory, and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1203). The authors thank them for their valuable comments.
Abstract: Inertial navigation system/visual navigation system (INS/VNS) integrated navigation is a commonly used autonomous navigation method for planetary rovers. Since visual measurements are related to both the previous and current state vectors (position and attitude) of a planetary rover, the performance of the Kalman filter (KF) is challenged by this time-correlation problem. A state augmentation method, which augments the state vector with the previous state value, is commonly used to deal with the problem; however, augmenting the state dimensions increases the computational load. In this paper, a state-dimension-reduced INS/VNS integrated navigation method based on the coordinates of feature points is presented, which utilizes the information obtained through INS/VNS integrated navigation at the previous moment to overcome the time-correlation problem and reduce the dimensions of the state vector. Extended Kalman filter (EKF) equations are used to demonstrate the equivalence of the results calculated by the proposed method and by traditional state-augmentation methods. Simulation and experimental results indicate that this method has a lower computational load but similar accuracy compared with traditional methods.
Funding: Supported by the Special Fund for Agro-scientific Research in the Public Interest (Grant No. 201503136) and the National Key Technology R&D Program (Grant No. 2017YFD0301300).
Abstract: The objective of this study was to develop a visual navigation system capable of guiding an unmanned ground vehicle (UGV) travelling between tree rows in an outdoor orchard. While most research to date has developed algorithms that deal with ground structures in the orchard, this study focused on the background of canopy plus sky to eliminate interference factors such as inconsistent lighting, shadows, and color similarities between features. Because the traditional Hough transform and the least-squares method are difficult to apply under outdoor conditions, an algorithm combining the Hough matrix and random sample consensus (RANSAC) was proposed to extract the navigation path. In the image segmentation stage, an H component was adopted to extract the target path of the canopy plus sky. After denoising and smoothing the image by morphological operations, line scanning was used to determine the midpoints of the target path. For navigation path extraction, feature points were extracted through the Hough matrix to eliminate redundant points, and RANSAC was used to reduce the impact of noise points caused by different canopy shapes and to fit the navigation path. Path acquisition experiments showed that the accuracy of the Hough matrix and RANSAC method was 90.36%-96.81% and the program ran within 0.55 s under different sunlight intensities. The method was superior to the traditional Hough transform in real-time performance and accuracy, and compared with the least-squares method it had higher accuracy but slightly worse real-time performance. Furthermore, an OPENMV camera was used to capture ground information in the orchard; experiments showed that its recognition rate for turning information was 100%, with a program running time of 0.17-0.19 s. Field experiments showed that the UGV could autonomously navigate the rows with a maximum lateral error of 0.118 m and turn automatically. The algorithm satisfies the practical operation requirements of autonomous vehicles in the orchard, so the UGV has the potential to guide multipurpose agricultural vehicles in outdoor orchards in the future.
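The RANSAC stage can be sketched as follows: repeatedly fit a line to two random candidate midpoints, keep the model with the most inliers, then refit by least squares on those inliers. The iteration count, inlier tolerance, and synthetic midpoints are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.5, seed=0):
    """RANSAC line fit: sample 2 points, count inliers by perpendicular
    distance, keep the best model, then refit to all its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                      # skip degenerate vertical samples
        m = (y2 - y1) / (x2 - x1)
        k = y1 - m * x1
        d = np.abs(pts[:, 1] - (m * pts[:, 0] + k)) / np.hypot(m, 1.0)
        inliers = d < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    good = pts[best_inliers]
    return np.polyfit(good[:, 0], good[:, 1], 1)

xs = np.arange(20.0)
pts = np.column_stack([xs, 2.0 * xs + 1.0])     # true path: y = 2x + 1
pts = np.vstack([pts, [[5.0, 40.0], [10.0, -30.0]]])   # two gross outliers
m, k = ransac_line(pts)
```

Unlike a plain least-squares fit, which the two outliers would drag off the true line, the consensus step rejects them entirely, which is why RANSAC tolerates the canopy-shape noise points the abstract mentions.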
Funding: This work was partially supported by JSPS KAKENHI Grant No. 18H01628.
Abstract: In an asteroid sample-return mission, accurate estimation of the spacecraft's position relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on sequences of asteroid images acquired and sent by the spacecraft. Although this approach is still adopted in critical space missions, there is increasing demand for automated visual position estimation to reduce the time and cost of human intervention. In this paper, we propose a method for estimating the relative position of the spacecraft and asteroid during the descent phase for touchdown from an image sequence, using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply this method to real Ryugu images taken by Hayabusa2 from altitudes of 20 km to 500 m. The method is demonstrated to have practical relevance for altitudes within the range of 5 km to 1 km. This result indicates that our method could improve the efficiency of ground operations in global mapping and navigation during the touchdown sequence, although full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
Funding: This work was supported in part by the Laboratory of Lingnan Modern Agriculture Project (Grant No. NT2021009), the Science and Technology Plan of Jian City of China (Grant No. 20211-055316), the National Natural Science Foundation of China (Grant No. 31871520), the Science and Technology Plan of Guangdong Province of China (Grant Nos. 2021B1212040009, 2017B090903007), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2020A1515110214), and the Innovative Research Team of Agricultural and Rural Big Data in Guangdong Province of China (Grant No. 2019KJ138).
Abstract: Currently, small payload and short endurance are the main problems of a single UAV in agricultural applications, especially in large-scale farmland. Improving operational efficiency through multi-UAV cooperative navigation is one important way to solve these problems. This study proposed a laser-tracking leader-follower automatic cooperative navigation system for multiple UAVs. The leader in the cluster fires a laser beam to irradiate the follower, and the follower performs visual tracking flight according to the position of the light spot on its laser tracking device. Based on the existing kernel correlation filter (KCF) tracking algorithm, an improved real-time KCF spot-tracking method was proposed. Compared with the traditional KCF tracking algorithm, the recognition and tracking rate of the optimized algorithm increased from 70% to 95% in an indoor environment and from 20% to 90% outdoors. The navigation control method was studied from two aspects: a distance coordinate transformation model based on a micro-gyroscope, and the navigation control strategy. The spot position error was reduced from a maximum of (3.12, −3.66) cm to (0.14, 0.12) cm by correcting the deviation distance of the spot at different angles through a coordinate correction algorithm. An image coordinate conversion model was established for a complementary metal-oxide-semiconductor (CMOS) camera and laser receiving device at different mounting distances. The laser receiving device was divided into four regions, S0-S3, and the speed in the four regions was calculated using a discrete Kalman filter without control input. Outdoor flight experiments with two UAVs were carried out using this system. The results show that the average flight error of the two UAVs on the X-axis is 5.2 cm with a coefficient of variation of 0.0181, and the average flight error on the Z-axis is 7.3 cm with a coefficient of variation of 0.0414. This study demonstrated the feasibility and adaptability of the developed system for multi-UAV cooperative navigation.
Funding: This work was financially supported by the Zhejiang Science and Technology Department Basic Public Welfare Research Project (LGN18F030001) and the Major Project of the Zhejiang Science and Technology Department (2016C02G2100540).
Abstract: Road visual navigation relies on accurate road models. This study proposes an improved scale-invariant feature transform (SIFT) algorithm for recovering depth information from farmland road images, providing a reliable path for visual navigation. The mean image of the pixel values in five channels (R, G, B, S, and V) was treated as the inspected image, and its feature points were extracted by the Canny algorithm to achieve precise localization of the feature points and to ensure their uniformity and density. The mean value of the pixels in the 5×5 neighborhood around the feature point, sampled at 45° intervals in eight directions, was then treated as the feature vector, and the differences between feature vectors were calculated for preliminary matching of the left and right image feature points. To obtain the depth information of farmland road images, an energy method over the feature points was used to eliminate mismatched points. Experiments with a binocular stereo vision system showed that the matching accuracy and time consumption for depth recovery with the improved SIFT algorithm were 96.48% and 5.6 s, respectively, with a depth recovery accuracy of −7.17% to 2.97% within a certain sight distance. The mean uniformity, time consumption, and matching accuracy over all 60 images under various weather and road conditions were 50%-70%, 5.0-6.5 s, and higher than 88%, respectively, indicating that the improved SIFT algorithm outperforms the conventional SIFT algorithm in feature point uniformity, matching accuracy, and real-time performance. This study provides an important reference for machine-vision-based navigation of agricultural equipment.
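The descriptor-and-match step can be sketched as: build an 8-dimensional vector from the means of the 5×5 neighborhoods sampled at 45° intervals around the feature point, then match left and right descriptors by the smallest absolute difference. The abstract does not give the sampling radius, so the 2-pixel offset used here is an assumption.

```python
import numpy as np

# assumed sampling pattern: eight directions at 45-degree intervals,
# each centered 2 pixels from the feature point (the radius is a guess)
OFFSETS = [(-2, 0), (-2, 2), (0, 2), (2, 2), (2, 0), (2, -2), (0, -2), (-2, -2)]

def descriptor(img, y, x):
    """8-D feature vector: mean of the 5x5 patch at each sampled direction."""
    return np.array([img[y + dy - 2:y + dy + 3, x + dx - 2:x + dx + 3].mean()
                     for dy, dx in OFFSETS])

def match(desc_left, desc_right):
    """For each left descriptor, the index of the right descriptor with the
    smallest summed absolute (L1) difference."""
    desc_right = np.asarray(desc_right)
    return [int(np.argmin(np.abs(desc_right - d).sum(axis=1))) for d in desc_left]

img = np.arange(400.0).reshape(20, 20)   # smooth synthetic "image"
pts = [(8, 8), (12, 5)]
desc = [descriptor(img, y, x) for y, x in pts]
pairs = match(desc, desc)                # each point should match itself
```

In the full method these preliminary matches would then be filtered by the energy method before triangulating depth from the stereo pair.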
Abstract: This paper presents a new approach to outdoor road scene understanding using omni-view images and backpropagation networks. Both the road directions used for vehicle heading and the road categories used for vehicle localization are determined by the integrated system. There are three main features of this work. First, an omni-view image sensor is used to extract image samples, and the original image is preprocessed so that the network inputs are rotation-invariant and simple. Second, the network size, especially the number of hidden units, is decided by analysis of systematic experimental results. Finally, the internal representation, which reveals the properties of the neural network, is analyzed from the viewpoint of visual signal processing. Experimental results with real scene images are encouraging.