Journal Articles
22 articles found
Visual navigation in orchard based on multiple images at different shooting angles
Authors: MA Zenghong, YUE Jiawen, YIN Cheng, ZHAO Runmao, CHANDA Mulongoti, DU Xiaoqiang. 《智能化农业装备学报(中英文)》, 2024, No. 4, pp. 51-65.
Orchards usually have rough terrain, dense tree canopies and weeds. GNSS is hard to use for autonomous navigation in orchards because of signal occlusion, multipath effects, and radio-frequency interference. To achieve autonomous navigation in the orchard, a visual navigation method based on multiple images at different shooting angles is proposed in this paper. A dynamic image capturing device is designed for camera installation so that multiple images can be shot at different angles. Firstly, the obtained orchard images are classified into a sky detection stage and a soil detection stage. Each image is transformed to HSV space and initially segmented into sky, canopy and soil regions by median filtering and morphological processing. Secondly, the sky and soil regions are extracted by the maximum connected region algorithm, and the region edges are detected and filtered by the Canny operator. Thirdly, the navigation line in the current frame is extracted by fitting the region coordinate points. Then the dynamic weighted filtering algorithm is used to extract the navigation line for the soil and sky detection stages, respectively, and the navigation line for the sky detection stage is mirrored to the soil region. Finally, the Kalman filter algorithm is used to fuse and extract the final navigation path. Test results on 200 images show that the accuracy of visual navigation path fitting is 95.5%, and single-frame image processing costs 60 ms, which meets the real-time and robustness requirements of navigation. Visual navigation experiments in a Camellia oleifera orchard show that at a driving speed of 0.6 m/s, the maximum tracking offset of visual navigation in weed-free and weedy environments is 0.14 m and 0.24 m, respectively, and the RMSE is 30 mm and 55 mm, respectively.
Keywords: orchard; visual navigation; multiple shooting angles; region segmentation; Kalman filter
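The entry above fuses the soil-stage and mirrored sky-stage navigation lines with a Kalman filter. The paper's exact state model is not given in the abstract; the sketch below shows only the core scalar Kalman update one could use to fuse two noisy estimates of the navigation-line heading (all values illustrative):

```python
def kalman_fuse(x, P, z, R):
    """One scalar Kalman update: fuse estimate (x, P) with measurement (z, R)."""
    K = P / (P + R)                 # Kalman gain: trust the lower-variance source more
    return x + K * (z - x), (1 - K) * P

# heading of the navigation line (rad): soil-stage prior, then the
# mirrored sky-stage line folded in as a measurement
x, P = 0.10, 0.04                   # soil-stage estimate and its variance
x, P = kalman_fuse(x, P, 0.06, 0.02)  # fuse the sky-stage estimate
```

The fused heading lands between the two inputs, weighted toward the lower-variance sky-stage estimate, and the posterior variance drops below either input's.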
Study of Image Denoising in Robot Visual Navigation System (Cited by 1)
Authors: 宁袆, 马万军. 《Journal of Measurement Science and Instrumentation》 (CAS), 2011, No. 1, pp. 21-24.
In the technique of robot-assisted invasive surgery, high quality image is a key factor of the visual navigation system. In this paper, the authors study image processing in the visual system. Based on an analysis of numerous denoising methods, they propose a new method (S-AM-W) which combines the adaptive median filter and the Wiener filter to remove the main noise types (salt & pepper noise and Gaussian noise). Simulation results show that the method is simple, performs well in real time, and achieves a high peak signal-to-noise ratio (PSNR). The new method was found to be effective and efficient in dealing with background noise in medical images.
Keywords: visual navigation; adaptive median filter; Wiener filter; PSNR
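The S-AM-W cascade above is only named, not specified, in the abstract. Below is a minimal NumPy-only sketch of the general idea: a 3×3 median filter to remove salt & pepper impulses, followed by a local Wiener-style filter for the Gaussian residual. Window size, the noise-power estimate, and all function names are our assumptions, not the paper's.

```python
import numpy as np

def shifts3x3(img):
    """Stack the 9 shifted copies of img for 3x3 neighborhood ops (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def median3(img):
    """3x3 median filter: removes salt & pepper impulses."""
    return np.median(shifts3x3(img), axis=0)

def wiener3(img):
    """Local Wiener-style filter: smooths flat areas, keeps high-variance detail."""
    n = shifts3x3(img)
    m, v = n.mean(axis=0), n.var(axis=0)
    nv = v.mean()                                   # crude noise-power estimate
    return m + np.maximum(v - nv, 0.0) / np.maximum(v, nv) * (img - m)

def denoise(img):
    # median first (impulses), then Wiener (Gaussian residual)
    return wiener3(median3(img))
```

On a smooth test image corrupted with both noise types, the cascade should cut the mean squared error well below that of the noisy input.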
On parametric approach of aerial robots' visual navigation
Authors: Zhou Yu, Huang Xianlin, Jie Ming, Yin Hang. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2008, No. 5, pp. 1010-1016.
In aerial robots' visual navigation, it is essential yet very difficult to detect the attitude and position of robots operating in real time. By introducing a new parametric model, the problem can be reduced from almost unmanageable to partly solvable, though not fully, as the requirements demand. In this parametric approach, a multi-scale least square method is formulated first. By propagating and improving the parameters down from layer to layer of the image pyramid, a new global feature line can be detected to parameterize the attitude of the robots. Furthermore, this approach paves the way for segmenting the image into distinct parts, which can be realized by deploying a Bayesian classifier at the picture-cell (pixel) level. Comparison with the Hough-transform-based method in terms of robustness and precision shows that this multi-scale least square algorithm is considerably more robust to noise. Some discussions are also given.
Keywords: parametric model; aerial robots; visual navigation; multi-scale least square
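The multi-scale least square idea above — fit at a coarse pyramid level, then propagate the line parameters down while refitting — can be illustrated on a point set by fitting on subsampled points first and refitting at each finer level only on points near the propagated line. This is our illustrative reading of the approach, not the paper's algorithm:

```python
import numpy as np

def fit_line(pts):
    """Least-squares fit of y = a*x + b to an (N, 2) point array."""
    return np.polyfit(pts[:, 0], pts[:, 1], 1)

def multiscale_fit(pts, levels=3, band=5.0):
    """Coarse-to-fine: fit on subsampled points, then at each finer 'level'
    keep only points within `band` of the propagated line and refit."""
    a, b = fit_line(pts[::2 ** levels])              # coarse fit
    for lv in range(levels - 1, -1, -1):
        p = pts[::2 ** lv]
        resid = np.abs(p[:, 1] - (a * p[:, 0] + b))
        keep = p[resid < band]                       # inliers near current line
        if len(keep) >= 2:
            a, b = fit_line(keep)
    return a, b
```

The coarse fit absorbs outliers cheaply; the band test at finer levels rejects them, which matches the claimed robustness advantage over a plain Hough vote.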
Path planning method for controlling multi-UAVs to reach multi-waypoints simultaneously under the view of visual navigation
Authors: 杨东晓, 李杰, 李大林, 关震宇. 《Journal of Beijing Institute of Technology》 (EI, CAS), 2013, No. 3, pp. 308-312.
There is a high demand for unmanned aerial vehicle (UAV) flight stability when using vision as a detection method for navigation control. To meet this demand, a new path planning method for controlling multi-UAVs is studied so that they reach multiple waypoints simultaneously under visual navigation. A model based on the stable-shortest Pythagorean-hodograph (PH) curve is established, which not only satisfies the demands of visual navigation and the control law, but is also easy to compute. Based on the model, a planning algorithm to guide multi-UAVs to reach multi-waypoints at the same time without collisions is developed. Simulation results show that the paths have shorter distance and smaller curvature than traditional methods, which helps to avoid collisions.
Keywords: path planning; multi-UAVs; visual navigation; reaching multi-waypoints simultaneously
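A key property behind the "easy to compute" claim for the Pythagorean-hodograph (PH) curves above: the parametric speed of a PH curve is itself a polynomial, so arc length has a closed form. A minimal check with an illustrative preimage polynomial pair (not taken from the paper):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Preimage polynomials u(t), w(t) (illustrative choice)
u = Polynomial([1.0, 2.0])       # u(t) = 1 + 2t
w = Polynomial([0.5, 1.0])       # w(t) = 0.5 + t

# PH construction: x'(t) = u^2 - w^2, y'(t) = 2uw, hence
# |r'(t)| = u^2 + w^2 is a polynomial and arc length integrates exactly.
sigma = u ** 2 + w ** 2
length = sigma.integ()(1.0) - sigma.integ()(0.0)   # exact length on [0, 1]

# numerical cross-check against sqrt(x'^2 + y'^2)
t = np.linspace(0.0, 1.0, 100001)
speed = np.sqrt((u ** 2 - w ** 2)(t) ** 2 + (2 * u * w)(t) ** 2)
numeric = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
```

Here sigma(t) = 1.25 + 5t + 5t², so the exact length on [0, 1] is 1.25 + 2.5 + 5/3, and the trapezoid cross-check agrees to numerical precision.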
A Novel Robot Motion Planning Model based on Visual Navigation and Fuzzy Control
Authors: Xiaomin Wang. 《International Journal of Technology Management》, 2016, No. 9, pp. 58-60.
In this paper, we propose a novel robot motion planning model based on visual navigation and fuzzy control. A robot operating system can be viewed as a mechanical energy converter from the joint space to the global operation space, and the flexibility of the robot system reflects the global transformation ability of the whole system. Fuzzy control lies at the intersection of fuzzy mathematics, artificial intelligence, knowledge engineering and related disciplines; its theoretical foundation is known as fuzzy control theory. In addition, this paper integrates the visual navigation system to construct a more robust methodology.
Keywords: robot motion planning model; visual navigation; fuzzy control
Novel method for the visual navigation path detection of jujube harvester autopilot based on image processing (Cited by 1)
Authors: Xiongchu Zhang, Bingqi Chen, Jingbin Li, Xin Fang, Congli Zhang, Shubo Peng, Yongzheng Li. 《International Journal of Agricultural and Biological Engineering》 (SCIE), 2023, No. 5, pp. 189-197.
To realize automatic harvesting of the jujube, a jujube harvester was designed and manufactured. For achieving jujube harvester autopilot, a novel algorithm for visual navigation path detection was proposed. The centerline of the tree row lines was taken as the navigation path. The method included four main parts: image preprocessing, image segmentation, tree row line access, and navigation path access. Threshold segmentation, noise removal, and border smoothing were applied to the image in Lab color space for image segmentation. The least square method was employed to fit the tree row lines, and the centerline was obtained as the navigation path. Experimental results indicated that the average false detection rate was 3.98%, and the average detection speed was 41 fps. The algorithm meets the requirements of the jujube harvester autopilot in terms of accuracy and speed. It can also lay the foundation for accomplishing jujube harvester vision-based autopilot.
Keywords: visual navigation path; jujube orchards; image processing; Lab color space; seed region growing
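The centerline extraction described above — least-squares fitting of the two tree row lines and taking their midline as the navigation path — can be sketched as follows, with extraction of the row points from the segmented image assumed already done (function name and the x-as-function-of-y parameterization are our assumptions):

```python
import numpy as np

def row_centerline(left_pts, right_pts):
    """Fit each tree row as x = a*y + b (x across the image, y down it) by
    least squares, then average the two fits to get the centerline path."""
    al, bl = np.polyfit(left_pts[:, 1], left_pts[:, 0], 1)
    ar, br = np.polyfit(right_pts[:, 1], right_pts[:, 0], 1)
    return (al + ar) / 2.0, (bl + br) / 2.0
```

Parameterizing x as a function of the image row y keeps the fit well-conditioned for near-vertical row lines, which a y = f(x) fit would handle poorly.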
Integrated visual navigation based on angles-only measurements for asteroid final landing phase (Cited by 1)
Authors: Ronghai Hu, Xiangyu Huang, Chao Xu. 《Astrodynamics》 (EI, CSCD), 2023, No. 1, pp. 69-82.
Visual navigation is imperative for successful asteroid exploration missions. In this study, an integrated visual navigation system was proposed based on angles-only measurements to robustly and accurately determine the pose of the lander during the final landing phase. The system used the lander's global pose information provided by an orbiter, which was deployed in space in advance, and its relative motion information in adjacent images to jointly estimate its optimal state. First, the landmarks on the asteroid surface and markers on the lander were identified from the images acquired by the orbiter. Subsequently, an angles-only measurement model concerning the landmarks and markers was constructed to estimate the orbiter's position and the lander's pose. Next, a method based on the epipolar constraint was proposed to estimate the lander's inter-frame motion. Then, the absolute pose and relative motion of the lander were fused using an extended Kalman filter. Additionally, the observability criterion and covariance of the state error were provided. Finally, synthetic image sequences were generated to validate the proposed navigation system, and numerical results demonstrated its advantages in terms of robustness and accuracy.
Keywords: asteroid final landing phase; visual navigation; angles-only measurements; inter-frame motion estimation; state fusion
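The inter-frame motion step above rests on the epipolar constraint: for calibrated rays x1, x2 of the same landmark in two frames related by (R, t), x2ᵀ[t]×R x1 = 0. A minimal numerical check with an illustrative relative motion and a synthetic landmark (values are ours, not the paper's):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# illustrative relative motion between two frames: X2 = R @ X1 + t
th = 0.05
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.2, 0.0, 1.0])
E = skew(t) @ R                      # essential matrix

# a synthetic landmark observed as normalized rays in both frames
X1 = np.array([0.3, -0.2, 5.0])
X2 = R @ X1 + t
x1, x2 = X1 / X1[2], X2 / X2[2]
residual = x2 @ E @ x1               # ~0 for a true correspondence
```

In practice one estimates E from several such correspondences and decomposes it into (R, t); the residual being zero for true matches is what makes that estimation possible.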
Cognitive Navigation for Intelligent Mobile Robots:A Learning-Based Approach With Topological Memory Configuration
Authors: Qiming Liu, Xinru Cui, Zhe Liu, Hesheng Wang. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE, EI, CSCD), 2024, No. 9, pp. 1933-1943.
Autonomous navigation for intelligent mobile robots has gained significant attention, with a focus on enabling robots to generate reliable policies based on the maintenance of spatial memory. In this paper, we propose a learning-based visual navigation pipeline that uses topological maps as memory configurations. We introduce a unique online topology construction approach that fuses odometry pose estimation and perceptual similarity estimation. This tackles the issues of topological node redundancy and incorrect edge connections, which stem from the distribution gap between the spatial and perceptual domains. Furthermore, we propose a differentiable graph extraction structure, the topology multi-factor transformer (TMFT). This structure utilizes graph neural networks to integrate global memory and incorporates a multi-factor attention mechanism to underscore elements closely related to relevant target cues for policy generation. Results from photorealistic simulations on image-goal navigation tasks highlight the superior navigation performance of our proposed pipeline compared to existing memory structures. Comprehensive validation through behavior visualization, interpretability tests, and real-world deployment further underscores the adaptability and efficacy of our method.
Keywords: graph neural networks (GNNs); spatial memory; topological map; visual navigation
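The online topology construction above fuses odometry pose distance with perceptual similarity to avoid redundant nodes and wrong edges. A toy sketch of that gating logic follows; the thresholds, cosine similarity, and consecutive-edge policy are our assumptions and say nothing about the paper's TMFT policy network:

```python
import numpy as np

class TopoMap:
    """Toy online topological memory: an observation is merged into an existing
    node only when it is close in odometry pose AND similar in appearance;
    otherwise it becomes a new node linked to the previous one."""
    def __init__(self, pose_thresh=1.0, sim_thresh=0.9):
        self.nodes = []                  # list of (pose, feature)
        self.edges = []
        self.pose_thresh = pose_thresh
        self.sim_thresh = sim_thresh

    def add(self, pose, feat):
        for i, (p, f) in enumerate(self.nodes):
            near = np.linalg.norm(pose - p) < self.pose_thresh
            sim = feat @ f / (np.linalg.norm(feat) * np.linalg.norm(f))
            if near and sim > self.sim_thresh:
                return i                 # merged: avoids a redundant node
        self.nodes.append((pose, feat))
        if len(self.nodes) > 1:
            self.edges.append((len(self.nodes) - 2, len(self.nodes) - 1))
        return len(self.nodes) - 1
```

Requiring both cues to agree before merging is one simple way to avoid the perceptual-aliasing edges that a similarity-only map would create.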
Image detection and verification of visual navigation route during cotton field management period (Cited by 5)
Authors: Jingbin Li, Rongguang Zhu, Bingqi Chen. 《International Journal of Agricultural and Biological Engineering》 (SCIE, EI, CAS), 2018, No. 6, pp. 159-165.
In order to meet the actual operation demands of visual navigation during the cotton field management period, an image detection algorithm for the visual navigation route during this period was investigated. Firstly, for operation images under natural environment, the approach of color component difference, which is applicable to cotton field management, was adopted to extract the target characteristics of different regions inside and outside the cotton field. Secondly, the median filtering method was employed to eliminate noise and smooth the images. Then, according to the regional vertical cumulative distribution graph of the images, the boundary characteristic of the cotton seedling region was obtained and the central position of the cotton seedling row was determined. Finally, the detection of the candidate point cluster was realized, and the navigation route was extracted by Hough transformation passing the known point. Testing results showed that the algorithm could rapidly and accurately detect the navigation route during the cotton field management period, and the average processing times per frame at the emergence, strong seedling, budding and blooming stages were 41.43 ms, 67.83 ms, 68.80 ms and 74.05 ms, respectively. The detection has high accuracy, strong robustness and fast speed, and is less vulnerable to interference from the external environment, which satisfies the practical operation requirements of cotton field management machinery.
Keywords: visual navigation; route detection; Hough transformation passing the known point; cotton field management period
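"Hough transformation passing the known point" collapses the usual two-parameter (ρ, θ) vote to a one-parameter vote over angle, since the line is constrained to pass through a known point. A sketch of that 1-D accumulator (bin count and the angle-histogram formulation are our assumptions):

```python
import numpy as np

def hough_through_point(points, known, n_bins=180):
    """Lines through a known point have one free parameter, the angle, so the
    Hough vote becomes a 1-D histogram over the angle each candidate point
    subtends at the known point."""
    d = points - known
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), np.pi)   # undirected, [0, pi)
    votes, edges = np.histogram(theta, bins=n_bins, range=(0.0, np.pi))
    k = int(np.argmax(votes))
    return 0.5 * (edges[k] + edges[k + 1])                # dominant line angle
```

Dropping one vote dimension shrinks the accumulator from O(bins²) to O(bins), which is the speed advantage over a full Hough transform.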
Method for the fruit tree recognition and navigation in complex environment of an agricultural robot
Authors: Xiaolin Xie, Yuchao Li, Lijun Zhao, Xin Jin, Shengsheng Wang, Xiaobing Han. 《International Journal of Agricultural and Biological Engineering》 (SCIE), 2024, No. 2, pp. 221-229.
To realize the visual navigation of agricultural robots in the complex environment of orchards, this study proposed a method for fruit tree recognition and navigation based on YOLOv5. The YOLOv5s model was selected and trained to identify the trunks of the left and right rows of fruit trees. A quadratic curve was fitted to the bottom centers of the fruit tree recognition boxes, and the identified fruit trees were divided into left and right columns by using the extreme value point of the quadratic curve, yielding the left and right rows of fruit trees. The straight-line equations of the left and right fruit tree rows were then solved, and the median line of the two straight lines was taken as the expected navigation path of the robot. A path tracking navigation experiment was carried out using an improved LQR control algorithm. The experimental results show that under the guidance of the machine vision system and the improved LQR control algorithm, the lateral error and heading error converge quickly to the desired navigation path from the four initial states [0 m, −0.34 rad], [0.10 m, 0.34 rad], [0.15 m, 0 rad] and [0.20 m, −0.34 rad]. When the initial speed was 0.5 m/s, the average lateral error was 0.059 m and the average heading error was 0.2787 rad for the navigation trials in the four different initial states. The robot drove an average of 5.3 m before reaching steady state; the average steady-state lateral error was 0.0102 m, the average steady-state heading error was 0.0253 rad, and the average relative error of the robot driving along the desired navigation path was 4.6%. The results indicate that the navigation algorithm proposed in this study has good robustness, meets the operational requirements of robot autonomous navigation in the orchard environment, and improves the reliability of robot driving in orchards.
Keywords: fruit tree recognition; visual navigation; YOLOv5; complex environments; orchards
Autonomous navigation for a wolfberry picking robot using visual cues and fuzzy control (Cited by 14)
Authors: Yue Ma, Wenqiang Zhang, Waqar S. Qureshi, Chao Gao, Chunlong Zhang, Wei Li. 《Information Processing in Agriculture》 (EI), 2021, No. 1, pp. 15-26.
Lycium barbarum, commonly known as wolfberry or goji, is considered an important ingredient in Japanese, Korean, Vietnamese, and Chinese food and medicine. It is cultivated extensively in these countries and is usually harvested manually, which is a labor-intensive and tedious task. To improve harvesting efficiency and reduce manual labor, automatic harvesting technology has been investigated by many researchers in recent years. In this paper, an autonomous navigation algorithm using visual cues and fuzzy control is proposed for wolfberry orchards. First, we propose a new weighting (2.4B-0.9G-R) to convert a color image into a grayscale image for better identification of the trunk of Lycium barbarum, and the minimum positive circumscribed rectangle is used to describe the contours. Then the least square method is applied to the contour points to fit the navigation line, and a region of interest (ROI) is computed that improves the real-time accuracy of the system. Finally, a set of fuzzy controllers for the pneumatic steering system is designed to achieve real-time autonomous navigation in the wolfberry orchard. Static image experiments show that the average accuracy rate of the algorithm is above 90%, and the average time consumption is approximately 162 ms, with good robustness and real-time performance. The experimental results show that at a speed of 1 km/h, the maximum lateral deviation is less than 6.2 cm and the average lateral deviation is 2.9 cm, which meets the requirements of automatic picking by a wolfberry picking robot in real-world environments.
Keywords: visual navigation; Chinese wolfberry picking; fuzzy control
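The 2.4B-0.9G-R weighting proposed above converts RGB to grayscale so that the trunk region contrasts with green foliage. A direct sketch of that conversion; the clipping to [0, 255] and the function name are our assumptions:

```python
import numpy as np

def trunk_gray(img_rgb):
    """Grayscale conversion weighted 2.4B - 0.9G - R (from the paper):
    suppresses green foliage so trunk regions stand out.
    Clipping to [0, 255] is our assumption, not stated in the abstract."""
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    return np.clip(2.4 * b - 0.9 * g - r, 0, 255).astype(np.uint8)
```

A strongly green pixel maps to 0 while a bright bluish sky pixel saturates at 255, so the subsequent contour extraction sees foliage and sky at opposite ends of the range.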
Monocular Vision Based Relative Localization for Fixed-wing Unmanned Aerial Vehicle Landing
Authors: Yuwen Xu, Yunfeng Cao, Zhouyu Zhang. 《Journal of Harbin Institute of Technology (New Series)》 (CAS), 2022, No. 1, pp. 1-14.
Autonomous landing has become a core technology of unmanned aerial vehicle (UAV) guidance, navigation, and control systems in recent years. This paper discusses vision-based relative position and attitude estimation between a fixed-wing UAV and the runway, which is a key issue in autonomous landing. Images taken by an airborne camera were used, and a runway detection method based on long-line features and gradient projection is proposed, which solves the problem that the traditional Hough transform requires much calculation time and easily detects end points by mistake. Under the premise of known runway width and length, the position and attitude estimation algorithm used the image processing results and adopted an estimation algorithm based on orthogonal iteration. The method took the objective space error as the error function and effectively improved the accuracy of the linear algorithm through iteration. The experimental results verified the effectiveness of the proposed algorithms.
Keywords: autonomous landing; visual navigation; region of interest (ROI); edge detection; orthogonal iteration
Navigation algorithm based on semantic segmentation in wheat fields using an RGB-D camera
Authors: Yan Song, Feiyang Xu, Qi Yao, Jialin Liu, Shuai Yang. 《Information Processing in Agriculture》 (EI, CSCD), 2023, No. 4, pp. 475-490.
Determining the navigation line is critical for the automatic navigation of agricultural robots in farmland. In this research, considering a wheat field as the typical scenario, a novel navigation line extraction algorithm based on semantic segmentation is proposed. Data containing horizontal parallax, height, and grayscale information (HHG) is constructed by combining re-encoded depth data and red-green-blue (RGB) data. The HHG, RGB, and depth data are used to achieve scene recognition and navigation line extraction for a wheat field. The method includes two main steps. First, semantic segmentation of the wheat, ground, and background is performed using a fully convolutional network (FCN). Second, the navigation line is fitted in the camera coordinate system on the basis of the semantic segmentation result and the principle of camera pinhole imaging. Our segmentation model is trained using 508 randomly selected images from a data set, and the model is tested on 199 images. When labelled data are used as the reference benchmark, the mean intersection over union (mIoU) of the HHG data is greater than 95%, which is the highest among the three types of data. The semantic segmentation methods based on the RGB and HHG data show higher navigation line extraction accuracy rates (with the absolute value of the angle deviation less than 5°) than the compared methods. The mean and standard deviation of the angle deviation of the two methods are within 0.1° and 2.0°, while the mean and standard deviation of the distance deviation are less than 30 mm and 60 mm, respectively. These values meet the basic requirements of agricultural machinery field navigation. The novelty of this work is the proposal of a navigation line extraction algorithm based on semantic segmentation in wheat fields. The method is accurate and robust to interference from crop occlusion.
Keywords: fully convolutional network; navigation line extraction; semantic segmentation; visual navigation
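The HHG encoding above stacks horizontal disparity, height, and grayscale into a three-channel image for the FCN. The abstract does not give the exact re-encoding; below is a plausible min-max-normalized sketch, with all specifics (normalization, channel order, the grayscale-as-mean choice) being our assumptions:

```python
import numpy as np

def make_hhg(disparity, height, rgb):
    """Assemble an HHG image: horizontal disparity, height, and grayscale
    stacked as three uint8 channels. The min-max normalization to [0, 255]
    is an assumption; the paper's exact re-encoding may differ."""
    def norm(c):
        lo, hi = float(c.min()), float(c.max())
        return (c - lo) / (hi - lo + 1e-9) * 255.0
    gray = rgb.astype(float).mean(axis=2)
    return np.stack([norm(disparity), norm(height), norm(gray)],
                    axis=2).astype(np.uint8)
```

Packing geometry (disparity, height) alongside appearance (grayscale) lets a standard 3-channel segmentation network consume depth cues without architectural changes, which is consistent with the reported mIoU gain over RGB alone.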
InstaCropNet: An efficient Unet-based architecture for precise crop row detection in agricultural applications
Authors: Zhiming Guo, Yuhang Geng, Chuan Wang, Yi Xue, Deng Sun, Zhaoxia Lou, Tianbao Chen, Tianyu Geng, Longzhe Quan. 《Artificial Intelligence in Agriculture》, 2024, No. 2, pp. 85-96.
Autonomous navigation in farmland is one of the key technologies for achieving autonomous management in maize fields. Among various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting their scalability. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. This strategy, by simulating the strip-like structure of the crop row's central area, effectively avoids interference from the lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, through the row anchor segmentation technique, we accurately locate the positions of different crop row instances and perform line fitting. Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.
Keywords: visual navigation; deep learning; maize crop row detection; early corn crops; row clustering; crop row segmentation
A dimension reduced INS/VNS integrated navigation method for planetary rovers (Cited by 5)
Authors: Ning Xiaolin, Gui Mingzhen, Zhang Jie, Fang Jiancheng. 《Chinese Journal of Aeronautics》 (SCIE, EI, CAS, CSCD), 2016, No. 6, pp. 1695-1708.
Inertial navigation system/visual navigation system (INS/VNS) integrated navigation is a commonly used autonomous navigation method for planetary rovers. Since visual measurements are related to the previous and current state vectors (position and attitude) of planetary rovers, the performance of the Kalman filter (KF) is challenged by this time-correlation problem. A state augmentation method, which augments the previous state value to the state vector, is commonly used when dealing with this problem. However, augmenting the state dimensions increases the computation load. In this paper, a state-dimension-reduced INS/VNS integrated navigation method based on the coordinates of feature points is presented that utilizes the information obtained through INS/VNS integrated navigation at a previous moment to overcome the time-relevance problem and reduce the dimensions of the state vector. Equations of the extended Kalman filter (EKF) are used to demonstrate the equivalence of the calculated results between the proposed method and traditional state augmentation methods. Results of simulation and experimentation indicate that this method has less computational load but similar accuracy when compared with traditional methods.
Keywords: computational complexity analysis; inertial navigation system; INS/VNS integrated navigation; planetary exploration rover; visual navigation system
Design and experiment of visual navigated UGV for orchard based on Hough matrix and RANSAC (Cited by 2)
Authors: Mingkuan Zhou, Junfang Xia, Fang Yang, Kan Zheng, Mengjie Hu, Dong Li, Shuai Zhang. 《International Journal of Agricultural and Biological Engineering》 (SCIE, EI, CAS), 2021, No. 6, pp. 176-184.
The objective of this study was to develop a visual navigation system capable of navigating an unmanned ground vehicle (UGV) travelling between tree rows in an outdoor orchard. While most research has developed algorithms that deal with ground structures in the orchard, this study focused on the background of canopy plus sky to eliminate interference factors such as inconsistent lighting, shadows, and color similarities in features. Aiming at the problem that the traditional Hough transform and the least square method are difficult to apply under outdoor conditions, an algorithm combining the Hough matrix and random sample consensus (RANSAC) was proposed to extract the navigation path. In the image segmentation stage, an H-component was adopted to extract the target path of the canopy plus sky. Then, after denoising and smoothing the image by morphological operations, line scanning was used to determine the midpoint of the target path. For navigation path extraction, this study extracted the feature points through the Hough matrix to eliminate redundant points, and RANSAC was used to reduce the impact of the noise points caused by different canopy shapes and to fit the navigation path. The path acquisition experiment proved that the accuracy of the Hough matrix and RANSAC method was 90.36%-96.81% and the running time of the program was within 0.55 s under different sunlight intensities. The method was superior to the traditional Hough transform in real-time performance and accuracy, and had higher accuracy but slightly worse real-time performance compared with the least square method. Furthermore, an OPENMV camera was used to capture the ground information of the orchard. The experiment proved that the recognition rate of the OPENMV for identifying turning information was 100%, and the program running time was 0.17-0.19 s. Field experiments showed that the UGV could autonomously navigate the rows with a maximum lateral error of 0.118 m and realize automatic turning. The algorithm satisfied the practical operation requirements of autonomous vehicles in the orchard, so the UGV has the potential to guide multipurpose agricultural vehicles in outdoor orchards in the future.
Keywords: orchard; visual navigation; unmanned ground vehicle; Hough matrix; RANSAC algorithm; H-component
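The RANSAC stage above makes the line fit robust to noise points caused by varying canopy shapes. A generic two-point RANSAC line fitter is sketched below; iteration count, threshold, and the final least-squares refit are illustrative choices, not the paper's Hough-matrix front end:

```python
import numpy as np

def ransac_line(pts, n_iter=200, thresh=2.0, seed=0):
    """Two-point RANSAC: hypothesize a line from a random point pair, count
    inliers within `thresh`, keep the best consensus set, refit to it."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                                  # skip vertical hypotheses
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return np.polyfit(pts[best, 0], pts[best, 1], 1)  # least-squares refit
```

Unlike a plain least-squares fit, the consensus step ignores gross outliers entirely, which is why RANSAC tolerates the canopy-induced noise points that break the traditional Hough/least-square pipeline outdoors.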
Visual localization for asteroid touchdown operation based on local image features (Cited by 3)
Authors: Yoshiyuki Anzai, Takehisa Yairi, Naoya Takeishi, Yuichi Tsuda, Naoko Ogawa. 《Astrodynamics》 (CSCD), 2020, No. 2, pp. 149-161.
In an asteroid sample-return mission, accurate position estimation of the spacecraft relative to the asteroid is essential for landing at the target point. During the missions of Hayabusa and Hayabusa2, the main part of the visual position estimation procedure was performed by human operators on the Earth based on a sequence of asteroid images acquired and sent by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation, so that the time and cost of human intervention may be reduced. In this paper, we propose a method for estimating the relative position of the spacecraft and asteroid during the descent phase for touchdown from an image sequence using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply this method to real Ryugu images that were taken by Hayabusa2 from altitudes of 20 km to 500 m. It is demonstrated that the method has practical relevance for altitudes within the range of 5 km to 1 km. This result indicates that our method could improve the efficiency of ground operation in global mapping and navigation during the touchdown sequence, whereas full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
Keywords: visual navigation; structure from motion; asteroid; touchdown; Hayabusa2
Laser tracking leader-follower automatic cooperative navigation system for UAVs (Cited by 1)
Authors: Rui Ming, Zhiyan Zhou, Zichen Lyu, Xiwen Luo, Le Zi, Cancan Song, Yu Zang, Wei Liu, Rui Jiang. 《International Journal of Agricultural and Biological Engineering》 (SCIE, CAS), 2022, No. 2, pp. 165-176.
Currently, small payload and short endurance are the main problems of a single UAV in agricultural applications, especially in large-scale farmland. Improving operation efficiency through multi-UAV cooperative navigation is one important way to address these problems. This study proposed a laser-tracking leader-follower automatic cooperative navigation system for multi-UAVs. The leader in the cluster fires a laser beam at the follower, and the follower performs a visual tracking flight according to the position of the light spot on its laser receiving device. Based on the existing kernel correlation filter (KCF) tracking algorithm, an improved real-time spot tracking method was proposed. Compared with the traditional KCF algorithm, the recognition and tracking rate of the optimized algorithm increased from 70% to 95% in an indoor environment and from 20% to 90% outdoors. The navigation control method was studied from two aspects: a distance coordinate transformation model based on a micro-gyroscope, and the navigation control strategy. By correcting the deviation of the spot at different angles through a coordinate correction algorithm, the spot position error was reduced from a maximum of (3.12, −3.66) cm to (0.14, 0.12) cm. An image coordinate conversion model was established for a complementary metal-oxide-semiconductor (CMOS) camera and the laser receiving device at different mounting distances. The laser receiving device was divided into four regions, S0-S3, and the speed in each region was calculated using a discrete Kalman filter without control input. Outdoor flight experiments with two UAVs were carried out using this system. The results show that the average flight error of the two UAVs on the X-axis is 5.2 cm with a coefficient of variation of 0.0181, and the average flight error on the Z-axis is 7.3 cm with a coefficient of variation of 0.0414. This study demonstrated the feasibility and adaptability of the developed system for multi-UAV cooperative navigation.
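The region-speed estimation above uses a discrete Kalman filter with no control input. As an illustrative sketch only (not the authors' implementation), one axis of the laser-spot position can be smoothed with a constant-velocity model; the matrices, noise levels, and sample measurements below are all hypothetical:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete Kalman filter with no
    control input (state model x_k = F x_{k-1} + w, measurement z = H x + v)."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical constant-velocity model for one axis of the spot position
dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

x = np.array([0.0, 0.0])
P = np.eye(2)
for z in [0.10, 0.12, 0.11, 0.13, 0.12]:   # noisy spot positions (m), made up
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(round(float(x[0]), 3))  # smoothed position estimate
```

The filter's speed output per region would then feed the follower's velocity command; the exact region-to-command mapping is not given in the abstract.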
Keywords: two UAVs; cooperative; visual navigation; laser tracking
Depth recovery for unstructured farmland road image using an improved SIFT algorithm (Cited by: 3)
19
Authors: Lijian Yao, Dong Hu, Zidong Yang, Haibin Li, Mengbo Qian. International Journal of Agricultural and Biological Engineering (SCIE, EI, CAS), 2019, No. 4, pp. 141-147 (7 pages)
Road visual navigation relies on accurate road models. This study aimed to propose an improved scale-invariant feature transform (SIFT) algorithm for recovering depth information from farmland road images, providing a reliable path for visual navigation. The mean image of the pixel values in five channels (R, G, B, S and V) was treated as the inspected image, and the feature points of the inspected image were extracted with the Canny algorithm, to achieve precise location of the feature points and ensure their uniformity and density. The mean value of the pixels in the 5×5 neighborhood around each feature point, sampled at 45° intervals in eight directions, was then treated as the feature vector, and the differences of the feature vectors were calculated for preliminary matching of the left and right image feature points. To obtain the depth information of farmland road images, the energy method of feature points was used to eliminate mismatched points. Experiments with a binocular stereo vision system showed that the matching accuracy and time consumed for depth recovery using the improved SIFT algorithm were 96.48% and 5.6 s, respectively, with a depth-recovery accuracy of −7.17% to 2.97% within a certain sight distance. The mean uniformity, time consumed, and matching accuracy for all 60 images under various weather and road conditions were 50%-70%, 5.0-6.5 s, and higher than 88%, respectively, indicating that the performance of the improved SIFT algorithm in obtaining feature points (e.g., uniformity, matching accuracy, and real-time capability) was superior to that of the conventional SIFT algorithm. This study provides an important reference for machine-vision-based navigation of agricultural equipment.
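The eight-direction neighborhood descriptor and difference-based preliminary matching can be sketched as follows. This is a simplified illustration of the idea only, not the paper's code: the images, threshold, and exact sampling scheme are assumptions.

```python
import numpy as np

def eight_dir_descriptor(img, y, x, radius=2):
    """Simplified feature-point descriptor: the intensity at one pixel per
    direction (eight directions at 45-degree intervals, at the edge of the
    5x5 neighbourhood), relative to the local 5x5 mean."""
    dirs = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
            (0, -1), (1, -1), (1, 0), (1, 1)]
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    base = patch.mean()
    return np.array([img[y + dy * radius, x + dx * radius] - base
                     for dy, dx in dirs], dtype=float)

def preliminary_match(desc_l, desc_r, thresh=10.0):
    """Preliminary matching: accept if the descriptor difference is small.
    The threshold is an assumption for this sketch."""
    return float(np.abs(desc_l - desc_r).sum()) < thresh

rng = np.random.default_rng(0)
img_l = rng.integers(0, 255, (20, 20)).astype(float)
img_r = img_l + rng.normal(0, 0.3, (20, 20))   # right image: small noise only

d_l = eight_dir_descriptor(img_l, 10, 10)
d_r = eight_dir_descriptor(img_r, 10, 10)
print(preliminary_match(d_l, d_r))
```

In the paper's pipeline, such preliminary matches would then be filtered by the energy method of feature points before triangulating depth from the binocular pair.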
Keywords: scale-invariant feature transform (SIFT); feature matching; Canny operator; energy method of feature point; farmland road; depth recovery; visual navigation
Neural Networks for Omni-View Road Image Understanding
20
Authors: Zhigang Zhu, Guangyou Xu. Journal of Computer Science &amp; Technology (SCIE, EI, CSCD), 1996, No. 6, pp. 570-580 (11 pages)
This paper presents a new approach to outdoor road scene understanding using omni-view images and backpropagation networks. Both the road directions used for vehicle heading and the road categories used for vehicle localization are determined by the integrated system. The work has three main features. First, an omni-view image sensor is used to extract image samples, and the original image is preprocessed so that the inputs to the network are rotation-invariant and simple. Second, the network size, especially the number of hidden units, is decided by analysis of systematic experimental results. Finally, the internal representation, which reveals the properties of the neural network, is analyzed from the viewpoint of visual signal processing. Experimental results with real scene images are encouraging.
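One standard way to obtain a rotation-invariant input like the one mentioned above is to sample the omni-view image along a circular ring and keep only the DFT magnitude: a rotation of the camera becomes a circular shift of the ring samples, which leaves the spectrum magnitude unchanged. This sketch is an assumption about the preprocessing, not the paper's exact method:

```python
import numpy as np

def rotation_invariant_input(ring_samples):
    """Map a circular scan of the omni-view image to a rotation-invariant
    vector: a rotation circularly shifts the samples, and the magnitude of
    the DFT is unchanged by circular shifts."""
    spec = np.abs(np.fft.rfft(ring_samples))
    return spec / (np.linalg.norm(spec) + 1e-9)   # scale-normalised

# Hypothetical ring of 32 intensity samples around the omni-view center
ring = np.sin(np.linspace(0, 4 * np.pi, 32, endpoint=False))
shifted = np.roll(ring, 5)            # same scene, rotated camera

a = rotation_invariant_input(ring)
b = rotation_invariant_input(shifted)
print(np.allclose(a, b))  # True: the representation ignores rotation
```

Such a vector could serve as the "rotation-invariant and simple" network input; the paper's actual preprocessing may differ.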
Keywords: neural network; image understanding; visual navigation; rotation-invariant; omni-view image