Funding: Supported by the Aeronautical Science Foundation of China (No. 20130542025).
Abstract: A new landing region selection algorithm for an unmanned helicopter is proposed based on an attention model. Unlike the original attention model, the selection algorithm incorporates properties of candidate safe landing regions (e.g., depth, regional color, and motion features). Furthermore, regional color and motion features are fused directly into the saliency map, because these features do not have the "central-peripheral" property. Experimental results validate the feasibility and efficiency of this approach.
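As a rough illustration of fusing features that lack the "central-peripheral" property directly into the saliency map, the following Python sketch combines a conventional center-surround channel with regional color and motion maps by weighted summation. The function names, pyramid levels, weights, and the assumption that the regional maps share the image's resolution are all illustrative, not the paper's implementation.

```python
import numpy as np
import cv2


def center_surround_saliency(gray, fine=2, coarse=5):
    """Itti-style center-surround channel: |fine scale - coarse scale|, normalized to [0, 1]."""
    pyramid = [gray.astype(np.float32)]
    for _ in range(coarse):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    size = (gray.shape[1], gray.shape[0])  # cv2.resize expects (width, height)
    center = cv2.resize(pyramid[fine], size)
    surround = cv2.resize(pyramid[coarse], size)
    return cv2.normalize(np.abs(center - surround), None, 0.0, 1.0, cv2.NORM_MINMAX)


def fused_saliency(gray, region_color_map, region_motion_map, weights=(0.5, 0.25, 0.25)):
    """Fuse regional color and motion maps directly into the saliency map,
    bypassing the center-surround operator for those two channels."""
    cs = center_surround_saliency(gray)
    color = cv2.normalize(region_color_map.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    motion = cv2.normalize(region_motion_map.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    return weights[0] * cs + weights[1] * color + weights[2] * motion
```

In such a scheme, candidate landing regions would then be selected from the peaks of the fused map; how the regional maps are computed and how the weights are chosen is left open here.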
Funding: Supported by the National Natural Science Foundation of China (No. 61973327).
Abstract: This article focuses on ground-vision-guided autonomous landing of a fixed-wing Unmanned Aerial Vehicle (UAV) in Global Navigation Satellite System (GNSS)-denied environments. Cascaded deep learning models are developed for image detection and for improving its accuracy during UAV autolanding. First, a target bounding box detection network, Bbox Locate-Net, is designed to extract the image coordinate of the flying vehicle. Second, the detected coordinate is fused into a spatial localization estimate with an extended Kalman filter. Third, a point regression network, Point Refine-Net, is developed to improve detection accuracy whenever the flying vehicle's motion continuity is found unacceptable. The proposed approach accomplishes a closed-loop mutual inspection of spatial positioning and image detection, and automatically corrects inaccurate coordinates within a certain range. Experimental results demonstrate that the method outperforms previous work in terms of accuracy, robustness, and real-time performance. Specifically, the newly developed Bbox Locate-Net reaches over 500 fps, almost five times the published state of the art in this field, with comparable localization accuracy.
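The closed-loop idea of checking image detection against a motion prediction can be sketched as follows. This is an illustrative Python example under stated assumptions, not the paper's implementation: detect_bbox() and refine_point() are placeholders standing in for Bbox Locate-Net and Point Refine-Net, and a linear constant-velocity Kalman filter in image coordinates stands in for the full spatial extended Kalman filter, so that the motion-continuity gate is easy to show.

```python
import numpy as np


class PixelTrackKF:
    """Constant-velocity Kalman filter over image coordinates; state is (u, v, du, dv)."""

    def __init__(self, q=1.0, r=4.0):
        self.x = np.zeros(4)                    # position and velocity in pixels
        self.P = np.eye(4) * 100.0              # state covariance
        self.F = np.eye(4)                      # constant-velocity transition model
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                   # only (u, v) is measured
        self.Q = np.eye(4) * q                  # process noise
        self.R = np.eye(2) * r                  # measurement noise

    def predict(self):
        """Propagate the state and return the predicted image coordinate."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.H @ self.x

    def update(self, z):
        """Fuse a measured image coordinate into the state."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P


def track_step(kf, frame, detect_bbox, refine_point, gate_px=20.0):
    """One closed-loop step: detect, check motion continuity, optionally refine, then update."""
    z = np.array(detect_bbox(frame), dtype=float)      # coarse (u, v) from the detector
    z_pred = kf.predict()
    if np.linalg.norm(z - z_pred) > gate_px:           # continuity check failed: refine the point
        z = np.array(refine_point(frame, tuple(z)), dtype=float)
    kf.update(z)
    return kf.x[:2]                                    # filtered image coordinate
```

The key design choice illustrated here is that the refinement network is invoked only when the detection disagrees with the motion prediction beyond a gate, which keeps the fast coarse detector on the critical path most of the time.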