This article concentrates on ground-vision-guided autonomous landing of a fixed-wing Unmanned Aerial Vehicle (UAV) in Global Navigation Satellite System (GNSS)-denied environments. Cascaded deep learning models are developed for image detection and for improving its accuracy during UAV autolanding. First, we design a target bounding-box detection network, Bbox Locate-Net, to extract the image coordinates of the flying vehicle. Second, the detected coordinates are fused into a spatial localization estimate by an extended Kalman filter. Third, a point regression network, Point Refine-Net, is developed to improve detection accuracy whenever the flying vehicle's motion continuity check fails. The proposed approach accomplishes a closed-loop mutual inspection between spatial positioning and image detection, and automatically corrects inaccurate coordinates within a certain range. Experimental results demonstrate that our method outperforms previous work in accuracy, robustness, and real-time performance. Specifically, the newly developed Bbox Locate-Net achieves over 500 fps, almost five times the published state of the art in this field, with comparable localization accuracy.
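To make the detect-fuse-check-refine loop concrete, the Python sketch below illustrates one plausible realisation of it. It is not the authors' implementation: the paper fuses image coordinates into 3D spatial localization, whereas here a simpler constant-velocity EKF over pixel coordinates is assumed purely for illustration, and the bbox_locate_net / point_refine_net callables are hypothetical stand-ins for Bbox Locate-Net and Point Refine-Net.

```python
# Minimal sketch of the cascaded detect -> fuse -> check -> refine loop.
# The constant-velocity image-space model and Mahalanobis gate are
# illustrative assumptions, not the paper's spatial estimator.
import numpy as np

class ConstantVelocityEKF:
    """EKF with state [u, v, du, dv] (pixel position and velocity)."""
    def __init__(self, dt=1.0 / 30, q=1.0, r=4.0):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4) * 100.0                # state covariance
        self.F = np.eye(4)                        # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * q                    # process noise
        self.H = np.eye(2, 4)                     # observe position only
        self.R = np.eye(2) * r                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def continuity_ok(self, z, gate=9.21):        # chi-square 99%, 2 dof
        """Mahalanobis test: does measurement z fit the predicted motion?"""
        S = self.H @ self.P @ self.H.T + self.R
        d = z - self.H @ self.x
        return float(d @ np.linalg.solve(S, d)) < gate

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def track(frames, bbox_locate_net, point_refine_net):
    """bbox_locate_net and point_refine_net are hypothetical callables,
    each returning a 2-vector of image coordinates."""
    ekf = ConstantVelocityEKF()
    for frame in frames:
        ekf.predict()
        z = np.asarray(bbox_locate_net(frame))          # coarse detection
        if not ekf.continuity_ok(z):                    # continuity check failed:
            z = np.asarray(point_refine_net(frame, z))  # refine the point
        ekf.update(z)
        yield ekf.x[:2]                                 # filtered coordinates
```

Note that a gated measurement triggers the refinement stage rather than being discarded, which mirrors the closed-loop mutual inspection between positioning and detection that the abstract describes.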
In this paper, a novel deep learning dataset, called Air2Land, is presented for advancing state-of-the-art object detection and pose estimation in the context of fixed-wing unmanned aerial vehicle autolanding scenarios. It bridges vision and control for ground-based vision guidance systems with multi-modal data obtained from diverse sensors, and pushes forward the development of computer vision and autopilot algorithms targeted at visually assisted landing of a fixed-wing vehicle. The dataset is composed of sequential stereo images and synchronised sensor data, namely the flying vehicle pose and Pan-Tilt Unit angles, simulated under various climate conditions and landing scenarios. Since real-world automated landing data is very limited, the proposed dataset provides the necessary foundation for vision-based tasks such as flying vehicle detection, key point localisation, and pose estimation. In addition to providing plentiful and scene-rich data, the developed dataset covers high-risk scenarios that are hardly accessible in reality. The dataset is open and available at https://github.com/micros-uav/micros_air2land.
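Since the abstract does not describe the repository layout, the short loader below is purely hypothetical: the left/right image folders and the poses.csv file of per-frame vehicle pose and Pan-Tilt Unit angles are names chosen for illustration, and the actual format should be taken from the GitHub repository.

```python
# Hypothetical loader for one Air2Land sequence. Directory and file names
# (left/, right/, poses.csv) are assumptions; see the repository at
# https://github.com/micros-uav/micros_air2land for the real layout.
import csv
from pathlib import Path

def load_sequence(root):
    """Pair sequential stereo frames with synchronised pose/PTU records."""
    root = Path(root)
    left = sorted((root / "left").glob("*.png"))    # left camera frames
    right = sorted((root / "right").glob("*.png"))  # right camera frames
    with open(root / "poses.csv", newline="") as f:
        poses = list(csv.DictReader(f))             # pose + PTU angles per frame
    assert len(left) == len(right) == len(poses), "streams must be synchronised"
    return list(zip(left, right, poses))
```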
Funding: supported by the National Natural Science Foundation of China (No. 61973327).