Bird flocking is a paradigmatic case of self-organised collective behaviour in biology. Stereo camera systems are employed to observe flocks of starlings, jackdaws, and chimney swifts, mainly on a spot-fixed basis. A portable, non-fixed, stereo-vision-based flocking observation system, named FlockSeer, is developed by the authors for observing more species of bird flocks in field scenarios. Compared with existing spot-fixed observing systems, FlockSeer addresses the challenges of extrinsic calibration, camera synchronisation and field movability. A measurement and sensor fusion approach is utilised for rapid calibration, and a light-based synchronisation approach is used to simplify the hardware configuration. FlockSeer has been implemented and tested across six cities in three provinces and has accomplished diverse flock-tracking tasks, accumulating behavioural data on four species, including egrets, with up to 300 resolvable trajectories. The authors reconstructed the trajectories of a flock of egrets under disturbed conditions to verify the system's practicality and reliability. In addition, we analysed the accuracy of identifying nearest neighbours and examined the similarity between the reconstructed trajectories and the Couzin model. Experimental results demonstrate that the developed flocking observation system is highly portable and is convenient and swift to deploy in wetland-like or coast-like field sites. Its observation process is reliable and practical and can effectively support the understanding and modelling of bird flocking behaviours.
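The nearest-neighbour analysis mentioned above amounts to ranking pairwise Euclidean distances between reconstructed 3-D positions at each frame. The sketch below is a minimal illustration of that step, not the authors' pipeline; the function name and the toy coordinates are assumptions.

```python
import numpy as np

def nearest_neighbours(positions, k=1):
    """For each bird, return the indices of its k nearest neighbours
    (Euclidean distance on 3-D positions at one frame)."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)   # a bird is not its own neighbour
    return np.argsort(dist, axis=1)[:, :k]

# Toy flock of four birds along a line (positions in metres).
pts = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.5, 0, 0], [6.0, 0, 0]])
print(nearest_neighbours(pts, k=1).ravel())  # -> [1 0 1 2]
```

Applied frame by frame, such an index list can be compared against ground-truth annotations to score nearest-neighbour identification accuracy.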
In nature, various animal groups such as bird flocks display proficient collective navigation, achieved by maintaining high consistency and cohesion simultaneously. Both metric and topological interactions have been explored to ensure high consistency within groups. The topological interactions found in bird flocks are more cohesive than metric interactions against external perturbations, especially the spatially balanced topological interaction (SBTI). However, it is revealed that in complex environments, pursuing cohesion via existing interactions compromises consistency. The authors introduce an innovative solution, the assemble topological interaction, to address this challenge. In contrast to static interaction rules, the new interaction empowers individuals with the self-awareness to adapt to complex environments by switching between interactions through visual cues. Most individuals employ the high-consistency k-nearest topological interaction when not facing splitting threats; in the presence of such threats, some switch to the high-cohesion SBTI to avert splitting. The assemble topological interaction thus transcends the limits of the trade-off between consistency and cohesion. In addition, by comparing groups with varying degrees of these two features, the authors demonstrate that group effects are vital for efficient navigation led by a minority of informed agents. Finally, real-world drone-swarm experiments validate the applicability of the proposed interaction to artificial robotic collectives.
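The k-nearest topological rule referred to above selects neighbours by rank rather than by metric range: each agent interacts with its k closest peers no matter how far away they are. The following Vicsek-style alignment step is a minimal sketch of that rule under assumed 2-D kinematics; it is not the authors' full model, and the update equation is illustrative.

```python
import numpy as np

def k_nearest_alignment(positions, headings, k=2):
    """One alignment step under k-nearest topological interaction:
    each agent adopts the normalised mean heading of itself and its
    k nearest neighbours (neighbours chosen by rank, not range)."""
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :k]
    out = np.empty_like(headings)
    for i in range(len(positions)):
        mean = headings[i] + headings[nbrs[i]].sum(axis=0)
        out[i] = mean / np.linalg.norm(mean)
    return out

# Four agents on a line; one heads off at a right angle.
pos = np.array([[0.0, 0], [1.0, 0], [2.0, 0], [3.0, 0]])
hdg0 = np.array([[1.0, 0], [1.0, 0], [1.0, 0], [0.0, 1]])
hdg = k_nearest_alignment(pos, hdg0, k=2)
```

After one step the misaligned agent is pulled towards the group heading, so the polarisation (norm of the mean heading) rises, which is the "consistency" that the interaction is designed to maintain.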
The problem of triangular lattice formation in robot swarms has been investigated extensively in the literature, but existing algorithms can hardly maintain comparable performance from swarm simulation to real multi-robot scenarios, owing to limited computation power or the restricted field of view (FOV) of robot sensors. Accordingly, a distributed solution for triangular lattice formation in robot swarms with minimal sensing and computation is proposed and developed in this study. Each robot is equipped with a sensor with a limited FOV that provides only a single ternary digit of information about its neighbouring environment. At each time step, the motion command is determined directly from this ternary sensing result alone. Circular motions with a certain level of randomness lead the robot swarms to stable triangular lattice formations with high quality and robustness. Extensive numerical simulations and multi-robot experiments are conducted, and the results demonstrate and validate the efficiency of the proposed approach. The minimised sensing and computation requirements pave the way for massive, low-cost deployment and for implementation in swarms of miniature robots.
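The key point above is that one ternary digit maps straight to a motion command, with a small random component in the circular motion. The sketch below shows the shape of such a controller as a three-entry lookup table; the semantics of the three readings and the numeric (v, omega) values are purely illustrative assumptions, not the authors' tuned policy.

```python
import random

# Hypothetical ternary readings: 0 = nothing in FOV, 1 = neighbour too
# close, 2 = neighbour at comfortable range.  The (linear, angular)
# velocity pairs are illustrative values, not the published policy.
POLICY = {
    0: (0.5, 0.8),    # explore on a wide circular arc
    1: (0.3, -1.2),   # too close: tighten the turn away
    2: (0.4, 0.2),    # comfortable: gentle arc to hold the lattice
}

def motion_command(ternary_reading, noise=0.1, rng=random):
    """Map one ternary sensor digit directly to a (v, omega) command,
    adding bounded random jitter to the turn rate."""
    v, omega = POLICY[ternary_reading]
    return v, omega + rng.uniform(-noise, noise)
```

The appeal of this structure is that it needs no state, no neighbour positions and no arithmetic beyond a table lookup, which is what makes it plausible for miniature robots.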
Feedback flow information is significant for enabling underwater locomotion controllers with higher adaptability and efficiency in varying environments. Inspired by fish sensing their external flow via near-body pressure, a computational scheme is proposed and developed in this paper. In conjunction with the scheme, Computational Fluid Dynamics (CFD) is employed to study bio-inspired fish swimming hydrodynamics. The spatial distribution and temporal variation of the near-body pressure of the fish are studied over the whole computational domain. Furthermore, a filtering algorithm is designed and implemented to fuse near-body pressure at one or multiple points for estimating the external flow. The simulation results demonstrate that the proposed computational scheme and its corresponding algorithm are both effective in predicting the inlet flow velocity from near-body pressure at distributed spatial points.
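As a rough intuition for how pressure at several body points can be fused into one inlet-speed estimate, the toy below inverts Bernoulli's relation (dp = ½ρU²) at each point, averages, and applies a first-order low-pass filter. This is a stand-in under stated physical assumptions, not the paper's actual filtering algorithm, and the density and filter gain are illustrative.

```python
import math

RHO = 1000.0  # water density, kg/m^3 (assumed)

def velocity_from_pressures(delta_ps, alpha=0.5, prev_estimate=0.0):
    """Fuse gauge pressures (Pa) measured at several near-body points
    into one inlet-speed estimate: invert Bernoulli at each point,
    average, then blend with the previous estimate (low-pass filter)."""
    speeds = [math.sqrt(2.0 * max(dp, 0.0) / RHO) for dp in delta_ps]
    mean_speed = sum(speeds) / len(speeds)
    return alpha * mean_speed + (1.0 - alpha) * prev_estimate

# A 1 m/s flow produces a stagnation gauge pressure of 0.5*1000*1^2 = 500 Pa.
est = velocity_from_pressures([500.0, 500.0], alpha=1.0)
print(est)  # -> 1.0
```

In the paper's setting the pressure-to-velocity map comes from the CFD solution rather than plain Bernoulli, but the fuse-then-filter structure is the same idea.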
This article concentrates on ground-vision-guided autonomous landing of a fixed-wing Unmanned Aerial Vehicle (UAV) in Global Navigation Satellite System (GNSS)-denied environments. Cascaded deep learning models are developed and employed for image detection and for improving its accuracy during UAV autolanding, respectively. Firstly, we design a target bounding-box detection network, Bbox Locate-Net, to extract the image coordinate of the flying object. Secondly, the detected coordinate is fused into a spatial localisation with an extended Kalman filter estimator. Thirdly, a point regression network, Point Refine-Net, is developed to improve detection accuracy whenever the flying vehicle's motion-continuity check fails. The proposed approach accomplishes a closed-loop mutual inspection of spatial positioning and image detection, and automatically corrects inaccurate coordinates within a certain range. Experimental results demonstrate that our method outperforms previous works in terms of accuracy, robustness and real-time criteria. Specifically, the newly developed Bbox Locate-Net achieves over 500 fps, almost five times the published state of the art in this field, with comparable localisation accuracy.
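The fusion step above feeds detected coordinates into an extended Kalman filter. As a minimal, linear stand-in for that estimator, the sketch below runs a 1-D constant-velocity Kalman filter on triangulated positions; the frame rate, noise covariances and state layout are assumptions, and the real system filters a full 3-D pose through nonlinear camera models.

```python
import numpy as np

dt = 0.02                                  # 50 Hz camera rate (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = np.eye(2) * 1e-4                       # process noise (assumed)
R = np.array([[0.05]])                     # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict + update cycle with measurement z (triangulated position)."""
    x, P = F @ x, F @ P @ F.T + Q          # predict
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # update with the innovation
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.0, 0.02, 0.04]:                # target moving at roughly 1 m/s
    x, P = kf_step(x, P, np.array([z]))
```

The filtered state also supplies the motion-continuity check: when a new detection falls far outside the predicted position, the refinement network is triggered.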
In this paper, a novel deep learning dataset, called Air2Land, is presented for advancing state-of-the-art object detection and pose estimation in the context of fixed-wing unmanned aerial vehicle autolanding scenarios. It bridges vision and control for ground-based vision guidance systems by providing multi-modal data obtained by diverse sensors, and pushes forward the development of computer vision and autopilot algorithms targeted at visually assisted landing of a fixed-wing vehicle. The dataset is composed of sequential stereo images and synchronised sensor data, namely the flying vehicle pose and Pan-Tilt Unit angles, simulated in various climate conditions and landing scenarios. Since real-world automated landing data are very limited, the proposed dataset provides a necessary foundation for vision-based tasks such as flying-vehicle detection, key-point localisation and pose estimation. In addition to providing plentiful and scene-rich data, the dataset covers high-risk scenarios that are hardly accessible in reality. The dataset is openly available at https://github.com/micros-uav/micros_air2land.
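Working with such a dataset typically starts by matching each stereo frame to the temporally nearest pose/Pan-Tilt record. The helper below shows one common way to do that with a sorted timestamp search; the record layout and timestamps are hypothetical, not the actual Air2Land on-disk format (see the repository for that).

```python
import bisect

def nearest_sync(frame_ts, sensor_ts, sensor_records):
    """Match each stereo-frame timestamp to the temporally nearest
    sensor record (vehicle pose + Pan-Tilt angles).  Assumes sensor_ts
    is sorted ascending; record contents are hypothetical."""
    out = []
    for t in frame_ts:
        i = bisect.bisect_left(sensor_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_ts)]
        best = min(candidates, key=lambda j: abs(sensor_ts[j] - t))
        out.append(sensor_records[best])
    return out

frames = [0.10, 0.26]                      # frame timestamps, seconds
ts = [0.0, 0.1, 0.2, 0.3]                  # sensor timestamps, seconds
poses = ["p0", "p1", "p2", "p3"]           # placeholder pose records
print(nearest_sync(frames, ts, poses))     # -> ['p1', 'p3']
```

When the dataset already provides hardware-synchronised pairs, this step reduces to an index lookup; the search form is only needed when the two streams run at different rates.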
Funding: National Natural Science Foundation of China, Grant/Award Number 62103451.
Funding: This research was supported by the National Natural Science Foundation of China, Grant/Award Number 61973327.
Funding: This work was jointly supported by the National Natural Science Foundation of China under Grant No. 62103451.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61005077, 51105365 and 61273347, in part by the Research Fund for the Doctoral Programme of Higher Education of China under Grant No. 20124307110002, and in part by the Foundation for the Author of Excellent Doctoral Dissertation of Hunan Province under Grant No. YB2011B0001. The authors would like to thank Daibing Zhang for his sincere guidance and constructive comments. The corresponding author (Tianjiang Hu) would like to thank Dr. Xue-feng Yuan of the University of Manchester, UK, for the collaboration during Dr. Hu's academic visit from February 2013 to August 2013 at the Manchester Institute of Biotechnology.
Funding: Supported by the National Natural Science Foundation of China (No. 61973327).