Abstract: Combining a heuristic algorithm (HA) developed from specific knowledge of cooperative multiple target attack (CMTA) tactics with particle swarm optimization (PSO), a heuristic particle swarm optimization (HPSO) algorithm is proposed to solve the decision-making (DM) problem. The HA efficiently searches for a local optimum in the neighborhood of a solution, while PSO explores the search space for candidate solutions. By combining the advantages of both, HPSO can find the global optimum quickly and efficiently. It obtains the DM solution by seeking the optimal assignment of missiles of friendly fighter aircraft (FAs) to hostile FAs. Simulation results show that the proposed algorithm outperforms the general PSO algorithm and two genetic algorithm (GA) based algorithms in searching for the best solution to the DM problem.
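As a rough illustration of the hybrid idea, the sketch below combines a standard discrete PSO with a generic greedy local-search step for a missile-to-target assignment problem. The kill-probability matrix, the weapon-target-assignment-style objective, and the local-search rule are assumptions of this sketch, not the CMTA-specific heuristic or objective used in the paper.

```python
# Hypothetical sketch of a heuristic PSO (HPSO) for missile-to-target assignment.
# Assumptions (not from the paper): fitness is a weapon-target-assignment-style
# damage objective, and the CMTA-specific heuristic is replaced by greedy moves.
import random

N_MISSILES, N_TARGETS = 6, 3
P = [[random.random() for _ in range(N_TARGETS)] for _ in range(N_MISSILES)]

def fitness(assign):
    # Expected total damage with diminishing returns per target.
    total = 0.0
    for t in range(N_TARGETS):
        survive = 1.0
        for m, tm in enumerate(assign):
            if tm == t:
                survive *= (1.0 - P[m][t])
        total += 1.0 - survive
    return total

def local_search(assign):
    # Heuristic step: try moving each missile to every target, keeping improvements.
    best = list(assign)
    for m in range(N_MISSILES):
        for t in range(N_TARGETS):
            cand = list(best)
            cand[m] = t
            if fitness(cand) > fitness(best):
                best = cand
    return best

def hpso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    to_assign = lambda x: [min(N_TARGETS - 1, max(0, round(v))) for v in x]
    pos = [[random.uniform(0, N_TARGETS - 1) for _ in range(N_MISSILES)]
           for _ in range(n_particles)]
    vel = [[0.0] * N_MISSILES for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    gbest = max(pbest, key=lambda p: fitness(to_assign(p)))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_MISSILES):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(N_TARGETS - 1.0, max(0.0, pos[i][d] + vel[i][d]))
            if fitness(to_assign(pos[i])) > fitness(to_assign(pbest[i])):
                pbest[i] = list(pos[i])
        # Heuristic refinement of the best-known assignment (stands in for the
        # CMTA-specific heuristic described in the paper).
        refined = [float(t) for t in local_search(to_assign(gbest))]
        gbest = max(pbest + [gbest, refined], key=lambda p: fitness(to_assign(p)))
    return to_assign(gbest), fitness(to_assign(gbest))

print(hpso())
```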
Funding: Supported by Major Projects for Science and Technology Innovation 2030 (Grant No. 2018AA0100800), the Equipment Pre-research Foundation of Laboratory (Grant No. 61425040104), and in part by the Jiangsu Province "333" Project under Grant BRA2019051.
Abstract: Game theory can be applied to the air combat decision-making problem of multiple unmanned combat air vehicles (UCAVs). However, it is difficult to obtain satisfactory decisions by relying solely on air combat situation information, because a complex air combat environment also contains a large amount of time-sensitive information. In this paper, a constraint strategy game approach is developed to generate intelligent decisions for multiple UCAVs in a complex air combat environment using both air combat situation information and time-sensitive information. First, a constraint strategy game is employed to model the attack-defense decision-making problem in a complex air combat environment. Then, an algorithm based on linear programming and linear inequalities (CSG-LL) is proposed for solving the constraint strategy game. Finally, an example is given to illustrate the effectiveness of the proposed approach.
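The CSG-LL algorithm itself is not reproduced here; the sketch below shows only the standard linear-programming core for solving a zero-sum matrix game with scipy, the kind of building block such a solver rests on. The payoff matrix and strategy sets are illustrative.

```python
# Minimal sketch: solving a zero-sum matrix game by linear programming.
# This shows only the LP building block; the paper's CSG-LL additionally
# handles constraint strategies and time-sensitive information.
import numpy as np
from scipy.optimize import linprog

# Illustrative payoff matrix A[i, j]: row player (friendly UCAVs) chooses
# strategy i, column player (hostile UCAVs) chooses strategy j.
A = np.array([[0.3, 0.7, 0.2],
              [0.6, 0.1, 0.5],
              [0.4, 0.4, 0.6]])
m, n = A.shape

# Variables: x_1..x_m (mixed strategy) and v (game value).
# Maximize v  s.t.  A^T x >= v * 1,  sum(x) = 1,  x >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                                 # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, v = res.x[:m], res.x[-1]
print("mixed strategy:", np.round(x, 3), "game value:", round(v, 3))
```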
Abstract: This paper considers the problem of applying data mining techniques to the aeronautical field. The truncation method, one of the techniques of aeronautical data mining, can be used to handle air-combat behavior data efficiently. An air-combat behavior data mining technique based on the truncation method is proposed to discover air-combat rules and patterns. A simulation platform for air-combat behavior data mining that supports two fighters is implemented. The simulation results show that the proposed technique is feasible in terms of both efficiency and effectiveness.
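The abstract does not specify the truncation method in detail, so the sketch below is only a loose interpretation: discretized maneuver sequences are truncated into fixed-length windows and the most frequent windows are reported as candidate behavior patterns. The behavior symbols, window length, and support threshold are all illustrative assumptions.

```python
# Loose illustration only: the abstract does not define the truncation method,
# so this sketch simply truncates discretized maneuver sequences into fixed-length
# windows and counts frequent windows as candidate air-combat behavior patterns.
from collections import Counter

# Hypothetical discretized behavior logs for two fighters (symbols are illustrative).
sorties = [
    ["climb", "turn_left", "fire", "break", "climb", "turn_left", "fire"],
    ["turn_left", "fire", "break", "dive", "turn_left", "fire", "break"],
]

def truncate(seq, length=3):
    """Truncate a behavior sequence into overlapping fixed-length windows."""
    return [tuple(seq[i:i + length]) for i in range(len(seq) - length + 1)]

def mine_patterns(logs, length=3, min_support=2):
    counts = Counter(w for seq in logs for w in truncate(seq, length))
    return [(pat, c) for pat, c in counts.most_common() if c >= min_support]

for pattern, support in mine_patterns(sorties):
    print(pattern, "support:", support)
```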
Funding: Supported by the Aeronautical Science Foundation of China (2017ZC53033) and the Seed Foundation of Innovation and Creation for Graduate Students in Northwestern Polytechnical University (CX2020156).
Abstract: To improve the autonomous ability of unmanned aerial vehicles (UAVs) to carry out air combat missions, many artificial-intelligence-based studies on autonomous air combat maneuver decision-making have been carried out, but these studies usually address individual decision-making in 1v1 scenarios, which rarely occur in actual air combat. Building on research into 1v1 autonomous air combat maneuver decisions, this paper builds a multi-UAV cooperative air combat maneuver decision model based on multi-agent reinforcement learning. First, a bidirectional recurrent neural network (BRNN) is used to achieve communication between individual UAVs, and the multi-UAV cooperative air combat maneuver decision model is established under the actor-critic architecture. Second, by combining target allocation with air combat situation assessment, the tactical goal of the formation is merged with the reinforcement learning goal of each UAV, and a cooperative tactical maneuver policy is generated. The simulation results show that the model established in this paper can obtain a cooperative maneuver policy through reinforcement learning, and this policy can guide the UAVs to gain the overall situational advantage and defeat their opponents through tactical cooperation.
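A minimal sketch of the BRNN-communication idea follows, assuming a bidirectional GRU run over the agent axis so that each UAV's actor head can condition on encoded teammate observations, with a shared critic head in the actor-critic style. The observation and action dimensions and the layer sizes are assumptions, not the paper's network design.

```python
# Hedged sketch (dimensions and layer sizes are assumptions): a bidirectional RNN
# over the agent axis lets each UAV condition its action on encoded teammate
# observations, following the BRNN-communication idea described in the abstract.
import torch
import torch.nn as nn

class BRNNCommActorCritic(nn.Module):
    def __init__(self, obs_dim=12, act_dim=4, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        # The bidirectional GRU treats the n_agents axis as a sequence, so
        # information flows both ways along the formation.
        self.comm = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.actor = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, act_dim))
        self.critic = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, obs):
        # obs: (batch, n_agents, obs_dim)
        h, _ = self.comm(torch.relu(self.encoder(obs)))  # (batch, n_agents, 2*hidden)
        logits = self.actor(h)                           # per-agent action logits
        value = self.critic(h.mean(dim=1))               # formation-level value
        return logits, value

# Toy forward pass: a formation of 4 UAVs, batch of 8 situations.
model = BRNNCommActorCritic()
logits, value = model(torch.randn(8, 4, 12))
print(logits.shape, value.shape)  # torch.Size([8, 4, 4]) torch.Size([8, 1])
```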
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61573285 and 62003267), the Open Fund of the Key Laboratory of Data Link Technology of China Electronics Technology Group Corporation (Grant No. CLDL-20182101), and the Natural Science Foundation of Shaanxi Province (Grant No. 2020JQ220).
Abstract: Aiming at intelligent decision-making of an unmanned aerial vehicle (UAV) based on situation information in air combat, a novel maneuvering decision method based on deep reinforcement learning is proposed in this paper. The autonomous maneuvering model of the UAV is established as a Markov Decision Process. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and the Deep Deterministic Policy Gradient (DDPG) algorithm from deep reinforcement learning are used to train the model, and the experimental results of the two algorithms are analyzed and compared. The simulation results show that, compared with DDPG, TD3 has stronger decision-making performance and faster convergence, and is better suited to solving such combat problems. The proposed algorithm enables a UAV to autonomously make maneuvering decisions based on situation information such as position, speed, and relative azimuth, adjust its actions to approach and successfully strike the enemy, providing a new method for intelligent maneuvering decisions during air combat.
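For reference, the sketch below shows the three ingredients that distinguish TD3 from DDPG in such a comparison: target-policy smoothing, clipped double-Q targets, and delayed actor updates. The network sizes, action bounds, and hyperparameters are placeholders rather than the paper's settings.

```python
# Hedged sketch of the TD3 ingredients the abstract compares against DDPG:
# target-policy smoothing, clipped double-Q targets, and delayed actor updates.
# Dimensions, bounds, and hyperparameters are placeholders, not the paper's values.
import torch
import torch.nn as nn

obs_dim, act_dim, max_action = 8, 3, 1.0
gamma, noise_std, noise_clip, policy_delay = 0.99, 0.2, 0.5, 2

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 256), nn.ReLU(), nn.Linear(256, o))

actor_target = mlp(obs_dim, act_dim)          # target policy network
critic1_target = mlp(obs_dim + act_dim, 1)    # twin target critics
critic2_target = mlp(obs_dim + act_dim, 1)

def td3_targets(reward, next_obs, done):
    """Clipped double-Q target with target-policy smoothing."""
    with torch.no_grad():
        # Target policy smoothing: add clipped noise to the target action.
        noise = (torch.randn(next_obs.shape[0], act_dim) * noise_std
                 ).clamp(-noise_clip, noise_clip)
        next_act = (torch.tanh(actor_target(next_obs)) * max_action + noise
                    ).clamp(-max_action, max_action)
        q_in = torch.cat([next_obs, next_act], dim=-1)
        # Clipped double-Q: take the minimum of the twin target critics.
        q_next = torch.min(critic1_target(q_in), critic2_target(q_in))
        return reward + gamma * (1.0 - done) * q_next

# Toy batch; in a full TD3 loop the actor is updated only every `policy_delay`
# critic updates (the third ingredient that distinguishes TD3 from DDPG).
y = td3_targets(torch.zeros(32, 1), torch.randn(32, obs_dim), torch.zeros(32, 1))
print(y.shape)  # torch.Size([32, 1])
```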
Funding: Co-supported by the National Natural Science Foundation of China (No. 52272382), the Aeronautical Science Foundation of China (No. 20200017051001), and the Fundamental Research Funds for the Central Universities, China.
Abstract: A highly intelligent Unmanned Combat Aerial Vehicle (UCAV) formation is expected to bring out strengths in Beyond-Visual-Range (BVR) air combat. Although Multi-Agent Reinforcement Learning (MARL) shows outstanding performance in cooperative decision-making, it is challenging for existing MARL algorithms to converge quickly to an optimal strategy for a UCAV formation in BVR air combat, where confrontation is complicated and rewards are extremely sparse and delayed. To solve this problem, this paper proposes an Advantage Highlight Multi-Agent Proximal Policy Optimization (AHMAPPO) algorithm. First, at every step, AHMAPPO records the degree to which the best formation exceeds the average of the formations in parallel environments and carries out additional advantage sampling accordingly. Then, the sampling result is introduced into the update of the actor network to improve its optimization efficiency. Finally, the simulation results show that, compared with several state-of-the-art MARL algorithms, AHMAPPO obtains a better strategy using fewer sample episodes in the UCAV formation BVR air combat simulation environment built in this paper, which reflects the critical features of BVR air combat. AHMAPPO significantly increases the convergence efficiency of the formation strategy, with a maximum improvement of 81.5% relative to the other algorithms.
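One possible reading of the advantage-highlight mechanism is sketched below: the return of the best parallel environment is compared with the mean return, and transitions from that environment are oversampled in proportion to the gap when the actor-update batch is assembled. The normalization and oversampling cap are assumptions of this sketch, not the paper's exact scheme.

```python
# Hedged interpretation of the "advantage highlight" idea: measure how much the
# best parallel environment's return exceeds the mean, and oversample transitions
# from that environment when building the actor-update batch. The scaling rule
# and cap below are assumptions, not the paper's exact scheme.
import random

def highlight_sample(env_batches, env_returns, extra_cap=0.5):
    """env_batches: list (one per parallel env) of transition lists.
    env_returns: episode return achieved in each parallel env."""
    mean_ret = sum(env_returns) / len(env_returns)
    best = max(range(len(env_returns)), key=lambda i: env_returns[i])
    spread = (max(env_returns) - min(env_returns)) or 1.0
    # Degree to which the best formation exceeded the average, normalized to [0, 1).
    highlight = (env_returns[best] - mean_ret) / spread
    batch = [t for b in env_batches for t in b]
    n_extra = int(extra_cap * highlight * len(env_batches[best]))
    batch += random.choices(env_batches[best], k=n_extra)  # additional advantage samples
    random.shuffle(batch)
    return batch

# Toy usage: 4 parallel envs, 5 transitions each (transitions shown as ids).
batches = [[(e, t) for t in range(5)] for e in range(4)]
returns = [1.0, 3.5, 2.0, 1.5]
print(len(highlight_sample(batches, returns)))  # 20 plus a few oversampled transitions
```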
Abstract: This paper presents a rule-based framework for addressing decision-making problems within the context of the "UI-STRIVE" Competition. First, two distinct autonomous confrontation scenarios are described: autonomous air combat and cooperative interception. Second, a State-Event-Condition-Action (SECA) decision-making framework is developed, which integrates the finite state machine and event-condition-action frameworks. This framework provides three products for describing rules: the SECA model, the SECA state chart, and the SECA rule description. Third, situation assessment and target assignment during autonomous air combat are investigated, and the corresponding mathematical models are established. Finally, the rationality and feasibility of the decision-making model are verified through data simulation and analysis.
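A minimal sketch of how a SECA rule set can be executed is given below: rules are keyed by the current state and the triggering event, as in a finite state machine, and each rule carries an ECA-style condition guard and action. The states, events, and conditions are illustrative only, not those of the competition scenarios.

```python
# Minimal sketch of a State-Event-Condition-Action (SECA) rule engine: rules are
# keyed by (state, event) as in a finite state machine, and each rule carries an
# ECA-style condition and action. States, events, and conditions are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SECARule:
    state: str                         # state in which the rule is applicable
    event: str                         # triggering event
    condition: Callable[[dict], bool]  # guard evaluated on the situation
    action: Callable[[dict], None]     # action to execute when the rule fires
    next_state: str                    # state transition on firing

class SECAMachine:
    def __init__(self, rules, initial_state):
        self.rules, self.state = rules, initial_state

    def dispatch(self, event, situation):
        for r in self.rules:
            if r.state == self.state and r.event == event and r.condition(situation):
                r.action(situation)
                self.state = r.next_state
                return
        # No rule fired: stay in the current state.

rules = [
    SECARule("search", "target_detected",
             lambda s: s["range_km"] < 80,
             lambda s: print("assign target, turn to intercept"), "engage"),
    SECARule("engage", "missile_warning",
             lambda s: True,
             lambda s: print("defensive break, deploy countermeasures"), "evade"),
]
fsm = SECAMachine(rules, "search")
fsm.dispatch("target_detected", {"range_km": 55})
print("current state:", fsm.state)  # engage
```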