Fund: Supported in part by the National Key Laboratory of Air-based Information Perception and Fusion and the Aeronautical Science Foundation of China (Grant No. 20220001068001), the National Natural Science Foundation of China (Grant No. 61673327), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2023-JC-QN-0733), and the China Industry-University-Research Innovation Foundation (Grant No. 2022IT188).
Abstract: Aiming at the problem of multi-UAV pursuit-evasion confrontation, a UAV cooperative maneuver method based on improved multi-agent deep reinforcement learning (MADRL) is proposed. In this method, an improved CommNet network based on a communication mechanism is introduced into a deep reinforcement learning algorithm to handle the multi-agent setting. A gated recurrent unit (GRU) layer is added to the actor network to remember historical environmental states. Subsequently, another GRU is designed as a communication channel in the CommNet core network layer to refine the communication information exchanged between UAVs. Finally, simulation results of the algorithm in two sets of scenarios are given, demonstrating the effectiveness and applicability of the method.
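To make the architecture concrete, the following is a minimal PyTorch sketch of such an actor: a GRU over each UAV's observation history plus a GRU cell serving as a CommNet-style communication channel over mean messages. All class and parameter names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CommNetActor(nn.Module):
    """Actor with a GRU over each UAV's observation history and a
    GRU-cell communication channel refining inter-agent messages."""

    def __init__(self, obs_dim, hidden_dim, action_dim, comm_steps=2):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        # GRU layer that remembers historical environmental states
        self.memory = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # GRU cell acting as the communication channel between UAVs
        self.comm = nn.GRUCell(hidden_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim, action_dim)
        self.comm_steps = comm_steps

    def forward(self, obs_seq):
        # obs_seq: (n_agents, seq_len, obs_dim), one sequence per UAV
        n_agents = obs_seq.size(0)
        h, _ = self.memory(torch.relu(self.encoder(obs_seq)))
        h = h[:, -1, :]  # last hidden state summarizes each UAV's history
        for _ in range(self.comm_steps):
            # CommNet message to agent i: mean of the other agents' states
            msg = (h.sum(dim=0, keepdim=True) - h) / max(n_agents - 1, 1)
            h = self.comm(msg, h)  # refine messages through the channel
        return torch.tanh(self.policy(h))  # one maneuver command per UAV
```

For example, `CommNetActor(12, 64, 4)(torch.randn(3, 8, 12))` would map 8-step observation histories of three UAVs to three 4-dimensional maneuver commands.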
Fund: Supported by the National Natural Science Foundation of China (61601505), the Aeronautical Science Foundation of China (20155196022), and the Shaanxi Natural Science Foundation of China (2016JQ6050).
Abstract: To reach a higher level of autonomy for an unmanned combat aerial vehicle (UCAV) in air combat games, this paper builds an autonomous maneuver decision system. In this system, the air combat game is regarded as a Markov process, so that the air combat situation can be effectively assessed via Bayesian inference. According to the situation assessment result, the system adaptively adjusts the weights of the maneuver decision factors, which makes the objective function more reasonable and helps the UCAV maintain a superior situation. As the air combat game is characterized by high dynamics and significant uncertainty, fuzzy logic is used to build the membership functions of the four maneuver decision factors, enhancing the robustness and effectiveness of the decision results. Accurate prediction of the opponent aircraft is also essential for good decisions; therefore, a prediction model of the opponent aircraft is designed based on the elementary maneuver method. Finally, a moving-horizon optimization strategy is used to model the whole air combat maneuver decision process. Various simulations are performed on a typical scenario test and a close-in dogfight; the results sufficiently demonstrate the superiority of the designed maneuver decision method.
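As an illustration of how fuzzy decision factors and situation-dependent weights can combine into one objective scored over candidate maneuvers, here is a minimal sketch; the membership shapes, factor names, and weight values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def angle_membership(aspect_deg):
    """Illustrative fuzzy membership for the angle factor: near 1 when
    pointing at the opponent's tail, decaying toward head-on geometry."""
    return np.exp(-(aspect_deg / 60.0) ** 2)

def range_membership(r_m, r_opt=1500.0, sigma=800.0):
    """Gaussian membership peaking at an assumed optimal weapon range."""
    return np.exp(-((r_m - r_opt) / sigma) ** 2)

def decision_objective(candidate, weights):
    """Weighted sum of fuzzy decision factors; the weights would be
    adapted online from the Bayesian situation assessment."""
    factors = np.array([
        angle_membership(candidate["aspect_deg"]),
        range_membership(candidate["range_m"]),
        candidate["energy_factor"],   # precomputed, scaled to [0, 1]
        candidate["height_factor"],   # precomputed, scaled to [0, 1]
    ])
    return float(weights @ factors)

# moving-horizon selection: score the predicted state of each candidate
# elementary maneuver and pick the best one for the current step
candidates = [
    {"aspect_deg": 20.0, "range_m": 1800.0, "energy_factor": 0.7, "height_factor": 0.6},
    {"aspect_deg": 45.0, "range_m": 1400.0, "energy_factor": 0.8, "height_factor": 0.5},
]
weights = np.array([0.4, 0.3, 0.2, 0.1])  # adapted by situation assessment
best = max(candidates, key=lambda c: decision_objective(c, weights))
```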
Fund: Supported by the Science and Technology Innovation 2030 Key Project of "New Generation Artificial Intelligence" (2018AAA0100803) and the National Natural Science Foundation of China (U20B2071, 91948204, T2121003, U1913602).
Abstract: This paper proposes an autonomous maneuver decision method using transfer learning pigeon-inspired optimization (TLPIO) for unmanned combat aerial vehicles (UCAVs) in dogfight engagements. Firstly, a nonlinear F-16 aircraft model and an automatic control system are constructed on a MATLAB/Simulink platform. Secondly, a 3-degrees-of-freedom (3-DOF) aircraft model is used as a maneuvering command generator, and an expanded elemental maneuver library is designed so that the aircraft's reachable state set can be obtained. Then, the game matrix is composed from the air combat situation evaluation function, which is calculated according to the angle and range threats. Finally, and crucially, the objective function to be optimized is designed using the game mixed strategy, and the optimal mixed strategy is obtained by TLPIO. Notably, the proposed TLPIO does not initialize the population randomly, but adopts a transfer learning method based on Kullback-Leibler (KL) divergence to initialize the population, which improves the search accuracy of the optimization algorithm. Besides, the convergence and time complexity of TLPIO are discussed, and comparison with other classical optimization algorithms highlights the advantage of TLPIO. In the air combat simulations, three initial scenarios are set: opposite, offensive, and defensive conditions. The effectiveness of the proposed autonomous maneuver decision method is verified by the simulation results.
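The KL-based initialization step could look roughly like the sketch below: from a library of populations evolved on source tasks, pick the one whose Gaussian summary is closest in KL divergence to a small sample from the target task, then seed the pigeon swarm from it. The function names and the resampling scheme are assumptions, not the paper's code.

```python
import numpy as np

def kl_divergence_gaussian(mu0, var0, mu1, var1):
    """Closed-form KL divergence between two diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0
    )

def transfer_init_population(source_pops, target_sample, pop_size, bounds):
    """Seed the swarm from the source population closest in KL to the
    target task, instead of initializing uniformly at random."""
    mu_t = target_sample.mean(axis=0)
    var_t = target_sample.var(axis=0) + 1e-8
    best = min(
        source_pops,
        key=lambda p: kl_divergence_gaussian(
            p.mean(axis=0), p.var(axis=0) + 1e-8, mu_t, var_t
        ),
    )
    # resample around the selected source population, clipped to bounds
    idx = np.random.randint(len(best), size=pop_size)
    pop = best[idx] + 0.05 * np.random.randn(pop_size, best.shape[1])
    return np.clip(pop, bounds[0], bounds[1])
```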
Fund: Supported by the National Natural Science Foundation of China (Grant Nos. 61933010 and 61903301) and the Shaanxi Aerospace Flight Vehicle Design Key Laboratory.
Abstract: Cooperative autonomous air combat of multiple unmanned aerial vehicles (UAVs) is one of the main combat modes in future air warfare, and it becomes even more complicated with a highly changeable situation and uncertain information about the opponents. As such, this paper presents a cooperative decision-making method based on an incomplete-information dynamic game to generate maneuver strategies for multiple UAVs in air combat. Firstly, a cooperative situation assessment model is presented to measure the overall combat situation. Secondly, an incomplete-information dynamic game model is proposed to model the dynamic process of air combat, and a dynamic Bayesian network is designed to infer the tactical intention of the opponent. Then a reinforcement learning framework based on multi-agent deep deterministic policy gradient (MADDPG) is established to obtain the perfect Bayesian Nash equilibrium solution of the air combat game model. Finally, a series of simulations is conducted to verify the effectiveness of the proposed method, and the results show effective synergies and cooperative tactics.
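For intuition, the opponent-intention inference reduces, in its simplest form, to discrete Bayesian filtering over intention states; the sketch below uses assumed transition and emission tables purely for illustration.

```python
import numpy as np

INTENTIONS = ["attack", "defend", "retreat"]

# Assumed table: TRANSITION[i, j] = P(intention_t = j | intention_{t-1} = i)
TRANSITION = np.array([[0.80, 0.15, 0.05],
                       [0.20, 0.70, 0.10],
                       [0.10, 0.20, 0.70]])
# Assumed table: EMISSION[i, k] = P(observed maneuver class k | intention i)
# maneuver classes (columns): climb, turn, dive
EMISSION = np.array([[0.50, 0.40, 0.10],
                     [0.20, 0.50, 0.30],
                     [0.10, 0.30, 0.60]])

def dbn_step(belief, obs_idx):
    """One prediction + update step of the intention filter."""
    predicted = TRANSITION.T @ belief             # propagate belief forward
    posterior = predicted * EMISSION[:, obs_idx]  # weight by the observation
    return posterior / posterior.sum()

belief = np.full(3, 1.0 / 3.0)   # uniform initial belief over intentions
for obs in [1, 1, 0]:            # e.g. observed: turn, turn, climb
    belief = dbn_step(belief, obs)
print(dict(zip(INTENTIONS, belief.round(3))))
```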
Abstract: Reinforcement Learning (RL) algorithms enhance the intelligence of air combat Autonomous Maneuver Decision (AMD) policies, but they may underperform in target combat environments with disturbances. To enhance the robustness of the AMD strategy learned by RL, this study proposes a Tube-based Robust RL (TRRL) method. First, this study introduces a tube to describe the reachable trajectories under disturbances, formulates a method for calculating tubes based on sum-of-squares programming, and proposes the TRRL algorithm, which enhances robustness by using tube size as a quantitative indicator. Second, this study introduces offline techniques for regressing the tube-size function and establishing a tube library before policy learning, aiming to eliminate complex online tube solving and reduce the computational burden during training. Furthermore, an analysis of the tube library demonstrates that the learned AMD strategy achieves greater robustness, as smaller tube sizes correspond to more cautious actions. This finding highlights that TRRL enhances robustness by promoting a conservative policy. To effectively balance aggressiveness and robustness, the proposed TRRL algorithm introduces a "laziness factor" as the weight of robustness. Finally, combat simulations in an environment with disturbances confirm that the AMD policy learned by the TRRL algorithm exhibits superior air combat performance compared to selected robust RL baselines.
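One way to read the "laziness factor" is as a penalty weight on tube size in the training reward; the sketch below illustrates that trade-off with an assumed regressed tube-size model. Both the feature form and the shaping rule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def tube_size(state, action, coeffs):
    """Stand-in for the offline-regressed tube-size function: a simple
    quadratic feature model (assumed form, fitted before policy learning)."""
    features = np.concatenate([state, action, state ** 2, action ** 2, [1.0]])
    return float(coeffs @ features)

def trrl_reward(base_reward, state, action, coeffs, laziness=0.5):
    """Shaped reward: the laziness factor weights robustness (small
    disturbance tubes) against aggressiveness (raw combat reward)."""
    return base_reward - laziness * tube_size(state, action, coeffs)

# example: 6-D state, 3-D action -> 6 + 3 + 6 + 3 + 1 = 19 coefficients
coeffs = np.abs(np.random.randn(19)) * 0.01
r = trrl_reward(1.0, np.zeros(6), np.array([0.2, -0.1, 0.5]), coeffs)
```

A larger laziness factor pushes the learned policy toward actions with smaller tubes, i.e. the more conservative behavior the tube-library analysis describes.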
Abstract: A multi-stage influence diagram is used to model the pilot's sequential decision making in one-on-one air combat. The model graphically describes the elements of the decision process, contains a point-mass model for the dynamics of the aircraft, and takes into account the decision maker's preferences under uncertain conditions. Considering an active opponent, the opponent's maneuvers are modeled stochastically. The multi-stage influence diagram is solved by converting it into a two-level optimization problem. The simulation results show that the model is effective.
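The point-mass dynamics referred to above are typically the standard 3-DOF equations of motion; a minimal Euler-integration sketch follows, with the control parameterization (load factor, bank angle, longitudinal acceleration) assumed rather than taken from the paper.

```python
import numpy as np

def point_mass_step(state, controls, dt=0.1, g=9.81):
    """One Euler step of the standard 3-DOF point-mass aircraft model.
    state = (x, y, z, v, gamma, psi): position, speed, flight-path
    angle gamma, heading psi; controls = (n, mu, a): load factor,
    bank angle, longitudinal acceleration."""
    x, y, z, v, gamma, psi = state
    n, mu, a = controls
    dx = v * np.cos(gamma) * np.cos(psi)
    dy = v * np.cos(gamma) * np.sin(psi)
    dz = v * np.sin(gamma)
    dv = a
    dgamma = (g / v) * (n * np.cos(mu) - np.cos(gamma))
    dpsi = g * n * np.sin(mu) / (v * np.cos(gamma))
    return state + dt * np.array([dx, dy, dz, dv, dgamma, dpsi])

# e.g. a 3g, 60-degree-bank level turn entry at 200 m/s and 1000 m altitude
state = np.array([0.0, 0.0, 1000.0, 200.0, 0.0, 0.0])
state = point_mass_step(state, (3.0, np.radians(60.0), 0.0))
```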