Funding: supported by the National Natural Science Foundation of China (61573017) and the Doctoral Innovation Fund of Air Force Engineering University (KGD08101604).
Abstract: Manned combat aerial vehicles (MCAVs) and unmanned combat aerial vehicles (UCAVs) together form a cooperative engagement system to carry out operational missions, which will be a new style of air engagement in the near future. On the basis of analyzing the structure of the MCAV/UCAV cooperative engagement system, this paper divides the system into three hierarchical levels: the mission level, the task-cluster level, and the task level. To solve the formation and adjustment problems of the latter two levels, three corresponding mathematical models are established. To solve these models, three algorithms are proposed: a quantum artificial bee colony (QABC) algorithm, a greedy strategy (GS), and a two-stage greedy strategy (TSGS). Finally, a series of simulation experiments is designed to verify the effectiveness and superiority of the proposed algorithms.
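As a rough illustration of the greedy strategy (GS) named above, the following sketch assigns tasks one at a time to the vehicle with the lowest incremental cost; the cost matrix, capacity limits, and function names are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a greedy task-assignment strategy: each task is
# assigned, in order, to the cheapest vehicle that still has capacity.
def greedy_assign(cost, capacity):
    """cost[v][t]: cost of vehicle v doing task t; capacity[v]: max tasks per vehicle."""
    n_vehicles, n_tasks = len(cost), len(cost[0])
    load = [0] * n_vehicles
    assignment = {}
    for t in range(n_tasks):
        # consider only vehicles with remaining capacity
        candidates = [v for v in range(n_vehicles) if load[v] < capacity[v]]
        best = min(candidates, key=lambda v: cost[v][t])
        assignment[t] = best
        load[best] += 1
    return assignment
```

A two-stage variant in the spirit of TSGS could first cluster tasks and then run the same greedy pass per cluster.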
Funding: supported by the National Natural Science Foundation of China (71471175, 71471174).
Abstract: Unmanned combat air vehicle (UCAV) mission planning is a fairly complicated global optimization problem. Military attack missions often employ a fleet of UCAVs equipped with weapons to attack a set of known targets. A UCAV can carry different weapons to accomplish different combat missions, and the choice of weapons affects the final combat effectiveness. This work presents a mixed-integer programming model for the simultaneous weapon configuration and route planning of UCAVs, which solves simple missions optimally using the IBM ILOG CPLEX optimizer. For medium-scale and large-scale problems, a heuristic algorithm is developed, and experiments demonstrate its performance. Moreover, suggestions are given on how to select the most appropriate algorithm for problems of different scales.
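The paper solves small instances exactly with CPLEX; as a toy-scale illustration of what exact search over weapon configurations means, the following brute-force sketch enumerates every weapon-to-UCAV mapping and keeps the best. The effectiveness matrix and names are invented for illustration and routing is omitted.

```python
from itertools import product

# Illustrative brute-force counterpart to an exact solver: enumerate every
# assignment of weapons to UCAVs and keep the configuration with the highest
# total effectiveness. Feasible only for tiny instances, like the exact MIP.
def best_configuration(effect, n_ucavs):
    """effect[w][u]: effectiveness of weapon w when carried by UCAV u."""
    n_weapons = len(effect)
    best_cfg, best_val = None, float("-inf")
    for cfg in product(range(n_ucavs), repeat=n_weapons):
        val = sum(effect[w][cfg[w]] for w in range(n_weapons))
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val
```

The exponential enumeration (`n_ucavs ** n_weapons` configurations) is exactly why a heuristic is needed at medium and large scale.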
Funding: supported by the National Natural Science Foundation of China (61601505), the Aeronautical Science Foundation of China (20155196022), and the Shaanxi Natural Science Foundation of China (2016JQ6050).
Abstract: This paper presents a combined strategy to solve the online trajectory optimization problem for an unmanned combat aerial vehicle (UCAV). First, because directly optimizing the trajectory is time-consuming, an online functional representation of the trajectory is proposed. Considering the practical requirements of online planning, a fourth-order polynomial is used to represent the trajectory; given the terminal conditions, it is determined by two independent parameters, so the online trajectory optimization problem reduces to optimizing these two parameters, which greatly lowers the complexity of the problem. Furthermore, the ranges of the two parameters are narrowed using the golden-section method. Second, a multi-population rotation strategy differential evolution approach (MPRDE) is designed to optimize the two parameters: the 'current-to-best/1/bin', 'current-to-rand/1/bin', and 'rand/2/bin' strategies with fixed parameter settings are used in rotation by three subpopulations. Third, the rolling optimization method is applied to model the online optimization process. Finally, simulation results demonstrate the efficiency and real-time computation capability of the combined strategy for online UCAV trajectory optimization in dynamic and complicated environments.
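The golden-section method mentioned above for narrowing a parameter range can be sketched as the standard one-dimensional search below; this is the textbook routine, not the paper's exact implementation, and the objective is illustrative.

```python
import math

# Standard golden-section search for the minimum of a unimodal 1-D function:
# the bracketing interval shrinks by the factor 1/phi (~0.618) per iteration,
# reusing one interior evaluation point each time.
def golden_section_min(f, lo, hi, tol=1e-6):
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Applied once per trajectory parameter, this narrows each scope to a small range before the evolutionary search takes over.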
Funding: the National Key R&D Program of China (Grant No. 2021YFA1000402) and the National Natural Science Foundation of China (Grant No. 72071159) provided funds for conducting the experiments.
Abstract: In air combat, the confrontation position is the critical factor determining the confrontation situation, attack effect, and escape probability of UAVs. Selecting the optimal confrontation position is therefore the primary goal of maneuver decision-making. Taking position as the UAV's maneuver strategy, this paper constructs the optimal confrontation position selecting games (OCPSGs) model. In the OCPSGs model, the payoff function of each UAV is defined by the difference between the comprehensive advantages of the two sides, and the strategy space of each UAV at every step is defined by the space accessible under its maneuverability. We then design the limit approximation of mixed strategy Nash equilibrium (LAMSNQ) algorithm, which determines the optimal probability distribution over positions in the strategy space. In the simulation phase, we assume the motions in the three directions are independent and the strategy space is a cuboid to simplify the model. Several simulations verify the feasibility, effectiveness, and stability of the algorithm.
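The abstract does not specify the LAMSNQ procedure; as a classical baseline for approximating a mixed-strategy Nash equilibrium of a two-player zero-sum matrix game, fictitious play can be sketched as follows (the payoff matrix and iteration count are illustrative).

```python
# Fictitious play on a two-player zero-sum matrix game: each player repeatedly
# best-responds to the opponent's empirical action frequencies; in zero-sum
# games those frequencies converge to a mixed-strategy Nash equilibrium.
def fictitious_play(payoff, iters=20000):
    """payoff[i][j]: row player's payoff; returns the empirical mixed strategies."""
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] = col_counts[0] = 1  # arbitrary first moves
    for _ in range(iters):
        # expected payoff of each pure action against the opponent's history
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(n)) for i in range(m)]
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(m)) for j in range(n)]
        row_counts[row_vals.index(max(row_vals))] += 1  # row maximizes
        col_counts[col_vals.index(min(col_vals))] += 1  # column minimizes
    total_r, total_c = sum(row_counts), sum(col_counts)
    return [c / total_r for c in row_counts], [c / total_c for c in col_counts]
```

On matching pennies the empirical frequencies drift toward the (0.5, 0.5) equilibrium, mirroring the "optimal probability distribution over positions" the paper seeks on its position grid.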
Abstract: To address path planning for an unmanned combat air vehicle (UCAV) on a battlefield containing threat areas, an adaptive UCAV path-planning method based on a grouped teaching-learning-based optimization algorithm is proposed. By analyzing UCAV path evaluation criteria, an adaptive path evaluation model is proposed that plans mission paths with short distance and low threat according to the combat environment. To overcome the low optimization accuracy and long runtime of the standard teaching-learning algorithm, a grouped teaching-learning algorithm is proposed, which introduces dynamic grouping and a Gaussian-perturbation strategy to improve optimization performance. Simulation experiments show that the optimal paths obtained by this scheme are shorter and safer.
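The grouped algorithm above extends teaching-learning-based optimization (TLBO). The teacher phase of plain TLBO, the part the grouping modifies, can be sketched as follows; the sphere objective and population size are illustrative, and the dynamic-grouping and Gaussian-perturbation extensions are omitted.

```python
import random

# Teacher phase of plain TLBO: every learner moves toward the best learner
# (the "teacher") and away from a multiple of the class mean, and the move is
# kept only if it improves the objective (greedy acceptance).
def tlbo_teacher_phase(pop, f):
    dim = len(pop[0])
    teacher = min(pop, key=f)                           # best individual
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for x in pop:
        tf = random.choice([1, 2])                      # teaching factor
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        new_pop.append(cand if f(cand) < f(x) else x)   # greedy acceptance
    return new_pop

def sphere(x):
    """Illustrative objective; the paper would use its path-evaluation model."""
    return sum(v * v for v in x)
```

Greedy acceptance guarantees the best objective value never worsens between phases, which the test below checks.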
Funding: supported by the National Natural Science Foundation of China (61425008, 61333004, 61273054), the Top-Notch Young Talents Program of China, and the Aeronautical Foundation of China (2013585104).
Abstract: Dynamic game theory has received considerable attention as a promising technique for formulating control actions for agents in an extended complex enterprise that involves an adversary. At each decision-making step, each side seeks the best scheme to maximize its own objective function. In this paper, a game-theoretic approach based on predator-prey particle swarm optimization (PP-PSO) is presented, and the dynamic task assignment problem for multiple unmanned combat aerial vehicles (UCAVs) in a military operation is decomposed and modeled as a two-player game at each decision stage. The optimal assignment scheme of each stage is regarded as a mixed Nash equilibrium, which can be solved using PP-PSO. The effectiveness of the proposed methodology is verified by a typical example of an air military operation involving two opposing forces: the attacking force Red and the defending force Blue.
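PP-PSO builds on plain particle swarm optimization; the baseline PSO loop it extends can be sketched as follows, with conventional textbook constants and a toy objective rather than the paper's settings or its predator-prey mechanism.

```python
import random

# Plain PSO: each particle tracks its personal best and the swarm's global
# best, and its velocity blends inertia with attraction toward both.
def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0):
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (textbook values)
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest
```

The predator-prey variant would add predator particles that repel the swarm from crowded regions to preserve diversity; that mechanism is not reproduced here.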
Funding: supported by the National Natural Science Foundation of China (No. 61573286), the Aeronautical Science Foundation of China (No. 20180753006), the Fundamental Research Funds for the Central Universities (3102019ZDHKY07), the Natural Science Foundation of Shaanxi Province (2020JQ-218), and the Shaanxi Province Key Laboratory of Flight Control and Simulation Technology.
Abstract: Recent advances in on-board radar and missile capabilities, combined with individual payload limitations, have led to increased interest in the use of unmanned combat aerial vehicles (UCAVs) for cooperative occupation during beyond-visual-range (BVR) air combat. However, prior research on occupation decision-making in BVR air combat has mostly been limited to one-on-one scenarios. As such, this study presents a practical cooperative occupation decision-making methodology for multiple UCAVs. The weapon engagement zone (WEZ) and combat geometry were first used to develop an advantage function for the situational assessment of one-on-one engagement. An encircling advantage function was then designed to represent the cooperation of UCAVs, thereby establishing a cooperative occupation model. The corresponding objective function was derived from the one-on-one engagement advantage function and the encircling advantage function. The resulting model resembles a mixed-integer nonlinear programming (MINLP) problem, so an improved discrete particle swarm optimization (DPSO) algorithm was used to identify a solution. The occupation process was then converted into a formation switching task as part of the cooperative occupation model. A series of simulations was conducted to verify the occupation solutions in varying situations, including two-on-two engagement. The results showed that these solutions vary with the initial conditions and weighting coefficients. The occupation process based on formation switching effectively demonstrates the viability of the proposed technique, and these results could provide a theoretical framework for subsequent research on cooperative BVR air combat.
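The abstract does not give the advantage functions themselves. As a loose illustration of how an angle term and a range term might be blended into a one-on-one situational advantage, the toy function below is offered; the functional forms, the `wez_range` scale, and the weighting are assumptions, not the paper's formulation.

```python
import math

# Toy 2-D situational advantage: a nose-on geometry and a short range both
# raise the score; the weighted sum maps the situation to [0, 1].
def advantage(own_pos, own_heading, target_pos, wez_range=30.0, w_angle=0.5):
    dx, dy = target_pos[0] - own_pos[0], target_pos[1] - own_pos[1]
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    # off-boresight angle, wrapped into [0, pi]
    off_angle = abs((bearing - own_heading + math.pi) % (2 * math.pi) - math.pi)
    angle_adv = 1.0 - off_angle / math.pi   # 1 when nose-on, 0 when pointed away
    range_adv = math.exp(-dist / wez_range) # decays with distance
    return w_angle * angle_adv + (1 - w_angle) * range_adv
```

An objective in the paper's spirit would sum such terms over all UCAV-target pairs, plus an encircling term for cooperation.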
Funding: supported by the National Natural Science Foundation of China (No. 61573286), the Aeronautical Science Foundation of China (No. 20180753006), the Fundamental Research Funds for the Central Universities (3102019ZDHKY07), the Natural Science Foundation of Shaanxi Province (2019JM-163, 2020JQ-218), and the Shaanxi Province Key Laboratory of Flight Control and Simulation Technology.
Abstract: To enable unmanned combat aerial vehicles (UCAVs) to make autonomous aerial combat decisions rapidly and accurately in an uncertain environment, this paper proposes a decision-making method based on an improved deep reinforcement learning (DRL) algorithm: the multi-step double deep Q-network (MS-DDQN) algorithm. First, a six-degree-of-freedom UCAV model based on an aircraft control system is established on a simulation platform, and situation assessment functions for the UCAV and its target are established by considering their angles, altitudes, environments, missile attack performance, and UCAV performance. By controlling the flight path angle, roll angle, and flight velocity, 27 common basic actions are designed. On this basis, to overcome the shortcomings of traditional DRL in training speed and convergence speed, the improved MS-DDQN method incorporates the final return value into the preceding steps. Finally, a pre-trained learning model is used as the starting point for a second learning model to simulate the UCAV aerial combat decision-making process, which shortens the training time and improves learning efficiency. The improved DRL algorithm significantly accelerates training, estimates the target value more accurately during training, and can be applied to aerial combat decision-making.
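The "multi-step" element of MS-DDQN propagates later return information back through earlier steps. The underlying n-step return is a generic RL quantity, sketched below; this is not the paper's exact formulation, only the standard target it builds on.

```python
# n-step return: the discounted sum of the next n rewards, plus a discounted
# bootstrap value if the trajectory extends beyond those n steps. With n = 1
# this reduces to the usual one-step DDQN target.
def n_step_return(rewards, bootstrap_value, gamma, n):
    """rewards: rewards from the current step onward; bootstrap_value: estimated
    value of the state reached after n steps (ignored if the episode ends first)."""
    g = 0.0
    for k in range(min(n, len(rewards))):
        g += (gamma ** k) * rewards[k]
    if n <= len(rewards):           # trajectory long enough to bootstrap
        g += (gamma ** n) * bootstrap_value
    return g
```

Larger n lets the final engagement outcome reach earlier decisions in fewer updates, which is the mechanism behind the reported speed-up.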
Abstract: To address the autonomous decision-making problem of UAVs in modern air combat, this paper combines an attention mechanism (AM) with Soft Actor Critic (SAC), a stochastic-policy deep reinforcement learning algorithm, and proposes a maneuver decision algorithm based on AM-SAC. In a one-on-one combat setting, a three-degree-of-freedom UAV motion model and a close-range air combat model are established, and a missile attack zone model is built from the relative distance and relative bearing between the two sides. The AM is introduced into the SAC algorithm to construct a weight network, enabling dynamic adjustment of the reward weights during training, and simulation experiments are designed. Comparison with the SAC algorithm and tests in multiple environments with different initial situations verify that the AM-SAC-based maneuver decision algorithm converges faster, is more stable in its maneuvers, performs better in air combat, and is applicable to a variety of combat scenarios.