Abstract
Unmanned Aerial Vehicles (UAVs) play an increasingly important role on the modern battlefield. In this paper, considering the incomplete observation information available to an individual UAV in a complex combat environment, we propose a UAV swarm non-cooperative game model based on Multi-Agent Deep Reinforcement Learning (MADRL), in which the state space and action space are constructed to match the real features of UAV swarm air-to-air combat. The multi-agent particle environment is employed to generate a UAV combat scene with a continuous observation space. Several recently popular MADRL methods are compared extensively within the UAV swarm non-cooperative game model; the results indicate that Multi-Agent Soft Actor-Critic (MASAC) outperforms the other MADRL methods by a large margin. A UAV swarm employing MASAC learns more effective policies and achieves a much higher hit rate and win rate. Simulations under different swarm sizes and UAV physical parameters are also performed, which implies that MASAC generalizes well. Furthermore, the practicability and convergence of MASAC are assessed by investigating the loss values of the Q-value networks of individual UAVs; the results demonstrate that MASAC is of good practicability and that the Nash equilibrium of the UAV swarm non-cooperative game under incomplete information can be reached.
Funding
Supported by the National Key R&D Program of China (No. 2018AAA0100804),
the National Natural Science Foundation of China (No. 62173237),
the Academic Research Projects of Beijing Union University, China (Nos. SK160202103, ZK50201911, ZK30202107, and ZK30202108),
the Song Shan Laboratory Foundation, China (No. YYJC062022017),
the Applied Basic Research Programs of Liaoning Province, China (Nos. 2022020502-JH2/1013 and 2022JH2/101300150),
the Special Funds Program of Civil Aircraft, China (No. 01020220627066),
and the Special Funds Program of Shenyang Science and Technology, China (No. 22-322-3-34).