Abstract
To further improve the convergence speed of value-function-based reinforcement learning algorithms for intelligent jamming decision-making, and to enhance the effectiveness of battlefield decisions, an improved Q-learning algorithm for intelligent communication jamming decision-making is designed that incorporates the efficient-variance upper confidence bound idea. Within the Q-learning framework, the algorithm uses the value variance of effective jamming actions to construct confidence intervals and eliminates low-confidence jamming actions from the action space. This reduces the jammer's unnecessary exploration cost in an unknown environment, speeds up its search of the jamming action space, and synchronously updates the values of all jamming actions, thereby accelerating the learning of the optimal jamming strategy. The jamming decision-making scenario is modeled as a Markov decision process for simulation. Experimental results show that when the communicator changes communication channels using a jamming-avoidance strategy unknown to the jammer, the proposed algorithm, given no prior information about the communicator, converges faster, achieves a higher jamming success rate, and obtains a greater total jamming reward than existing reinforcement-learning-based jamming decision algorithms. In addition, the algorithm applies to "many-to-many" cooperative countermeasure environments, where the action-elimination method reduces the dimensionality of the joint jamming action space; under the same experimental conditions, its jamming success rate exceeds that of the traditional Q-learning decision algorithm by more than 50%.
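The variance-based action-elimination idea described above can be illustrated with a small sketch. This is a single-state (bandit-style) simplification, not the paper's full Q-learning algorithm: the channel count, the communicator's channel-selection rule, the reward model, and the confidence-bound constants below are all illustrative assumptions.

```python
import math
import random

random.seed(0)

K = 5        # number of channels: an illustrative assumption
T = 20000    # number of interaction steps

def comm_channel():
    """Toy communicator (assumption): favours channel 3, sometimes hops."""
    return 3 if random.random() < 0.7 else random.randrange(K)

counts = [0] * K        # times each jamming action was tried
means = [0.0] * K       # running mean reward per action
m2 = [0.0] * K          # Welford running sum of squared deviations
active = set(range(K))  # jamming actions not yet eliminated

def bounds(a, t):
    """Variance-aware upper/lower confidence bounds (UCB-V-style form;
    the constants are illustrative, not the paper's)."""
    n = counts[a]
    if n == 0:
        return float("inf"), float("-inf")
    var = m2[a] / n
    pad = math.sqrt(2 * var * math.log(t + 1) / n) + math.log(t + 1) / n
    return means[a] + pad, means[a] - pad

for t in range(T):
    a = max(active, key=lambda x: bounds(x, t)[0])  # optimistic selection
    r = 1.0 if a == comm_channel() else 0.0         # jamming success reward
    counts[a] += 1
    delta = r - means[a]
    means[a] += delta / counts[a]
    m2[a] += delta * (r - means[a])
    # Eliminate any action whose upper bound falls below the best lower bound.
    best_lcb = max(bounds(x, t)[1] for x in active)
    active = {x for x in active if bounds(x, t)[0] >= best_lcb}

print(sorted(active))  # the high-value jamming channel should survive
```

Eliminated actions are never selected again, so exploration concentrates on the surviving candidates; this is the mechanism the abstract credits for the reduced exploration cost and, in the multi-jammer setting, for shrinking the joint action space.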
Authors
RAO Ning, XU Hua, SONG Bailin (Information and Navigation College, Air Force Engineering University, Xi'an 710077, China)
Source
Journal of Harbin Institute of Technology (《哈尔滨工业大学学报》)
Indexed in: EI, CAS, CSCD, PKU Core (北大核心)
2022, No. 5, pp. 162-170 (9 pages)
Keywords
jamming decision-making
reinforcement learning
efficient-variance upper confidence bound
Q-learning
jamming action elimination
Markov decision process