
Q-learning intelligent jamming decision algorithm based on efficient upper confidence bound variance

Cited by: 2
Abstract: To further improve the convergence speed of value-function-based reinforcement learning jamming decision algorithms and enhance the effectiveness of battlefield decision-making, an improved Q-learning intelligent communication jamming decision algorithm integrating the idea of the efficient upper confidence bound variance was designed. Built on the Q-learning framework, the algorithm uses the value variance of effective jamming actions to construct confidence intervals, eliminates low-confidence jamming actions from the jamming action space, reduces the jammer's unnecessary exploration cost in an unknown environment, speeds up its search of the jamming action space, and synchronously updates the values of all jamming actions, thereby accelerating the learning of the optimal jamming strategy. The jamming decision scenario was modeled as a Markov decision process for simulation. The experimental results show that when the communicating party changes its communication channel using an interference-avoidance strategy unknown to the jammer, the proposed algorithm, given no prior information about the communicating party, converges faster, achieves a higher jamming success rate, and obtains a greater total jamming reward than existing reinforcement-learning-based jamming decision algorithms. In addition, the algorithm applies to "many-to-many" cooperative countermeasure environments, where the action-elimination method reduces the dimensionality of the joint jamming action space; under the same experimental conditions, its jamming success rate is more than 50% higher than that of the traditional Q-learning decision algorithm.
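The elimination mechanism summarized in the abstract — tracking the empirical mean and variance of each action's reward, building a variance-aware confidence interval, and discarding actions whose upper bound falls below the best action's lower bound — can be sketched in outline. This is a minimal single-state toy illustration under assumed parameter choices; the class name, the bonus formula, and the reward model are all assumptions for exposition, not the paper's exact formulation.

```python
import random
import math
from collections import defaultdict

class VarianceElimQLearner:
    """Q-learning with a variance-based confidence-bound action-elimination
    step, loosely in the spirit of the EUCBV idea (illustrative sketch)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_actions)     # Q(s, a)
        self.count = defaultdict(lambda: [0] * n_actions)   # pull counts
        self.mean = defaultdict(lambda: [0.0] * n_actions)  # running reward mean
        self.m2 = defaultdict(lambda: [0.0] * n_actions)    # Welford accumulator
        self.active = defaultdict(lambda: set(range(n_actions)))  # surviving actions

    def _bound(self, s, a, total):
        """Variance-aware confidence radius (UCB-V-style bonus)."""
        n = self.count[s][a]
        if n == 0:
            return float("inf")
        var = self.m2[s][a] / n
        log_t = math.log(max(total, 2))
        return math.sqrt(2 * var * log_t / n) + 3 * log_t / n

    def select(self, s):
        acts = list(self.active[s])
        if random.random() < self.epsilon:
            return random.choice(acts)            # explore surviving actions only
        return max(acts, key=lambda a: self.q[s][a])

    def update(self, s, a, r, s_next):
        # Standard Q-learning backup, restricted to the surviving action set.
        best_next = max(self.q[s_next][b] for b in self.active[s_next])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])
        # Welford's online update of the per-action reward mean and variance.
        self.count[s][a] += 1
        n = self.count[s][a]
        d = r - self.mean[s][a]
        self.mean[s][a] += d / n
        self.m2[s][a] += d * (r - self.mean[s][a])
        # Eliminate actions whose upper bound is below the best lower bound.
        total = sum(self.count[s])
        if len(self.active[s]) > 1 and all(self.count[s][b] > 0 for b in self.active[s]):
            best_lower = max(self.mean[s][b] - self._bound(s, b, total)
                             for b in self.active[s])
            self.active[s] = {b for b in self.active[s]
                              if self.mean[s][b] + self._bound(s, b, total) >= best_lower}

# Toy usage: a single-state jamming problem over 5 channels, where jamming
# channel 2 always succeeds and other channels rarely yield partial reward.
random.seed(0)
agent = VarianceElimQLearner(n_actions=5)
for t in range(2000):
    a = agent.select("s0")
    reward = 1.0 if a == 2 else (0.2 if random.random() < 0.3 else 0.0)
    agent.update("s0", a, reward, "s0")
```

As the confidence radii shrink with visit counts, consistently poor channels drop out of `active`, so both exploration and the `max` in the Q-backup run over a progressively smaller action set — the mechanism the abstract credits for the faster convergence and the dimensionality reduction in the joint-action ("many-to-many") case.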
Authors: RAO Ning (饶宁), XU Hua (许华), SONG Bailin (宋佰霖) — Information and Navigation College, Air Force Engineering University, Xi'an 710077, China
Source: Journal of Harbin Institute of Technology (《哈尔滨工业大学学报》), 2022, No. 5, pp. 162-170 (9 pages). Indexed in EI, CAS, CSCD, and the Peking University Core list.
Keywords: jamming decision-making; reinforcement learning; efficient upper confidence bound variance; Q-learning; jamming action elimination; Markov decision process

