Abstract
Aiming at the problem of endo-atmospheric interception of high-speed maneuvering targets, a deep reinforcement learning guidance law is proposed based on the twin delayed deep deterministic policy gradient (TD3) algorithm. It directly maps the engagement state information to the commanded acceleration of the interceptor and is therefore an end-to-end, model-free guidance strategy. First, the engagement kinematics of the attacker and the defender are formulated as a Markov decision process suitable for deep reinforcement learning algorithms. A complete deep reinforcement learning guidance algorithm is then constructed by carefully designing the training engagement scenarios, the action space, the state space, and the network structure, and by introducing reward shaping and random state initialization. Simulation results show that, compared with proportional navigation and augmented proportional navigation, the deep reinforcement learning guidance strategy achieves smaller miss distances while relaxing the accuracy requirement on midcourse guidance. It also exhibits good robustness and generalization ability, and its computational burden is small enough for it to run on a missile-borne computer.
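To make the abstract's construction concrete, the sketch below illustrates the two pieces it describes: the planar engagement kinematics cast as a Markov decision process, and a TD3-style deterministic actor that maps the observed engagement state directly to a commanded acceleration. The state vector, normalization constants, initialization ranges, and shaped reward are assumptions made for illustration only and are not taken from the paper.

```python
# Minimal sketch, assuming a 2-D engagement and an illustrative state/reward design.
import numpy as np
import torch
import torch.nn as nn

class PlanarEngagementMDP:
    """2-D pursuit kinematics integrated with a fixed-step Euler scheme."""
    def __init__(self, dt=0.01, a_max=400.0):
        self.dt, self.a_max = dt, a_max

    def reset(self):
        # Randomized initial geometry, mirroring the state-randomized training
        # the abstract mentions; the numeric ranges are placeholders.
        self.r  = np.random.uniform(8e3, 12e3)              # relative range [m]
        self.q  = np.random.uniform(-0.3, 0.3)              # line-of-sight angle [rad]
        self.gm = self.q + np.random.uniform(-0.2, 0.2)     # missile flight-path angle [rad]
        self.gt = self.q + np.pi                            # target heading (head-on)
        self.vm, self.vt = 1500.0, 1000.0                   # speeds [m/s]
        return self._obs()

    def _rates(self, a_m, a_t):
        r_dot = self.vt * np.cos(self.gt - self.q) - self.vm * np.cos(self.gm - self.q)
        q_dot = (self.vt * np.sin(self.gt - self.q) - self.vm * np.sin(self.gm - self.q)) / self.r
        return r_dot, q_dot, a_m / self.vm, a_t / self.vt

    def _obs(self):
        # Normalized observation: range, range rate, LOS angle, LOS rate (illustrative scales).
        r_dot, q_dot, _, _ = self._rates(0.0, 0.0)
        return np.array([self.r / 1e4, r_dot / 2e3, self.q, q_dot * 10.0], dtype=np.float32)

    def step(self, a_m, a_t=0.0):
        r_dot, q_dot, gm_dot, gt_dot = self._rates(a_m, a_t)
        self.r  = max(self.r + r_dot * self.dt, 1e-3)       # keep range positive
        self.q  += q_dot  * self.dt
        self.gm += gm_dot * self.dt
        self.gt += gt_dot * self.dt
        done = self.r < 1.0 or r_dot > 0.0                  # closest approach passed
        # Shaped reward: penalize LOS rate each step, bonus for a small terminal range.
        reward = -abs(q_dot) + (10.0 if done and self.r < 5.0 else 0.0)
        return self._obs(), reward, done

class Actor(nn.Module):
    """TD3-style deterministic policy: engagement state -> commanded acceleration."""
    def __init__(self, obs_dim=4, a_max=400.0):
        super().__init__()
        self.a_max = a_max
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Tanh(),                   # bounded output in [-1, 1]
        )

    def forward(self, obs):
        return self.a_max * self.net(obs)                   # scale to physical limits

# Usage: roll an (untrained) policy through one randomized engagement.
env, actor = PlanarEngagementMDP(), Actor()
obs, done = env.reset(), False
while not done:
    with torch.no_grad():
        a_cmd = actor(torch.from_numpy(obs).unsqueeze(0)).item()
    obs, reward, done = env.step(a_cmd)
print("terminal range [m]:", env.r)
```

A full TD3 training loop (twin critics, delayed policy updates, target policy smoothing, replay buffer) would be layered on top of this environment/actor pair; only the end-to-end state-to-acceleration mapping is shown here.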
Authors
QIU Xiaoqi (邱潇颀), GAO Changsheng (高长生), JING Wuxing (荆武兴)
Department of Aerospace Engineering, Harbin Institute of Technology, Harbin 150001, China
Source
Journal of Astronautics (宇航学报)
Indexed in EI, CAS, CSCD, and the Peking University Core Journal list (北大核心)
2022, No. 5, pp. 685-695 (11 pages)
Funding
National Natural Science Foundation of China (12072090).
Keywords
Missile guidance
Endo-atmospheric interception
Maneuvering target
Deep reinforcement learning
Markov decision process