Journal Articles
3 articles found
1. Autonomous Air Combat Decision-Making for UAVs Based on AM-SAC (Cited: 2)
Authors: 李曾琳, 李波, 白双霞, 孟波波. 《兵工学报》 (Acta Armamentarii), 2023, No. 9, pp. 2849-2858 (10 pages). Indexed in EI, CAS, CSCD, Peking University Core.
Abstract: To address the autonomous decision-making problem of UAVs in modern air combat, this paper combines the attention mechanism (AM) with Soft Actor Critic (SAC), a stochastic-policy deep reinforcement learning algorithm, and proposes a maneuver decision algorithm based on AM-SAC. Under a 1v1 engagement setting, a 3-degree-of-freedom UAV motion model and a close-range air combat model are established, and a missile attack zone model is built from the relative distance and relative azimuth between the two sides. The attention mechanism is introduced into the SAC algorithm to construct a weight network, enabling dynamic adjustment of reward weights during training, and simulation experiments are designed accordingly. Comparison with the baseline SAC algorithm and tests under multiple different initial engagement situations verify that the AM-SAC based maneuver decision algorithm converges faster, maneuvers more stably, performs better in air combat, and is applicable to a variety of combat scenarios.
Keywords: UAV; air combat decision algorithm; Soft Actor Critic; attention mechanism
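The abstract does not spell out how the AM weight network is built, but the core idea, an attention-style network that maps the current situation state to softmax weights over reward components so the composite reward can be re-weighted as training proceeds, can be sketched. The following is a minimal illustration under stated assumptions, not the authors' code: the component split (distance, angle, and velocity rewards) and the layer sizes are guesses.

```python
# Illustrative sketch of an AM-SAC style reward-weight network (assumed
# structure, not the paper's implementation). The network attends over
# reward components given the air-combat situation state.
import torch
import torch.nn as nn

class RewardWeightNet(nn.Module):
    def __init__(self, state_dim: int, n_components: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_components),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Softmax produces attention weights that sum to 1 over components.
        return torch.softmax(self.net(state), dim=-1)

def composite_reward(weights: torch.Tensor, components: torch.Tensor) -> torch.Tensor:
    # Weighted sum of shaped reward terms, e.g. [r_distance, r_angle, r_velocity].
    return (weights * components).sum(dim=-1)
```

In an AM-SAC style training loop, the weights would be recomputed from the situation state at every step and the resulting scalar reward fed into an otherwise standard SAC update.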
2. An Autonomous Air Combat Decision Algorithm for UAVs Based on the SAC Algorithm (Cited: 3)
Authors: 李波, 白双霞, 孟波波, 梁诗阳, 李曾琳. 《指挥控制与仿真》 (Command Control & Simulation), 2022, No. 5, pp. 24-30 (7 pages).
Abstract: To address the autonomous decision-making problem of UAVs in air combat, a close-range UAV air combat model is proposed under a 1v1 offense-defense setting. UAV autonomous maneuvering is modeled as a Markov decision process, and an autonomous air combat decision algorithm based on Soft Actor Critic (SAC) is proposed, which takes air combat situation data as input and outputs maneuver commands; by executing the commanded maneuvers, the UAV locks onto the enemy UAV first and attacks preemptively. Finally, simulation experiments comparing against the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm verify that the SAC-based air combat decision algorithm, with its enhanced policy exploration, learns substantially faster, enables the UAV to seize the advantage from any initial situation and successfully strike the target, effectively improving UAV autonomy in air combat decision-making.
Keywords: UAV; air combat decision algorithm; Soft Actor Critic; Markov decision process
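For readers comparing the two algorithms the abstract mentions: TD3 learns a deterministic policy, while SAC samples actions from a squashed Gaussian whose entropy is part of the objective, which is the enhanced policy exploration credited above. A minimal sketch of a standard SAC actor follows; the observation and action dimensions standing in for situation data and maneuver commands are placeholders, not taken from the paper.

```python
# Standard SAC squashed-Gaussian actor (generic sketch, not the paper's code):
# situation features in, continuous maneuver command in [-1, 1] out.
import torch
import torch.nn as nn

class SACActor(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-20, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        u = dist.rsample()              # reparameterised sample keeps gradients
        action = torch.tanh(u)          # squash to the bounded command range
        # Log-probability with the tanh change-of-variables correction,
        # used by SAC's entropy-regularised objective.
        logp = (dist.log_prob(u) - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
        return action, logp
```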
3. Development of a Soft Actor Critic deep reinforcement learning approach for harnessing energy flexibility in a large office building (Cited: 1)
Authors: Anjukan Kathirgamanathan, Eleni Mangina, Donal P. Finn. Energy and AI, 2021, No. 3, pp. 228-241 (14 pages).
Abstract: This research is concerned with the novel application and investigation of 'Soft Actor Critic' based deep reinforcement learning to control the cooling setpoint (and hence cooling loads) of a large commercial building to harness energy flexibility. The research is motivated by the challenge associated with the development and application of conventional model-based control approaches at scale to the wider building stock. Soft Actor Critic is a model-free deep reinforcement learning technique that is able to handle continuous action spaces and which has seen limited application to real-life or high-fidelity simulation implementations in the context of automated and intelligent control of building energy systems. Such control techniques are seen as one possible solution to supporting the operation of a smart, sustainable and future electrical grid. This research tests the suitability of the technique through training and deployment of the agent on an EnergyPlus based environment of the office building. The agent was found to learn an optimal control policy that was able to minimise energy costs by 9.7% compared to the default rule-based control scheme and was able to improve or maintain thermal comfort limits over a test period of one week. The algorithm was shown to be robust to the different hyperparameters and this optimal control policy was learnt through the use of a minimal state space consisting of readily available variables. The robustness of the algorithm was tested through investigation of the speed of learning and ability to deploy to different seasons and climates. It was found that the agent requires minimal training sample points and outperforms the baseline after three months of operation and also without disruption to thermal comfort during this period. The agent is transferable to other climates and seasons although further retraining or hyperparameter tuning is recommended.
Keywords: Deep Reinforcement Learning (DRL); building energy flexibility; Soft Actor Critic (SAC); machine learning; smart grid
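The control loop the abstract describes reduces to: observe the building state, pick a cooling setpoint, and score the step by energy cost plus a comfort penalty. The sketch below assumes a hypothetical Gym-style `BuildingEnv` wrapper around the EnergyPlus simulation; the comfort band and penalty weight are illustrative values, not the paper's.

```python
# Hedged sketch of the setpoint-control loop (BuildingEnv is hypothetical;
# the paper couples the agent to an EnergyPlus model of the office building).

def step_reward(energy_cost: float, zone_temp: float,
                comfort_lo: float = 21.0, comfort_hi: float = 26.5,
                comfort_weight: float = 10.0) -> float:
    # Penalise energy cost plus any excursion outside the comfort band.
    violation = max(0.0, comfort_lo - zone_temp, zone_temp - comfort_hi)
    return -(energy_cost + comfort_weight * violation)

def run_episode(env, agent) -> float:
    obs = env.reset()
    done, total = False, 0.0
    while not done:
        setpoint = agent.act(obs)                      # continuous action
        obs, energy_cost, zone_temp, done = env.step(setpoint)
        total += step_reward(energy_cost, zone_temp)
    return total
```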