Abstract
To address the problem that Q-function overestimation in the fixed-temperature SAC (Soft Actor Critic) algorithm may trap the algorithm in a local optimum, a Stable Constrained Soft Actor Critic (SCSAC) algorithm is proposed based on an in-depth analysis. SCSAC repairs the Q-function overestimation of fixed-temperature SAC by improving the maximum-entropy objective function, and at the same time improves the stability of the algorithm during testing. Finally, SCSAC is evaluated in four OpenAI Gym MuJoCo environments. The experimental results show that, compared with the fixed-temperature SAC algorithm, SCSAC effectively reduces the number of Q-function overestimation occurrences and obtains more stable results during testing.
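For context, standard SAC already mitigates Q-function overestimation by taking the minimum of two critic estimates in the soft Bellman target. The sketch below illustrates that clipped double-Q soft target only; it is not the paper's SCSAC objective, and the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def soft_q_target(rewards, q1_next, q2_next, logp_next,
                  alpha=0.2, gamma=0.99, done=None):
    """Clipped double-Q soft Bellman target used in standard SAC.

    rewards   : immediate rewards r(s, a)
    q1_next,
    q2_next   : the two target critics' estimates at (s', a'), a' ~ pi
    logp_next : log pi(a' | s') for the sampled next action
    alpha     : fixed temperature weighting the entropy bonus
    done      : episode-termination mask (1.0 where the episode ended)
    """
    if done is None:
        done = np.zeros_like(rewards)
    # Taking the minimum of the two critics curbs overestimation bias.
    q_min = np.minimum(q1_next, q2_next)
    # Soft value: Q minus alpha * log-probability (entropy bonus).
    return rewards + gamma * (1.0 - done) * (q_min - alpha * logp_next)
```

For example, with r = 1.0, critic estimates 2.0 and 3.0, and log-probability -1.0, the target uses the smaller critic value: 1.0 + 0.99 * (2.0 - 0.2 * (-1.0)) = 3.178.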
Authors
HAI Ri; ZHANG Xingliang; JIANG Yuan; YANG Yongjian
(College of Computer Science and Technology, Jilin University, Changchun 130012, China; China Mobile Jilin Company Limited, China Mobile Communications Group Company Limited, Changchun 130022, China)
Source
Journal of Jilin University (Information Science Edition)
CAS
2024, No. 2, pp. 318-325 (8 pages)
Funding
Innovation Capability Construction Project of the Jilin Provincial Development and Reform Commission (2020C017-2)
Key Project of the Jilin Provincial Science and Technology Development Plan (20210201082GX)
Keywords
reinforcement learning
maximum entropy reinforcement learning
Q-value overestimation
soft actor critic (SAC) algorithm