Abstract
Experience replay has become an important component of deep reinforcement learning algorithms: it not only accelerates convergence but also improves agent performance. Mainstream experience replay strategies accelerate learning with methods such as uniform sampling, prioritized experience replay, and expert experience replay. To further improve the utilization of experience samples in deep reinforcement learning, this paper proposes an experience replay strategy based on mixed samples (ER-MS). The strategy mainly uses two methods: immediately learning the latest experience and reviewing successful experience. The agent learns immediately from the newest samples generated by its interaction with the environment, while an additional experience buffer stores the samples of successful episodes for replay. Experiments show that ER-MS combined with the DDPG algorithm achieves better results on OpenAI MuJoCo tasks.
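To make the mechanism concrete, the following is a minimal Python sketch of the mixed-sample idea described in the abstract: a main replay buffer plus an additional buffer that retains only transitions from successful episodes, with each mini-batch drawn from both. All names, capacities, the mixing ratio, and the success criterion here are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Sketch of an ER-MS-style replay scheme: a main buffer holding all
    transitions plus an extra buffer holding only transitions from
    successful episodes. Capacities, the mixing ratio, and the success
    criterion are illustrative assumptions, not values from the paper."""

    def __init__(self, capacity=100_000, success_capacity=20_000,
                 success_ratio=0.25):
        self.main = deque(maxlen=capacity)             # all transitions
        self.success = deque(maxlen=success_capacity)  # successful episodes only
        self.success_ratio = success_ratio             # share of batch from successes
        self._episode = []                             # current episode's transitions

    def store(self, transition):
        """Record one (s, a, r, s', done) tuple from the ongoing episode."""
        self.main.append(transition)
        self._episode.append(transition)

    def end_episode(self, succeeded):
        """At episode end, copy the whole episode into the success buffer
        if it met the (task-specific) success criterion."""
        if succeeded:
            self.success.extend(self._episode)
        self._episode = []

    def sample(self, batch_size):
        """Draw a mixed batch: mostly uniform from the main buffer, with a
        fraction taken from the success buffer once it is non-empty."""
        n_success = min(int(batch_size * self.success_ratio), len(self.success))
        batch = random.sample(self.main, batch_size - n_success)
        if n_success:
            batch += random.sample(self.success, n_success)
        return batch
```

In a DDPG-style training loop, the "immediately learn the latest experience" half of the strategy would correspond to one extra update on the newest transition (e.g. a hypothetical agent.update([transition])) before updating on a mixed batch drawn with sample().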
Authors
LAI Jian-bin; FENG Gang (School of Computer Science, South China Normal University, Guangzhou 510635, China)
Source
Computer and Modernization (《计算机与现代化》), 2023, No. 6, pp. 33-38 (6 pages)
Keywords
experience replay
deep reinforcement learning
expert experience