Reinforcement Exploration Method to Keep Away from Old Areas and Avoid Loops
Abstract: In intrinsic motivation-oriented exploratory reinforcement learning, intrinsic rewards are typically generated according to the agent's familiarity with states. A suitable approximate measure of familiarity is difficult to obtain, and this long-term cumulative measure ignores the role a state plays within its own episode. The Anchor method replaces the subgoals of hierarchical reinforcement learning with anchors and encourages the agent to explore away from the anchors. Inspired by the Anchor method, an intrinsic reward function is designed from the distance between the next state and the historical states of the same episode, and a reinforcement exploration method to keep Away from old Areas and Avoid Loops (AAAL) is proposed. A set of partial historical states in the current episode is treated as an area, and the area is periodically updated to the set of most recently visited states; the agent receives an intrinsic reward based on the minimum distance between the next state and the area, so that it stays away from the most recently visited old area. The consecutive predecessor states of the next state form a window of fixed size, and an intrinsic reward is given according to the length of the shortest loop within the window that ends at the next state, which prevents the agent from walking in loops. Experimental results in the classic reward-sparse MiniGrid environment show that the AAAL method avoids measuring familiarity with states, explores the environment with one episode as a cycle, and effectively improves the agent's exploration ability.
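The abstract describes the two intrinsic reward terms only qualitatively. The following is a minimal Python sketch of how such an episode-wise reward could be organized; the class name AAALIntrinsicReward, the parameters region_period, window_size, beta_area and beta_loop, the L1 distance over state features (e.g., agent coordinates in MiniGrid), the linear away-from-area bonus and the inverse-length loop penalty are all illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

class AAALIntrinsicReward:
    """Sketch of an episode-wise intrinsic reward in the spirit of AAAL.

    Two terms are combined:
      * an "away from old areas" bonus: the minimum distance between the
        next state and an area made of recently visited states, refreshed
        every `region_period` steps;
      * an "avoid loops" penalty: based on the shortest loop that ends at
        the next state within a window of its most recent predecessors.
    """

    def __init__(self, region_period=10, window_size=8, beta_area=0.1, beta_loop=0.1):
        self.region_period = region_period   # how often the old area is refreshed (assumed)
        self.window_size = window_size       # predecessor states inspected for loops (assumed)
        self.beta_area = beta_area           # scale of the away-from-area bonus (assumed)
        self.beta_loop = beta_loop           # scale of the loop penalty (assumed)
        self.reset()

    def reset(self):
        """Call at the start of every episode: the history is episode-local."""
        self.history = []   # all states visited in the current episode
        self.area = []      # most recently refreshed set of visited states

    @staticmethod
    def _distance(s1, s2):
        # Assumed metric: L1 distance between state feature vectors.
        return float(np.abs(np.asarray(s1) - np.asarray(s2)).sum())

    def _area_reward(self, next_state):
        if not self.area:
            return 0.0
        # Minimum distance between the next state and the old area: the farther
        # the agent moves from recently visited states, the larger the bonus.
        d_min = min(self._distance(next_state, s) for s in self.area)
        return self.beta_area * d_min

    def _loop_reward(self, next_state):
        # Look only at the last `window_size` predecessor states.
        window = self.history[-self.window_size:]
        # A loop ending at next_state exists if next_state already occurs in the
        # window; its length is the number of steps since that occurrence.
        # Shorter loops are penalised more heavily (an assumed inverse form).
        loop_lengths = [len(window) - i for i, s in enumerate(window)
                        if np.array_equal(np.asarray(s), np.asarray(next_state))]
        if not loop_lengths:
            return 0.0
        return -self.beta_loop / min(loop_lengths)

    def step(self, next_state):
        """Return the intrinsic reward for transitioning into `next_state`."""
        r_int = self._area_reward(next_state) + self._loop_reward(next_state)
        self.history.append(next_state)
        # Periodically replace the old area with the most recently visited states.
        if len(self.history) % self.region_period == 0:
            self.area = self.history[-self.region_period:]
        return r_int
```

In use, such a term would be reset at every episode boundary and added to the sparse environment reward at each step, e.g. `r_total = r_env + aaal.step(next_state)`; the combination weight is likewise an assumption.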
Authors: 蔡丽娇 (CAI Lijiao), 秦进 (QIN Jin), 陈双 (CHEN Shuang) (State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; Guizhou Door To Time Science and Technology Co., Ltd., Guiyang 550025, China)
Source: Computer Engineering (《计算机工程》), CAS, CSCD, Peking University Core Journal, 2023, No. 7, pp. 118-124, 134 (8 pages)
Funding: Guizhou Provincial Science and Technology Program projects (黔科合基础[2020]1Y275, 黔科合支撑[2020]3Y004)
Keywords: deep reinforcement learning; reward-sparse task; intrinsic reward; old area; loop