Abstract: To address the slow convergence of mobile robots exploring complex unknown environments under the conventional deep Q-network model, an improved dueling deep double Q-network method (Improved Dueling Deep Double Q-Network, IDDDQN) based on the dueling network architecture is proposed. The mobile robot uses the improved DDQN network structure to estimate the value functions of its three actions and to update the network parameters, obtaining the corresponding Q-values by training the network. The robot adopts an exploration strategy that combines the Boltzmann distribution with ε-greedy selection to choose an optimal action and reach the next observation. The data collected through learning are stored in the replay memory using an improved resampling-based prioritization mechanism, and mini-batches of these data are used to train the network. Experimental results show that, compared with the basic DDQN algorithm, a robot trained with IDDDQN adapts to unknown environments faster, the network converges more quickly, the success rate of reaching the goal point increases more than threefold, and better optimal paths can be obtained in complex unknown environments.
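The abstract does not give the exact form of the combined exploration rule, so the following is only a minimal sketch of one plausible combination: with probability 1-ε the greedy action is taken, and exploratory actions are drawn from a Boltzmann (softmax) distribution over the Q-values rather than uniformly at random. The function name, temperature parameter, and three-action setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def select_action(q_values, epsilon=0.1, temperature=1.0, rng=None):
    """Combined Boltzmann / epsilon-greedy action selection (hypothetical sketch).

    With probability (1 - epsilon) the greedy action is taken; otherwise an
    action is sampled from the Boltzmann (softmax) distribution over the
    Q-values instead of uniformly at random.
    """
    rng = rng or np.random.default_rng()
    if rng.random() > epsilon:
        return int(np.argmax(q_values))           # exploit: greedy action
    # explore: sample proportionally to exp(Q / T), numerically stabilized
    logits = (q_values - np.max(q_values)) / temperature
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return int(rng.choice(len(q_values), p=probs))

# Example: Q-values for three discrete robot actions
# (e.g., move forward, turn left, turn right)
q = np.array([0.8, 0.5, 0.1])
action = select_action(q, epsilon=0.2, temperature=0.5)
```

Compared with plain ε-greedy, sampling exploratory actions from the Boltzmann distribution biases exploration toward actions with higher estimated value, which is one way such a hybrid strategy can speed up adaptation in a large state space.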
Abstract: In this paper, a new algorithm combining the features of bi-directional evolutionary structural optimization (BESO) and reinforcement learning (RL) is proposed for continuum structural topology optimization (STO). In contrast to conventional approaches, which generate only a single quasi-optimal solution, the goal of the combined method is to provide designers with multiple quasi-optimal solutions, in the spirit of generative design. Two key components are adopted. First, in addition to sensitivity, a value function updated by Monte Carlo reinforcement learning is used to measure the importance of each element, which makes the solving process convergent and brings it closer to the optimum. Second, an ε-greedy policy adds a random perturbation to the main search direction so as to extend the search ability. Finally, the quality and diversity of the solutions are guaranteed by controlling the value of compliance as well as the Intersection-over-Union (IoU). Results on several 2D and 3D compliance-minimization problems, including a geometrically nonlinear case, show that the combined method can generate a group of good, mutually distinct solutions that satisfy various possible requirements in engineering design at acceptable computational cost.
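The abstract does not specify exactly how the ε-greedy perturbation enters the BESO update, so the following is a minimal, hypothetical sketch of one way it could work: elements are ranked by an importance score (e.g., sensitivity blended with the learned value function), the least important are greedily selected for removal, and with probability ε each greedy pick is swapped for a randomly chosen element. All names and the score blending are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_removal_candidates(importance, n_remove, epsilon=0.05, rng=None):
    """Epsilon-greedy choice of elements to remove in a BESO-style update.

    `importance` is a per-element score; low-importance elements are the
    removal candidates. With probability epsilon, a greedy pick is swapped
    for a random kept element, perturbing the main search direction so that
    repeated runs can produce diverse quasi-optimal topologies.
    """
    rng = rng or np.random.default_rng()
    order = np.argsort(importance)        # ascending: least important first
    greedy = list(order[:n_remove])       # main (greedy) search direction
    rest = list(order[n_remove:])
    for i in range(len(greedy)):
        if rng.random() < epsilon and rest:
            j = rng.integers(len(rest))   # random perturbation
            greedy[i], rest[j] = rest[j], greedy[i]
    return np.array(greedy)

# Example: 100 elements, remove the 10 judged least important,
# with a 10% chance of perturbing each greedy pick
scores = np.random.default_rng(0).random(100)
removed = select_removal_candidates(scores, n_remove=10, epsilon=0.1)
```

Keeping ε small preserves near-optimal compliance on each run while the random swaps, combined with the IoU check described above, drive the diversity of the generated solution set.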