Abstract: The asynchronous advantage actor‐critic (A3C) algorithm is a commonly used policy optimization algorithm in reinforcement learning, in which "asynchronous" refers to parallel interactive sampling and training, and "advantage" refers to a sampled multi‐step reward estimation used to weight the policy update. To address the low efficiency and insufficient convergence caused by the traditional heuristic exploration of the A3C algorithm, an improved A3C algorithm is proposed in this paper. In this algorithm, a noisy network function that updates the noise tensor in an explicit way is constructed to train the agent. Generalised advantage estimation (GAE) is also adopted to describe the advantage function. Finally, a new mean-gradient parallelisation method is designed to update the parameters of both the primary and secondary networks by summing and averaging the gradients passed from all the sub‐processes to the main process. Simulation experiments were conducted in a gym environment using the PyTorch Agent Net (PTAN) reinforcement learning library, and the results show that the method enables the agent to complete training faster and to converge more stably during training. The improved A3C algorithm performs better than the original algorithm and can provide new ideas for subsequent research on reinforcement learning algorithms.
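The abstract itself contains no code. As a rough illustration of the explicit noise-tensor idea described above, the sketch below shows a factored-Gaussian noisy linear layer in PyTorch in which the noise tensors are stored as buffers and refreshed only by an explicit `reset_noise()` call, so the training loop controls exactly when exploration noise changes. The layer name, initialisation constants and method names are assumptions for illustration, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn


class NoisyLinear(nn.Module):
    """Linear layer with learnable factored Gaussian noise on weights and biases.

    The noise tensors are non-learnable buffers that are refreshed only when
    reset_noise() is called explicitly (an assumed interface), so exploration
    noise is updated at points chosen by the training loop.
    """

    def __init__(self, in_features: int, out_features: int, sigma0: float = 0.5):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features

        # Learnable means and noise scales.
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))

        # Non-learnable noise tensors, updated explicitly.
        self.register_buffer("weight_eps", torch.zeros(out_features, in_features))
        self.register_buffer("bias_eps", torch.zeros(out_features))

        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.weight_mu, -bound, bound)
        nn.init.uniform_(self.bias_mu, -bound, bound)
        nn.init.constant_(self.weight_sigma, sigma0 * bound)
        nn.init.constant_(self.bias_sigma, sigma0 * bound)
        self.reset_noise()

    @staticmethod
    def _scaled_noise(size: int) -> torch.Tensor:
        # f(x) = sign(x) * sqrt(|x|), the usual scaling for factored noise.
        x = torch.randn(size)
        return x.sign() * x.abs().sqrt()

    def reset_noise(self) -> None:
        # Factored Gaussian noise: one vector per input and per output dimension.
        eps_in = self._scaled_noise(self.in_features)
        eps_out = self._scaled_noise(self.out_features)
        self.weight_eps.copy_(eps_out.outer(eps_in))
        self.bias_eps.copy_(eps_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight_mu + self.weight_sigma * self.weight_eps
        bias = self.bias_mu + self.bias_sigma * self.bias_eps
        return nn.functional.linear(x, weight, bias)
```

In such a scheme, a worker would typically call `reset_noise()` once per rollout or per optimisation step, so that exploration is driven by the sampled noise rather than by heuristic entropy bonuses or epsilon-greedy schedules.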
Funding: Natural Science Foundation of Zhejiang Province, Grant/Award Number: LQ15F030006; Key Research and Development Program of Zhejiang Province, Grant/Award Number: 2018C01085.
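Two further components mentioned in the abstract, generalised advantage estimation and mean-gradient parallelisation, reduce to short routines. The sketch below is a generic illustration under the standard GAE recursion and a simple "sum and average the per-parameter gradients from all worker processes" scheme; the function names, argument layout and hyper-parameter defaults are assumptions, not the paper's exact implementation.

```python
from typing import List, Sequence
import torch


def gae_advantages(rewards: Sequence[float], values: Sequence[float],
                   dones: Sequence[bool], gamma: float = 0.99,
                   lam: float = 0.95) -> torch.Tensor:
    """Generalised advantage estimation over one rollout.

    `values` must contain one extra entry: the bootstrap value of the
    state that follows the last transition.
    """
    advantages = torch.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        mask = 0.0 if dones[t] else 1.0
        delta = rewards[t] + gamma * values[t + 1] * mask - values[t]
        gae = delta + gamma * lam * mask * gae
        advantages[t] = gae
    return advantages


def apply_mean_gradients(net: torch.nn.Module,
                         worker_grads: List[List[torch.Tensor]],
                         optimizer: torch.optim.Optimizer) -> None:
    """Average per-parameter gradients collected from the sub-processes
    and apply them with a single optimiser step on the main process."""
    n = len(worker_grads)
    for param, grads in zip(net.parameters(), zip(*worker_grads)):
        param.grad = torch.stack(grads).sum(dim=0) / n
    optimizer.step()
    optimizer.zero_grad()
```

In a full training loop, each sub-process would hand its gradients to the main process after a rollout (for example through a multiprocessing queue); the main process averages them as above and then shares the updated parameters back to the workers, matching the "sum and average the gradients passed from all the sub-processes to the main process" description in the abstract.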