Funding: Joint Funds of the National Natural Science Foundation of China, Grant/Award Number: U21A20518; National Natural Science Foundation of China, Grant/Award Numbers: 62106279, 61903372.
Abstract: Policy evaluation (PE) is a critical sub-problem in reinforcement learning: it estimates the value function of a given policy and can be used for policy improvement. However, current PE methods still suffer from limitations such as low sample efficiency and local convergence, especially on complex tasks. In this study, a novel PE algorithm called Least-Squares Truncated Temporal-Difference learning (LST2D) is proposed. LST2D uses an adaptive truncation mechanism that exploits the fast convergence of Least-Squares Temporal Difference learning (LSTD) and the asymptotic convergence of Temporal Difference learning (TD). Two feature pre-training methods are then utilised to improve the approximation ability of LST2D. Furthermore, an Actor-Critic algorithm based on LST2D and pre-trained feature representations (ACLPF) is proposed, in which LST2D is integrated into the critic network to improve learning-prediction efficiency. Comprehensive simulation studies were conducted on four robotic tasks, and the results illustrate the effectiveness of LST2D. The proposed ACLPF algorithm outperformed DQN, ACER and PPO in terms of sample efficiency and stability, demonstrating that LST2D can be applied to online learning control problems by incorporating it into the actor-critic architecture.
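To make the combination of least-squares and incremental TD updates concrete, the following is a minimal sketch of the two building blocks LST2D draws on, under linear value-function approximation: a batch LSTD solve used as a fast warm start, followed by plain TD(0) refinement. The adaptive truncation mechanism of LST2D itself is not reproduced here, and all function names, the regularisation term and the synthetic data are illustrative assumptions.

```python
import numpy as np

def lstd_weights(transitions, n_features, gamma=0.99, reg=1e-3):
    """Batch least-squares TD: solve A w = b from (phi, r, phi_next) samples."""
    A = reg * np.eye(n_features)
    b = np.zeros(n_features)
    for phi, r, phi_next in transitions:
        A += np.outer(phi, phi - gamma * phi_next)
        b += r * phi
    return np.linalg.solve(A, b)

def td0_update(w, phi, r, phi_next, gamma=0.99, alpha=0.05):
    """One incremental TD(0) step on the same linear value function."""
    delta = r + gamma * phi_next @ w - phi @ w   # TD error
    return w + alpha * delta * phi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_features = 4
    # synthetic linear transitions, for demonstration only
    transitions, phi = [], rng.normal(size=n_features)
    for _ in range(500):
        phi_next = 0.9 * phi + 0.1 * rng.normal(size=n_features)
        transitions.append((phi, float(phi.sum()), phi_next))
        phi = phi_next
    # warm-start from the fast batch LSTD solution ...
    w = lstd_weights(transitions, n_features)
    # ... then keep refining online with TD(0)
    for phi, r, phi_next in transitions:
        w = td0_update(w, phi, r, phi_next)
    print("value weights:", np.round(w, 3))
```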
Abstract: Aim To find a more efficient learning method based on temporal difference learning for delayed reinforcement learning tasks. Methods A Q-learning algorithm based on truncated TD(λ), with adaptive schemes for selecting the λ value, was presented for absorbing Markov decision processes and implemented on computers. Results and Conclusion Simulations on shortest-path search problems show that using an adaptive λ in Q-learning based on TTD(λ) speeds up its convergence.
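As an illustration of the truncated TD(λ) target that this kind of Q-learning bootstraps from, here is a hedged sketch that mixes the 1- to n-step returns of a truncated window with λ-weights; the paper's adaptive λ-selection schemes are not reproduced, and the fixed lam value and helper names are assumptions.

```python
import numpy as np

def truncated_lambda_return(rewards, bootstrap_values, gamma=0.95, lam=0.7):
    """Mix the 1..n-step returns of a truncated window with lambda-weights.

    rewards           -- r_t, ..., r_{t+n-1} observed after taking a_t in s_t
    bootstrap_values  -- max_a Q(s_{t+k}, a) for k = 1..n
    """
    n = len(rewards)
    g = np.zeros(n)            # k-step returns G^(k)
    disc_sum = 0.0
    for k in range(1, n + 1):
        disc_sum += gamma ** (k - 1) * rewards[k - 1]
        g[k - 1] = disc_sum + gamma ** k * bootstrap_values[k - 1]
    # lambda-weights; the last n-step return absorbs the remaining mass
    weights = np.array([(1 - lam) * lam ** (k - 1) for k in range(1, n)]
                       + [lam ** (n - 1)])
    return float(weights @ g)

# toy usage: move one Q(s, a) entry toward the truncated lambda-return
Q = np.zeros((5, 2))
rewards = [0.0, 0.0, 1.0]                       # a 3-step truncation window
boots = [Q[1].max(), Q[2].max(), Q[3].max()]
alpha = 0.1
Q[0, 1] += alpha * (truncated_lambda_return(rewards, boots) - Q[0, 1])
print("updated Q[0, 1]:", Q[0, 1])
```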
Abstract: Key challenges for 5G and Beyond networks relate to the requirements for exceptionally low latency, high reliability, and extremely high data rates. The Ultra-Reliable Low Latency Communication (URLLC) use case is the most difficult to support: current research focuses on physical- or MAC-layer solutions, while proposals at the network layer that use Machine Learning (ML) and Artificial Intelligence (AI) algorithms running on base stations and User Equipment (UE) or Internet of Things (IoT) devices are at an early stage. In this paper, we describe the operating rationale of the most relevant recent ML algorithms and techniques, and we propose and validate ML algorithms running on both cells (base stations/gNBs) and UEs or IoT devices to handle URLLC service control. One ML algorithm runs on base stations to evaluate latency demands and offload traffic when needed, while another lightweight algorithm runs on UEs and IoT devices to rank cells by URLLC service quality in real time and indicate the best cell for a UE or IoT device to camp on. We show that the interplay of these algorithms leads to good service control and eventually optimal load allocation under slow load mobility.
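The abstract does not detail the UE-side algorithm, so the following is only a toy sketch of what a lightweight real-time cell-ranking rule could look like: each candidate cell keeps smoothed latency and delivery statistics, and the UE camps on the best-scoring one. The class, field names, weights and scoring rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class CellStats:
    ema_latency_ms: float = 10.0   # smoothed observed latency
    success_rate: float = 1.0      # fraction of packets delivered on time
    alpha: float = 0.2             # EMA smoothing factor

    def update(self, latency_ms: float, delivered: bool) -> None:
        self.ema_latency_ms += self.alpha * (latency_ms - self.ema_latency_ms)
        self.success_rate += self.alpha * (float(delivered) - self.success_rate)

def rank_cells(stats: dict) -> list:
    """Return cell ids ordered best-first: low latency, high reliability."""
    def score(cell_id):
        s = stats[cell_id]
        return s.ema_latency_ms / max(s.success_rate, 1e-3)
    return sorted(stats, key=score)

# toy usage: two candidate cells observed by one UE
cells = {"gNB-1": CellStats(), "gNB-2": CellStats()}
cells["gNB-1"].update(latency_ms=4.0, delivered=True)
cells["gNB-2"].update(latency_ms=12.0, delivered=False)
print("best cell to camp on:", rank_cells(cells)[0])
```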
Abstract: Aim To investigate model-free multi-step average-reward reinforcement learning. Methods By combining R-learning with temporal difference learning (TD(λ)) for average-reward problems, a novel incremental algorithm, called R(λ) learning, was proposed. Results and Conclusion The proposed algorithm is a natural extension of Q(λ) learning, the multi-step discounted-reward reinforcement learning algorithm, to the average-reward case. Simulation results show that R(λ) learning with intermediate λ values yields a significant performance improvement over simple R-learning.
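For orientation, here is a minimal tabular sketch of an average-reward R-learning step with eligibility traces, in the spirit of the R(λ) extension described above; the exact trace-decay and average-reward update rules used in the paper are assumptions here.

```python
import numpy as np

def r_lambda_step(Q, rho, e, s, a, r, s_next, alpha=0.1, beta=0.01, lam=0.8):
    """One average-reward TD step with accumulating eligibility traces.

    Q   -- |S| x |A| table of relative action values
    rho -- current estimate of the average reward
    e   -- eligibility-trace table, same shape as Q
    """
    delta = r - rho + Q[s_next].max() - Q[s, a]   # average-reward TD error
    e[s, a] += 1.0                                # accumulating trace
    Q += alpha * delta * e                        # propagate error along traces
    if a == int(Q[s].argmax()):                   # adjust rho on greedy actions
        rho += beta * delta
    e *= lam                                      # decay all traces
    return Q, rho, e

# toy usage on a 3-state, 2-action problem
Q, e, rho = np.zeros((3, 2)), np.zeros((3, 2)), 0.0
Q, rho, e = r_lambda_step(Q, rho, e, s=0, a=1, r=1.0, s_next=2)
print("rho:", rho, " Q[0]:", Q[0])
```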
Abstract: In this work, we combine model-based reinforcement learning (MBRL) and model-free reinforcement learning (MFRL) to stabilize a biped robot (NAO robot) on a rotating platform, where the angular velocity of the platform is unknown to the learning algorithm and is treated as an external disturbance. Nonparametric Gaussian processes normally require a large number of training data points to deal with discontinuities in the estimated model. Although improved methods such as probabilistic inference for learning control (PILCO) do not require an explicit global model, since actions are obtained by directly searching the policy space, overfitting and insufficient model complexity may still cause a large deviation between the prediction and the real system. Moreover, none of these approaches accounts for data error during training or measurement noise during testing. We propose a hierarchical Gaussian process (GP) model containing two layers of independent GPs, from which a physically continuous probability transition model of the robot is obtained. Owing to this physically continuous estimation, the algorithm overcomes overfitting with a guaranteed model complexity, and the amount of training data required is also reduced. The policy for any given initial state is generated automatically by minimizing the expected cost according to the predefined cost function and the obtained probability distribution of the state. Furthermore, a novel Q(λ)-based MFRL scheme is employed to improve the policy. Simulation results show that the proposed RL algorithm is able to balance the NAO robot on a rotating platform and can adapt to a platform with varying angular velocity.
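As a pointer to the model-learning component, the following sketch fits a plain single-layer GP regression with an RBF kernel to one-step transition data; it is not the paper's two-layer hierarchical GP, and the kernel hyperparameters, noise level and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise_var=1e-2):
    """Posterior mean and variance of a GP fitted to (X_train, y_train)."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    K_ss = rbf_kernel(X_test, X_test)
    mean = K_s @ np.linalg.solve(K, y_train)
    var = np.diag(K_ss - K_s @ np.linalg.solve(K, K_s.T))
    return mean, var

# toy usage: predict the next platform state from (state, action) pairs
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))                              # inputs
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.normal(size=30)    # next state
mean, var = gp_predict(X, y, np.array([[0.5, -0.2]]))
print(f"predicted next state: {mean[0]:.3f} +/- {np.sqrt(var[0]):.3f}")
```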
Funding: Supported by the National Natural Science Foundation (NNSF) of China (61603196, 61503079, 61520106009, 61533008), the Natural Science Foundation of Jiangsu Province of China (BK20150851), the China Postdoctoral Science Foundation (2015M581842), the Jiangsu Postdoctoral Science Foundation (1601259C), the Nanjing University of Posts and Telecommunications Science Foundation (NUPTSF) (NY215011), the Priority Academic Program Development of Jiangsu Higher Education Institutions, the open fund of the Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education (MCCSE2015B02), and the Research Innovation Program for College Graduates of Jiangsu Province (CXLX1309).
Abstract: The iterated prisoner's dilemma (IPD) is an ideal model for analyzing interactions between agents in complex networks. It has attracted wide interest in the development of novel strategies since the success of tit-for-tat in Axelrod's tournament. This paper studies a new adaptive IPD strategy in different complex networks, where agents can learn and adapt their strategies through reinforcement learning. A temporal difference learning method is applied to design the adaptive strategy and optimize the agents' decision-making process. Previous studies indicated that mutual cooperation is hard to achieve in the IPD. Three examples based on a square lattice network and a scale-free network are therefore provided to show two features of the adaptive strategy. First, mutual cooperation can be achieved by a group of adaptive agents in a scale-free network, and once evolution has converged to mutual cooperation, it is unlikely to shift. Second, the adaptive strategy earns a better payoff than other strategies in the square lattice network. The analytical properties are discussed to verify the evolutionary stability of the adaptive strategy.
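To illustrate the kind of TD-based adaptive agent described above, here is a toy sketch in which the state is the previous joint move and a Q-table is updated from the stage payoffs while playing against a tit-for-tat opponent; the payoff values, learning rates and state encoding are standard choices rather than the paper's exact setup.

```python
import random

# standard prisoner's dilemma stage payoffs for (my_move, opponent_move)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ACTIONS = ["C", "D"]

def choose(Q, state, eps=0.1):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def play_ipd(rounds=2000, alpha=0.1, gamma=0.95, seed=0):
    """Learn against a tit-for-tat opponent; the state is the last joint move."""
    random.seed(seed)
    Q = {((m, o), a): 0.0 for m in ACTIONS for o in ACTIONS for a in ACTIONS}
    state = ("C", "C")
    for _ in range(rounds):
        my = choose(Q, state)
        opp = state[0]                        # tit-for-tat: copy my last move
        r = PAYOFF[(my, opp)]
        next_state = (my, opp)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, my)] += alpha * (r + gamma * best_next - Q[(state, my)])
        state = next_state
    return Q

Q = play_ipd()
print("values after mutual cooperation:",
      {a: round(Q[(("C", "C"), a)], 2) for a in ACTIONS})
```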
Abstract: Offline reinforcement learning makes the learned policy approximate the behaviour policy by reducing distribution shift, but the data distribution of the offline experience buffer often directly affects the quality of the learned policy. To improve the training of reinforcement learning agents by optimizing the sampling model, two offline prioritized sampling models are proposed: a sampling model based on the temporal-difference (TD) error and a sampling model based on martingales. The TD-error-based sampling model lets the agent learn more from experience whose value estimates are inaccurate, so that a more accurate value function can cope with possible out-of-distribution states. The martingale-based sampling model lets the agent learn more from positive samples that benefit policy optimization and reduces the influence of negative samples on value-function iteration. Furthermore, the proposed offline prioritized sampling models are combined with batch-constrained deep Q-learning (BCQ) to obtain TD-error-based prioritized BCQ and martingale-based prioritized BCQ. Experimental results on the D4RL and Torcs datasets show that the proposed offline prioritized sampling models can selectively choose experience that benefits value-function estimation or policy optimization and achieve higher returns.
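A minimal sketch of the first sampling model, TD-error-based prioritized sampling from a fixed offline buffer, is given below; the priority exponent, the way priorities are refreshed and the class name are assumptions, and the martingale-based model is not shown.

```python
import numpy as np

class PrioritizedOfflineBuffer:
    """Fixed offline buffer that samples transitions in proportion to |TD error|."""

    def __init__(self, transitions, alpha=0.6, eps=1e-3):
        self.transitions = transitions              # list of (s, a, r, s', done)
        self.alpha = alpha                          # priority exponent
        self.eps = eps                              # keeps priorities positive
        self.priorities = np.ones(len(transitions))

    def sample(self, batch_size, rng):
        p = self.priorities ** self.alpha
        p /= p.sum()
        idx = rng.choice(len(self.transitions), size=batch_size, p=p)
        return idx, [self.transitions[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        # transitions whose values are estimated poorly get sampled more often
        self.priorities[idx] = np.abs(td_errors) + self.eps

# toy usage with made-up scalar TD errors standing in for the critic's output
rng = np.random.default_rng(0)
buf = PrioritizedOfflineBuffer([(i, 0, 0.0, i + 1, False) for i in range(100)])
idx, batch = buf.sample(8, rng)
buf.update_priorities(idx, td_errors=rng.normal(size=8))
print("sampled indices:", idx)
```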
Abstract: To address the poor convergence stability of the deep Q-network (DQN) algorithm caused by overestimation, the concept of an N-th-order TD error is proposed on the basis of the traditional temporal difference (TD), and a dual-network DQN algorithm based on the second-order TD error is designed. A value-function update rule based on the second-order TD error is constructed and, combined with the DQN algorithm, a dual-network model is built in which two homologous value-function networks represent the value functions of two successive rounds and update their parameters cooperatively, improving the stability of value-function estimation in DQN. Experimental results on the OpenAI Gym platform show that, on the Mountain Car and Cart Pole problems, the algorithm achieves better convergence stability than the classic DQN.
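The abstract does not give the second-order TD error formula, so the following is only a hedged sketch of one plausible reading: keep two homologous value functions for successive rounds, form first-order TD errors under each, and damp the update with their difference. Every formula and coefficient here is an assumption.

```python
import numpy as np

def td_error(q, s, a, r, s_next, gamma=0.99):
    """First-order TD error under a tabular value function q."""
    return r + gamma * q[s_next].max() - q[s, a]

def second_order_update(q_curr, q_prev, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Update q_curr with a TD error damped by its previous-round counterpart."""
    delta1 = td_error(q_curr, s, a, r, s_next, gamma)   # current round
    delta0 = td_error(q_prev, s, a, r, s_next, gamma)   # previous round
    delta2 = delta1 - delta0                            # "second-order" TD error
    q_prev[s, a] = q_curr[s, a]                         # roll the two networks
    q_curr[s, a] += alpha * (delta1 - 0.5 * delta2)     # damped correction
    return q_curr, q_prev

# toy usage on a 4-state, 2-action table
q_curr, q_prev = np.zeros((4, 2)), np.zeros((4, 2))
q_curr, q_prev = second_order_update(q_curr, q_prev, s=0, a=1, r=1.0, s_next=2)
print(q_curr[0])
```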