Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62225305 and 12072088); the Fundamental Research Funds for the Central Universities, China (Grant Nos. HIT.BRET.2022004, HIT.OCEF.2022047, and HIT.DZIJ.2023049); Grant JCKY2022603C016 of the State Key Laboratory of Robotics and System (HIT); and the Heilongjiang Touyan Team.
Abstract: This paper addresses the issue of safety in reinforcement learning (RL) with disturbances and its application in the safety-constrained motion control of autonomous robots. To tackle this problem, a robust Lyapunov value function (rLVF) is proposed. The rLVF is obtained by introducing a data-based LVF under the worst-case disturbance of the observed state. Using the rLVF, a uniformly ultimate boundedness criterion is established. This criterion is designed to ensure that the cost function, which serves as a safety criterion, ultimately converges to a bounded range under the policy to be designed. Moreover, to mitigate drastic variation of the rLVF across different states, a smoothing regularization of the rLVF is introduced. To train policies with safety guarantees under the worst-case disturbances of the observed states, an off-policy robust RL algorithm is proposed. The proposed algorithm is applied to motion control tasks of an autonomous vehicle and a cartpole, which involve external disturbances and variations of the model parameters, respectively. The experimental results demonstrate the effectiveness of the theoretical findings and the advantages of the proposed algorithm in terms of robustness and safety.
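The abstract does not give the rLVF construction in detail. The following minimal sketch (the dynamics, cost, value model, and all names are illustrative assumptions, not the paper's formulation) shows the two ideas it mentions: a one-step value backup taken under the worst case over a finite disturbance set, and a smoothing regularizer that penalizes rapid variation of the value function between nearby states.

```python
# Hypothetical toy sketch of a worst-case (robust) value backup with a
# smoothing regularizer. All dynamics, costs, and names are illustrative.

def step(x, u, w):
    """Scalar toy dynamics: next state under control u and disturbance w."""
    return 0.9 * x + u + w

def cost(x):
    """Safety cost: squared distance of the state from the safe origin."""
    return x * x

def robust_backup(V, x, u, disturbances, gamma=0.95):
    """One-step Bellman backup for V under the worst-case disturbance."""
    return cost(x) + gamma * max(V(step(x, u, w)) for w in disturbances)

def smooth_penalty(V, x, eps=0.05):
    """Smoothing regularizer: penalize rapid variation of V near x."""
    return (V(x + eps) - V(x)) ** 2

# Crude quadratic value model V(x) = c * x^2.
def make_V(c):
    return lambda x: c * x * x

disturbances = [-0.1, 0.0, 0.1]
V = make_V(2.0)
target = robust_backup(V, x=1.0, u=-0.5, disturbances=disturbances)
print(target)                    # worst-case backup target for this state
print(smooth_penalty(V, 1.0))    # smoothing penalty at the same state
```

Taking the maximum over the disturbance set makes the backup target pessimistic, so a policy trained against it inherits a margin against the worst observed disturbance; the smoothing term discourages the fitted value function from changing drastically between nearby states, as the abstract describes.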
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12072088, 62003117, and 62003118); the National Defense Basic Scientific Research Program of China (Grant No. JCKY2020603B010); and the Natural Science Foundation of Heilongjiang Province, China (Grant No. ZD2020F001).
Abstract: This paper is concerned with the problem of mapless navigation for unmanned aerial vehicles in scenarios with limited sensor accuracy and computing capability. A novel learning-based algorithm called soft actor-critic from demonstrations (SACfD) is proposed, integrating reinforcement learning with imitation learning. Specifically, the maximum entropy reinforcement learning framework is introduced to enhance the exploration capability of the algorithm, upon which the paper explores a way to fully leverage demonstration data to significantly accelerate convergence while reliably improving policy performance. Further, the proposed algorithm enables mapless navigation for unmanned aerial vehicles, and experimental results show that it outperforms existing algorithms.
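The abstract describes SACfD only at a high level. One common mechanism by which "from demonstrations" algorithms leverage demonstration data is a replay buffer pre-seeded with demonstration transitions that are mixed into every minibatch at a fixed fraction; the sketch below illustrates that idea. The class name, mixing ratio, and API are illustrative assumptions, not the paper's implementation.

```python
import random

class MixedReplayBuffer:
    """Replay buffer pre-seeded with demonstrations; each minibatch mixes
    demonstration and online transitions at a fixed fraction, so early
    updates benefit from the demonstration data."""

    def __init__(self, demos, demo_fraction=0.25):
        self.demos = list(demos)      # fixed demonstration transitions
        self.online = []              # transitions collected by the agent
        self.demo_fraction = demo_fraction

    def add(self, transition):
        self.online.append(transition)

    def sample(self, batch_size):
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demos))
        batch = random.sample(self.demos, n_demo)
        if self.online:
            # sample with replacement so small early online pools still work
            batch += random.choices(self.online, k=batch_size - n_demo)
        return batch

demo_data = [("demo", i) for i in range(10)]
buf = MixedReplayBuffer(demo_data, demo_fraction=0.25)
for i in range(5):
    buf.add(("online", i))
batch = buf.sample(8)   # 2 demonstration + 6 online transitions
```

Algorithms in this family often also add an imitation term to the actor loss on the demonstration samples; that detail is omitted here for brevity.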