Funding: the National Natural Science Foundation of China (61603094).
Abstract: Behavior-based autonomous systems rely on human intelligence to resolve multi-mission conflicts by designing mission priority rules and nonlinear controllers. In this work, a novel two-layer reinforcement learning behavioral control (RLBC) method is proposed to reduce such dependence through trial-and-error learning. Specifically, in the upper layer, a reinforcement learning mission supervisor (RLMS) is designed to learn the optimal mission priority. Compared with existing mission supervisors, the RLMS improves the dynamic performance of mission priority adjustment by maximizing cumulative rewards, and reduces hardware storage demand when using neural networks. In the lower layer, a reinforcement learning controller (RLC) is designed to learn the optimal control policy. Compared with existing behavioral controllers, the RLC reduces the control cost of mission priority adjustment by balancing control performance and consumption. All error signals are proven to be semi-globally uniformly ultimately bounded (SGUUB). Simulation results show that the number of mission priority adjustments and the control cost are significantly reduced compared with some existing mission supervisors and behavioral controllers, respectively.
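The upper-layer supervisor described above learns a mission-priority ordering by maximizing cumulative reward. As a rough illustration of that idea, the following is a minimal tabular Q-learning sketch of a priority supervisor; the class name, state/action encoding, reward shape, and hyperparameters are all illustrative assumptions, not the paper's actual RLMS design (which the abstract only summarizes).

```python
import numpy as np

class PriorityQLearner:
    """Tabular Q-learning sketch of a mission supervisor: in each conflict
    state, learn which priority ordering (action index) maximizes the
    discounted cumulative reward. Illustrative only."""

    def __init__(self, n_states, n_orderings, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.Q = np.zeros((n_states, n_orderings))  # Q-value table
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng(seed)

    def act(self, s):
        # epsilon-greedy choice of a priority ordering for state s
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.Q.shape[1]))
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, r, s_next):
        # standard one-step temporal-difference update
        td_target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (td_target - self.Q[s, a])
```

Repeatedly rewarding one ordering in a given conflict state drives the greedy policy toward that ordering, which is the mechanism (in miniature) by which a learned supervisor can replace hand-written priority-switching rules.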
Funding: supported by the National Natural Science Foundation of China (No. 61603094).
Abstract: In this study, a novel reinforcement learning task supervisor (RLTS) with memory in a behavioral control framework is proposed for human–multi-robot coordination systems (HMRCSs). Existing HMRCSs suffer from high decision-making time costs and large task tracking errors caused by repeated human intervention, which restricts the autonomy of multi-robot systems (MRSs). Moreover, existing task supervisors in the null-space-based behavioral control (NSBC) framework require many manually formulated priority-switching rules, which makes it difficult to realize an optimal behavioral priority adjustment strategy in the case of multiple robots and multiple tasks. The proposed RLTS with memory integrates a deep Q-network (DQN) and a long short-term memory (LSTM) knowledge base within the NSBC framework, to achieve an optimal behavioral priority adjustment strategy in the presence of task conflict and to reduce the frequency of human intervention. Specifically, the proposed RLTS with memory memorizes the human intervention history when the robot systems lack confidence in emergencies, and then reloads this history when encountering a situation that humans have previously resolved. Simulation results demonstrate the effectiveness of the proposed RLTS. Finally, an experiment using a group of mobile robots subject to external noise and disturbances validates the effectiveness of the proposed RLTS with memory in uncertain real-world environments.
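The NSBC framework named above resolves task conflicts by letting each lower-priority task act only in the null space of all higher-priority tasks. A minimal NumPy sketch of that velocity composition follows; the function name and the two-task example are illustrative, not taken from the paper.

```python
import numpy as np

def nsbc_velocity(jacobians, task_vels):
    """Null-space-based behavioral control composition.

    Tasks are given in descending priority order; each task i contributes
    J_i^+ @ xdot_i projected into the null space of all higher-priority
    task Jacobians, so lower-priority motion never disturbs them.
    """
    n = jacobians[0].shape[1]            # dimension of the robot velocity
    v = np.zeros(n)                      # composed velocity command
    N = np.eye(n)                        # projector onto remaining null space
    for J, xd in zip(jacobians, task_vels):
        J_pinv = np.linalg.pinv(J)       # Moore-Penrose pseudoinverse
        v = v + N @ (J_pinv @ xd)        # add projected task contribution
        N = N @ (np.eye(n) - J_pinv @ J) # shrink the available null space
    return v
```

With two compatible one-dimensional tasks the contributions simply add; if a lower-priority task conflicts with (shares the row space of) a higher-priority one, its projected contribution is zero, which is exactly the behavior a priority-switching supervisor exploits when it reorders the task stack.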