Funding: Supported by the National Natural Science Foundation of China (No. 61603094).
Abstract: In this study, a novel reinforcement learning task supervisor (RLTS) with memory in a behavioral control framework is proposed for human–multi-robot coordination systems (HMRCSs). Existing HMRCSs suffer from high decision-making time costs and large task tracking errors caused by repeated human intervention, which restricts the autonomy of multi-robot systems (MRSs). Moreover, existing task supervisors in the null-space-based behavioral control (NSBC) framework require many manually formulated priority-switching rules, which makes it difficult to realize an optimal behavioral priority adjustment strategy with multiple robots and multiple tasks. The proposed RLTS with memory integrates a deep Q-network (DQN) and a long short-term memory (LSTM) knowledge base within the NSBC framework to achieve an optimal behavioral priority adjustment strategy in the presence of task conflicts and to reduce the frequency of human intervention. Specifically, the RLTS memorizes the human intervention history when the robot system lacks confidence in emergencies, and reloads that history when it encounters a situation that a human has previously handled. Simulation results demonstrate the effectiveness of the proposed RLTS. Finally, an experiment with a group of mobile robots subject to external noise and disturbances validates the effectiveness of the proposed RLTS with memory in uncertain real-world environments.
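The NSBC framework mentioned above composes prioritized task velocities by projecting each lower-priority command into the null space of all higher-priority task Jacobians, so that a lower-priority behavior can never disturb a higher-priority one. The following is a minimal illustrative sketch of that composition rule only; the Jacobians, task velocities, and function name are assumptions for the example and are not taken from the paper, which builds its supervisor on top of this mechanism.

```python
import numpy as np

def nsbc_compose(tasks):
    """Null-space-based behavioral control (NSBC) composition sketch.

    tasks: list of (J, xdot) pairs in descending priority order, where
    J is the task Jacobian and xdot the desired task-space velocity.
    Returns the combined velocity command for the robot.
    """
    n = tasks[0][0].shape[1]            # dimension of the velocity command
    v = np.zeros(n)                     # accumulated velocity command
    N = np.eye(n)                       # accumulated null-space projector
    for J, xdot in tasks:
        J_pinv = np.linalg.pinv(J)
        # Project this task's correction into the null space of all
        # higher-priority tasks, so it cannot disturb them.
        v = v + N @ J_pinv @ (xdot - J @ v)
        # Shrink the remaining null space by this task's own projector.
        N = N @ (np.eye(n) - J_pinv @ J)
    return v
```

With two compatible tasks (e.g. one constraining each axis of a planar robot), both are achieved exactly; if the second task conflicts with the first, its contribution is annihilated by the projector and the higher-priority task is preserved. This is exactly the behavior whose priority ordering the paper's RLTS learns to switch.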