Funding: the National Natural Science Foundation of China (61603094).
Abstract: Behavior-based autonomous systems rely on human intelligence to resolve multi-mission conflicts through hand-designed mission priority rules and nonlinear controllers. In this work, a novel two-layer reinforcement learning behavioral control (RLBC) method is proposed to reduce such dependence through trial-and-error learning. Specifically, in the upper layer, a reinforcement learning mission supervisor (RLMS) is designed to learn the optimal mission priority. Compared with existing mission supervisors, the RLMS improves the dynamic performance of mission priority adjustment by maximizing cumulative rewards, and it reduces hardware storage demand when neural networks are used. In the lower layer, a reinforcement learning controller (RLC) is designed to learn the optimal control policy. Compared with existing behavioral controllers, the RLC reduces the control cost of mission priority adjustment by balancing control performance against consumption. All error signals are proved to be semi-globally uniformly ultimately bounded (SGUUB). Simulation results show that the number of mission priority adjustments and the control cost are significantly reduced compared with some existing mission supervisors and behavioral controllers, respectively.
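The abstract leaves the RLMS internals unspecified, but the upper layer amounts to a sequential decision problem over mission priorities solved by maximizing cumulative reward. Below is a minimal tabular Q-learning sketch of such a supervisor; the mission names, the toy conflict signal, and the reward shaping are hypothetical illustrations, not the paper's design.

```python
import itertools
import random
from collections import defaultdict

# Hypothetical missions; the real RLMS state/action design is not given
# in the abstract, so everything below is an illustrative assumption.
MISSIONS = ["avoid_obstacle", "reach_goal", "keep_formation"]
ACTIONS = list(itertools.permutations(range(len(MISSIONS))))  # candidate priority orders

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(lambda: [0.0] * len(ACTIONS))  # tabular Q-values keyed by state

def choose_action(state):
    """Epsilon-greedy choice of a mission priority ordering."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    q = Q[state]
    return q.index(max(q))

def update(state, action, reward, next_state):
    """One-step Q-learning update, maximizing discounted cumulative reward."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# Toy training loop: the state is the index of the mission currently in
# conflict, and the reward favors orderings that rank that mission first.
state = random.randrange(len(MISSIONS))
for _ in range(2000):
    action = choose_action(state)
    reward = 1.0 if ACTIONS[action][0] == state else -1.0
    next_state = random.randrange(len(MISSIONS))
    update(state, action, reward, next_state)
    state = next_state

# Greedy priority ordering learned for each conflict state.
print({s: MISSIONS[ACTIONS[q.index(max(q))][0]] for s, q in Q.items()})
```

After a few thousand steps, each state's greedy ordering ranks the conflicting mission first, so the learned policy rarely needs to switch priorities thereafter, which is the kind of adjustment-count reduction the abstract reports.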
Funding: Project supported by the National Natural Science Foundation of China (No. 92367109).
Abstract: Reinforcement learning behavioral control (RLBC) is limited to an individual agent without any swarm mission, because it models behavior priority learning as a Markov decision process. In this paper, a novel multi-agent reinforcement learning behavioral control (MARLBC) method is proposed to overcome this limitation through joint learning. Specifically, a multi-agent reinforcement learning mission supervisor (MARLMS) is designed for a group of nonlinear second-order systems to assign behavior priorities at the decision layer. By modeling behavior priority switching as a cooperative Markov game, the MARLMS learns an optimal joint behavior priority, reducing dependence on human intelligence and high-performance computing hardware. At the control layer, a group of second-order reinforcement learning controllers are designed to learn the optimal control policies for tracking position and velocity signals simultaneously. In particular, input saturation constraints are strictly enforced by designing a group of adaptive compensators. Numerical simulation results show that the proposed MARLBC achieves a lower switching frequency and control cost than finite-time and fixed-time behavioral control and RLBC methods.
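The second paper casts behavior priority switching as a cooperative Markov game learned jointly by the group. As a rough analogue, the sketch below uses independent tabular Q-learners coupled by a shared team reward, a common baseline for cooperative Markov games; the team size, state encoding, and reward are assumptions for illustration and stand in for the MARLMS rather than reproduce it.

```python
import random
from collections import defaultdict

# Illustrative sizes; the abstract does not specify them.
N_AGENTS, N_BEHAVIORS = 3, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# One tabular Q per agent, all conditioned on a shared (global) state.
Q = [defaultdict(lambda: [0.0] * N_BEHAVIORS) for _ in range(N_AGENTS)]

def act(i, state):
    """Epsilon-greedy behavior-priority choice for agent i."""
    if random.random() < EPSILON:
        return random.randrange(N_BEHAVIORS)
    q = Q[i][state]
    return q.index(max(q))

def step(state, actions):
    """Toy cooperative stage game: the team earns a shared reward when every
    agent selects the behavior that matches the current swarm situation."""
    reward = 1.0 if all(a == state for a in actions) else -0.1
    return reward, random.randrange(N_BEHAVIORS)

for episode in range(500):
    state = random.randrange(N_BEHAVIORS)
    for _ in range(20):
        actions = [act(i, state) for i in range(N_AGENTS)]
        reward, next_state = step(state, actions)
        for i in range(N_AGENTS):  # the shared reward couples the learners
            target = reward + GAMMA * max(Q[i][next_state])
            Q[i][state][actions[i]] += ALPHA * (target - Q[i][state][actions[i]])
        state = next_state
```

Because every agent is updated with the same team reward, the learners converge toward a consistent joint priority assignment, the cooperative-game counterpart of the single-agent MDP formulation that the earlier RLBC work used.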