Funding: supported by the National Natural Science Foundation of China (Grant No. 62106283), the National Natural Science Foundation of China (Grant No. 72001214), which provided funds for conducting the experiments, and the Natural Science Foundation of Shaanxi Province (Grant No. 2020JQ-484).
Abstract: The scale of ground-to-air confrontation task assignment is large, and many concurrent task assignments and random events must be handled. Existing task assignment methods applied to ground-to-air confrontation suffer from low efficiency on complex tasks and from interaction conflicts in multi-agent systems. This study proposes a multi-agent architecture based on one general agent with multiple narrow agents (OGMN) to reduce task assignment conflicts. Considering the slow speed of traditional dynamic task assignment algorithms, this paper proposes the proximal policy optimization for task assignment of general and narrow agents (PPO-TAGNA) algorithm. Based on the idea of the optimal assignment strategy and combined with the training framework of deep reinforcement learning (DRL), the algorithm adds a multi-head attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to address low training efficiency. Finally, simulation experiments are carried out on the digital battlefield. The multi-agent architecture based on OGMN combined with the PPO-TAGNA algorithm obtains higher rewards faster and achieves a higher win ratio. By analyzing agent behavior, the efficiency, superiority and rationality of resource utilization of this method are verified.
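To make the training machinery concrete, below is a minimal PyTorch sketch of two of the ingredients the abstract names: a multi-head attention encoder that scores candidate tasks, and a PPO surrogate with an extra lower-bound clip. Reading "bilateral band clipping" as dual-clip PPO, along with the layer sizes, clipping constants and the per-task logit head, is an assumption made for illustration rather than a detail taken from the paper; the stage reward mechanism is not shown.

```python
import torch
import torch.nn as nn

class AttentionTaskEncoder(nn.Module):
    """Score candidate tasks with multi-head self-attention over task features."""
    def __init__(self, feat_dim=16, embed_dim=64, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 1)          # one assignment logit per task

    def forward(self, tasks):                        # tasks: (batch, n_tasks, feat_dim)
        x = self.proj(tasks)
        x, _ = self.attn(x, x, x)                    # candidate tasks attend to each other
        return self.head(x).squeeze(-1)              # (batch, n_tasks) logits

def dual_clip_ppo_loss(logp_new, logp_old, adv, eps=0.2, c=3.0):
    """PPO clipped surrogate with a second (lower-bound) clip for negative advantages."""
    ratio = torch.exp(logp_new - logp_old)
    surr = torch.min(ratio * adv, torch.clamp(ratio, 1 - eps, 1 + eps) * adv)
    clipped = torch.where(adv < 0, torch.max(surr, c * adv), surr)
    return -clipped.mean()
```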
Funding: supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)), by the National Natural Science Foundation of China under Grant No. 61971264, and by the National Natural Science Foundation of China/Research Grants Council Collaborative Research Scheme under Grant No. 62261160390.
Abstract: Due to the fading characteristics of wireless channels and the burstiness of data traffic, how to deal with congestion in ad-hoc networks with effective algorithms remains open and challenging. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To effectively solve the congestion problem, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low because of the regional cooperation based on the graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability in different topologies. The method is general and can be extended to various types of topologies.
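As a rough illustration of the local decision step described in the abstract above, the sketch below lets one node attend over its neighbors' channel-state and queue-length features with a single graph-attention layer and map the aggregate to a distribution over discrete power levels. The feature dimensions, the discrete power set and the one-layer attention form follow the generic graph attention network recipe and are assumptions, not the article's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class LocalPowerPolicy(nn.Module):
    """One node's policy: attend over neighbor features, pick a discrete power level."""
    def __init__(self, feat_dim=2, hid=32, n_power_levels=5):
        super().__init__()
        self.w = nn.Linear(feat_dim, hid, bias=False)        # shared feature map
        self.a = nn.Linear(2 * hid, 1, bias=False)            # attention scorer
        self.out = nn.Linear(hid, n_power_levels)

    def forward(self, own, neigh):
        # own: (feat_dim,) local CSI + queue length; neigh: (n_neighbors, feat_dim)
        h_i = self.w(own)                                      # (hid,)
        h_j = self.w(neigh)                                    # (n, hid)
        pair = torch.cat([h_i.expand_as(h_j), h_j], dim=-1)    # (n, 2*hid)
        alpha = F.softmax(F.leaky_relu(self.a(pair)).squeeze(-1), dim=0)
        agg = (alpha.unsqueeze(-1) * h_j).sum(dim=0)           # attention-weighted neighborhood
        return Categorical(logits=self.out(agg + h_i))

# usage sketch: dist = policy(own_obs, neighbor_obs); power_index = dist.sample()
```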
Funding: supported by the Key Laboratory of Information System Requirement (No. LHZZ202202), the Natural Science Foundation of Xinjiang Uyghur Autonomous Region (2023D01C55), and the Scientific Research Program of the Higher Education Institutions of Xinjiang (XJEDU2023P127).
Abstract: In recent years, with the continuous development of deep learning and knowledge graph reasoning methods, more and more researchers have shown great interest in improving knowledge graph reasoning by inferring missing facts. By searching paths on the knowledge graph and making fact and link predictions based on these paths, deep-learning-based reinforcement learning (RL) agents can demonstrate good performance and interpretability. Deep-reinforcement-learning-based knowledge reasoning methods have therefore emerged rapidly in recent years and have become a hot research topic. However, even in a small and fixed knowledge graph reasoning action space, there are still a large number of invalid actions. Selecting an invalid action often interrupts the RL agent's walk, significantly decreasing the success rate of path mining. In order to improve the success rate of RL agents in the early stages of path search, this article proposes a knowledge reasoning method based on a deep transfer reinforcement learning path (DTRLpath). Before supervised pre-training and retraining, a pre-task of searching for effective actions in a single step is added. The RL agent is first trained on the pre-task to improve its ability to search for effective actions. The trained agent is then transferred to the target reasoning task for path-search training, which improves its success rate in searching for target task paths. Finally, comparative experiments on the FB15K-237 and NELL-995 datasets show that the proposed method significantly improves the success rate of path search and outperforms similar methods in most reasoning tasks.
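A minimal sketch of the pre-task idea: before path-search training, the agent is rewarded simply for choosing an outgoing relation that actually exists at the current entity, so invalid single-step actions are discouraged early. The tabular policy, the graph encoding and the reward values below are placeholders for illustration, not the paper's implementation.

```python
import random

def valid_action_pretask(graph, policy, episodes=1000, lr=0.1):
    """Pre-task: learn which single-step actions (relations) are actually usable.

    graph:  dict entity -> set of (relation, next_entity) outgoing edges
    policy: dict (entity, relation) -> preference score (tabular for simplicity)
    """
    relations = [r for edges in graph.values() for r, _ in edges]
    for _ in range(episodes):
        entity = random.choice(list(graph))
        relation = random.choice(relations)              # candidate single-step action
        valid = any(rel == relation for rel, _ in graph[entity])
        reward = 1.0 if valid else -1.0                  # pre-task reward: is the action usable?
        key = (entity, relation)
        policy[key] = policy.get(key, 0.0) + lr * (reward - policy.get(key, 0.0))
    return policy

# the learned preferences would then initialise (transfer to) the path-search agent
```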
Funding: supported by the National Key R&D Program of China (2018AAA0101400), the National Natural Science Foundation of China (62173251, 61921004, U1713209), the Natural Science Foundation of Jiangsu Province of China (BK20202006), and the Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control.
Abstract: In this paper, a reinforcement learning method for cooperative multi-agent systems (MAS) with an incremental number of agents is studied. Existing multi-agent reinforcement learning approaches deal with a MAS with a specific number of agents and can learn well-performing policies. However, if the number of agents increases, the previously learned policies may not perform well in the new scenario. The new agents need to learn from scratch to find optimal policies with the others, which may slow down the learning speed of the whole team. To solve this problem, we propose a new algorithm that takes full advantage of the historical knowledge learned before and transfers it from the previous agents to the new agents. Since the previous agents have been trained well in the source environment, they are treated as teacher agents in the target environment; correspondingly, the new agents are called student agents. To enable the student agents to learn from the teacher agents, we first modify the input nodes of the teacher agents' networks to adapt to the current environment. Then, the teacher agents take the observations of the student agents as input and output advised actions and values as supervising information. Finally, the student agents combine the reward from the environment with the supervising information from the teacher agents and learn optimal policies with modified loss functions. By taking full advantage of the teacher agents' knowledge, the search space for the student agents is reduced significantly, which accelerates the learning speed of the holistic system. The proposed algorithm is verified in several multi-agent simulation environments, and its efficiency is demonstrated by the experimental results.
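The modified loss described above can be sketched as an ordinary actor-critic loss plus two supervision terms that pull the student toward the teacher's advised action distribution and value. The KL and mean-squared forms of the supervision terms and the weighting coefficients are assumptions; only the overall structure (environment reward plus teacher advice) comes from the abstract.

```python
import torch.nn.functional as F

def student_loss(student_logits, student_value, action_taken, env_return,
                 teacher_logits, teacher_value,
                 beta_policy=0.5, beta_value=0.5):
    """Actor-critic loss for a student agent plus supervision from a teacher agent.

    student_logits/teacher_logits: (batch, n_actions); student_value/teacher_value
    and env_return: (batch,); action_taken: (batch,) long tensor of executed actions.
    """
    logp = F.log_softmax(student_logits, dim=-1)
    advantage = (env_return - student_value).detach()
    # standard terms driven by the environment reward
    policy_loss = -(logp.gather(-1, action_taken.unsqueeze(-1)).squeeze(-1) * advantage).mean()
    value_loss = F.mse_loss(student_value, env_return)
    # supervision terms: match the teacher's advised action distribution and value
    distill = F.kl_div(logp, F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    value_guide = F.mse_loss(student_value, teacher_value)
    return policy_loss + value_loss + beta_policy * distill + beta_value * value_guide
```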
Funding: supported by the National Natural Science Foundation of China (61070143, 61173088).
Abstract: For multi-agent reinforcement learning in Markov games, knowledge extraction and sharing are key research problems. State list extracting means calculating the optimal shared state path from state trajectories that contain cycles. A state list extracting algorithm checks the cyclic state lists of the current state in the state trajectory, condensing the optimal action set of the current state. By reinforcing the selected optimal action, the action policy of cyclic states is optimized gradually. The extracted state list is learned repeatedly and used as experience knowledge shared by the team, and agents speed up their rate of convergence through experience sharing. Prey-predator competition games are used for the experiments. The experimental results prove that the proposed algorithms overcome the lack of experience in the initial stage, speed up learning and improve performance.
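A small sketch of the cycle-condensing step: whenever the trajectory revisits a state, the loop between the two visits is cut out, leaving a shorter state list that teammates can share as experience. The list-based representation below is an illustrative assumption, not the paper's data structure.

```python
def extract_state_list(trajectory):
    """Collapse cycles in a state trajectory, keeping the first acyclic path.

    trajectory: list of hashable states, possibly containing repeated states.
    Returns the condensed state list with every cycle removed.
    """
    path, seen = [], {}
    for state in trajectory:
        if state in seen:
            # a cycle closed at `state`: drop everything after its first visit
            cut = seen[state] + 1
            for s in path[cut:]:
                del seen[s]
            path = path[:cut]
        else:
            seen[state] = len(path)
            path.append(state)
    return path

# extract_state_list(['a', 'b', 'c', 'b', 'd'])  ->  ['a', 'b', 'd']
```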
Abstract: Single-agent reinforcement learning (RL) is commonly used to learn how to play computer games, in which the agent makes one move before making the next in a sequential decision process. Recently, single agents have also been employed in the design of molecules and drugs. While a single agent is a good fit for computer games, it has limitations when used in molecule design: its sequential learning makes it impossible to modify or improve previous steps while working on the current step. In this paper, we propose applying the multi-agent RL approach to the study of molecules, which can optimize all sites of a molecule simultaneously. To demonstrate the validity of our approach, we chose the chemical compound Favipiravir and explored its local chemical space. Favipiravir is a broad-spectrum inhibitor of viral RNA polymerase and is one of the compounds currently being used in SARS-CoV-2 (COVID-19) clinical trials. Our experiments revealed the collaborative learning of a team of deep RL agents, as well as the learning of each individual agent, in the exploration of Favipiravir. In particular, our multi-agent approach discovered not only molecules near Favipiravir in chemical space, but also the learnability of each site in the string representation of Favipiravir, which is critical information for understanding the underlying mechanism that supports machine learning of molecules.
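As a toy illustration of the simultaneous, per-site optimization described above, the sketch below gives each agent ownership of one character position in a molecule string and lets all agents act in the same step. The placeholder policy, vocabulary and example string are assumptions; no real SMILES for Favipiravir or chemistry-aware scoring is included.

```python
import random

class SiteAgent:
    """One agent owns one site (character position) of the molecule string."""
    def __init__(self, site, vocabulary):
        self.site, self.vocab = site, vocabulary

    def propose(self, molecule):
        # placeholder policy: keep the current symbol or substitute a new one
        return random.choice(self.vocab + [molecule[self.site]])

def joint_step(molecule, agents):
    """All site agents act simultaneously, unlike a single sequential agent."""
    chars = list(molecule)
    for agent in agents:
        chars[agent.site] = agent.propose(molecule)
    return "".join(chars)

# usage (placeholder string, not Favipiravir's SMILES):
# agents = [SiteAgent(i, list("CNOF")) for i in range(len("CC(=O)N"))]
# candidate = joint_step("CC(=O)N", agents)
```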
Funding: sponsored by the Ministerial Level Foundation (70302).
Abstract: Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for a multi-robot cooperation task. A multi-robot cooperation experiment based on a multi-agent inverted pendulum is performed to test the efficiency of the new algorithm, and the experimental results show that the new algorithm achieves the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.
Funding: supported by the National Social Science Foundation of China (15ZDA034, 14BZZ028), the Beijing Social Science Foundation (16JDGLA036), and the JKF Program of People's Public Security University of China (2016JKF01318).
Abstract: Cooperative multi-agent reinforcement learning (MARL) is an important topic in the field of artificial intelligence, in which distributed constraint optimization (DCOP) algorithms have been widely used to coordinate the actions of multiple agents. However, dense communication among agents limits the practicability of DCOP algorithms. In this paper, we propose a novel DCOP algorithm that addresses the communication problem of previous DCOP algorithms by reducing constraints. The contributions of this paper are threefold: (1) it is proved that removing constraints can effectively reduce the communication burden of DCOP algorithms; (2) a criterion is provided to identify insignificant constraints whose elimination does not have a great impact on the performance of the whole system; (3) a constraint-reduced DCOP algorithm is proposed that adopts a variant of the spectral clustering algorithm to detect and eliminate the insignificant constraints. Our algorithm reduces the communication burden of the benchmark DCOP algorithm while keeping its overall performance unaffected. The performance of the constraint-reduced DCOP algorithm is evaluated on four configurations of cooperative sensor networks, and the effectiveness of the communication reduction is verified by comparisons between the constraint-reduced DCOP and the benchmark DCOP.
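A toy sketch of the constraint-reduction idea: model pairwise constraints as weighted edges of an agent graph, cluster the agents, and drop weak constraints that cross cluster boundaries so that less coordination traffic is needed. For brevity this uses scikit-learn's SpectralClustering and a median-weight significance threshold; the paper's own clustering variant and criterion are not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def reduce_constraints(n_agents, constraints, n_clusters=2):
    """Drop weak constraints that cross cluster boundaries.

    constraints: dict (i, j) -> weight with i < j, one entry per pairwise constraint.
    Returns the reduced constraint dictionary.
    """
    adj = np.zeros((n_agents, n_agents))
    for (i, j), w in constraints.items():
        adj[i, j] = adj[j, i] = w
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(adj)
    threshold = np.median(list(constraints.values()))
    # keep intra-cluster constraints; keep cross-cluster ones only if clearly significant
    return {(i, j): w for (i, j), w in constraints.items()
            if labels[i] == labels[j] or w > threshold}
```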
Abstract: Q-learning is a popular temporal-difference reinforcement learning algorithm which often explicitly stores state values using lookup tables. This implementation has been proven to converge to the optimal solution, but it is often beneficial to use a function-approximation system, such as deep neural networks, to estimate state values. It has been previously observed that Q-learning can be unstable when using value function approximation or when operating in a stochastic environment. This instability can adversely affect the algorithm's ability to maximize its returns. In this paper, we present a new algorithm called Multi Q-learning to attempt to overcome the instability seen in Q-learning. We test our algorithm on a 4 × 4 grid-world with different stochastic reward functions using various deep neural networks and convolutional networks. Our results show that in most cases Multi Q-learning outperforms Q-learning, achieving average returns up to 2.5 times higher than Q-learning and having a standard deviation of state values as low as 0.58.
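One plausible tabular reading of the Multi Q-learning idea is sketched below: several independent Q tables are kept, a randomly chosen one is updated at each step, and the average of all tables is used both for action selection and for the bootstrap target, which damps the variance of a single estimator. The number of tables and the averaging scheme are assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

class MultiQ:
    """Tabular Q-learning with an ensemble of Q estimates."""
    def __init__(self, actions, n_tables=4, alpha=0.1, gamma=0.95):
        self.actions, self.alpha, self.gamma = actions, alpha, gamma
        self.tables = [defaultdict(float) for _ in range(n_tables)]

    def q_avg(self, s, a):
        return sum(t[(s, a)] for t in self.tables) / len(self.tables)

    def act(self, s, eps=0.1):
        if random.random() < eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q_avg(s, a))

    def update(self, s, a, r, s_next):
        table = random.choice(self.tables)          # update one estimator per step
        target = r + self.gamma * max(self.q_avg(s_next, b) for b in self.actions)
        table[(s, a)] += self.alpha * (target - table[(s, a)])
```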
Funding: supported by the National Natural Science Foundation of China (60474035), the National Research Foundation for the Doctoral Program of Higher Education of China (20050359004), and the Natural Science Foundation of Anhui Province (070412035).
Abstract: Robot learning in unstructured environments has proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also use perceptions to guide their learning toward those parts of the perception-action space that are actually relevant to the task. We therefore conduct research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information, and propose a fuzzy reinforcement learning (FRL) agent in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local-minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find applications in ocean exploration, detection, sea rescue and military maritime activities.
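The GA component can be pictured as a search over the actor network's flattened weight vector, with a fitness signal that could come from the critic's predicted return. The sketch below is a generic elitist GA under that assumption; it is not the GAFRL agent's actual encoding, fitness function or operators.

```python
import numpy as np

def ga_search_actor_weights(fitness, dim, pop_size=20, generations=50,
                            sigma=0.1, elite_frac=0.25):
    """Simple elitist GA over flattened actor-network weights.

    fitness(w) is supplied by the caller, e.g. the critic's predicted return for
    the policy encoded by w (an assumption, not the paper's exact fitness).
    """
    pop = np.random.randn(pop_size, dim)
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]             # keep the best individuals
        # crossover: average two random elite parents, then mutate
        parents = elite[np.random.randint(n_elite, size=(pop_size, 2))]
        pop = parents.mean(axis=1) + sigma * np.random.randn(pop_size, dim)
        pop[:n_elite] = elite                                   # elitism
    return pop[np.argmax([fitness(w) for w in pop])]
```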
Funding: supported by the National Natural Science Foundation of China (61273137, 51209026, 61074017), the Scientific Research Fund of Liaoning Provincial Education Department (L2013202), and the Fundamental Research Funds for the Central Universities (3132013037, 3132014047, 3132014321).
Abstract: This paper introduces a computation model that mixes a genetic algorithm with reinforcement learning for independent agent learning in continuous, distributed, open environments. The model takes full advantage of the reactivity and robustness of the reinforcement learning algorithm and of the genetic algorithm's suitability for problems with high dimensionality, large collectives and complex environments. The results verify that, with proper training, this method is effective in complex multi-agent environments.
Abstract: AGV dispatching, one of the hot problems in FMS, has attracted widespread interest in recent years. It is hard to dynamically schedule AGVs with pre-designed rules because of the uncertainty and dynamic nature of the AGV dispatching process, so the AGV system in this paper is treated as a cooperative learning multi-agent system in which each agent adopts a two-level decision method consisting of an option level and an action level. On the option level, an agent learns a policy to execute a subtask with the best response to the other AGVs' current options. On the action level, an agent learns an optimal policy of actions for achieving its planned option. The method is applied to an AGV dispatching simulation, and the performance of the AGV system based on this method is verified.
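A compact sketch of the two-level decision scheme: each AGV agent keeps an option-level Q table over subtasks, conditioned on the other AGVs' current options, and an action-level Q table for carrying out the chosen option. The tabular form, the epsilon-greedy rule and the update shown are illustrative assumptions rather than the paper's implementation.

```python
import random
from collections import defaultdict

class TwoLevelAGVAgent:
    """Option-level choice of subtask plus action-level execution of that subtask."""
    def __init__(self, options, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q_option = defaultdict(float)   # key: (state, others_options, option)
        self.q_action = defaultdict(float)   # key: (option, state, action)
        self.options, self.actions = options, actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def _greedy(self, table, prefix, choices):
        if random.random() < self.eps:
            return random.choice(choices)
        return max(choices, key=lambda c: table[prefix + (c,)])

    def choose_option(self, state, others_options):
        # option level: best response to the other AGVs' current options
        return self._greedy(self.q_option, (state, others_options), self.options)

    def choose_action(self, option, state):
        # action level: carry out the planned option
        return self._greedy(self.q_action, (option, state), self.actions)

    def update_action(self, option, s, a, r, s_next):
        best_next = max(self.q_action[(option, s_next, b)] for b in self.actions)
        key = (option, s, a)
        self.q_action[key] += self.alpha * (r + self.gamma * best_next - self.q_action[key])
```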
Funding: supported in part by the National Natural Science Foundation of China (62106053), the Guangxi Natural Science Foundation (2020GXNSFBA159042), the Innovation Project of Guangxi Graduate Education (YCSW2023478), the Guangxi Education Department Program (2021KY0347), and the Doctoral Fund of Guangxi University of Science and Technology (XiaoKe Bo19Z33).
Abstract: The cloud boundary network environment is characterized by a passive defense strategy, discrete defense actions, and delayed defense feedback in the face of network attacks, and it ignores the influence of the external environment on defense decisions, resulting in poor defense effectiveness. Therefore, this paper proposes a cloud boundary network active defense model and decision method based on agent reinforcement learning. It designs the network structure of the agent attack-defense game and depicts the attack-defense game process of the cloud boundary network; constructs the observation space and action space for agent reinforcement learning in a non-complete-information environment and portrays the interaction process between the agent and the environment; and establishes a reward mechanism based on the attack-defense gain to encourage agents to learn more effective defense strategies. The designed active defense decision agent based on deep reinforcement learning can solve the problems of boundary dynamics, interaction lag, and control dispersion in the defense decision process of cloud boundary networks, and it improves the autonomy and continuity of defense decisions.
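To show the structure only, the toy loop below pairs a partial observation with a reward built from defense gain, defense cost and attack gain, which is the kind of gain-based reward mechanism the abstract mentions. Every action name, observation field and number here is a placeholder, not the paper's model.

```python
import random

class CloudBoundaryDefenseEnv:
    """Toy attack-defense interaction with a gain-based reward.

    Action names, observation fields and numbers are placeholders; only the
    structure (partial observation, gain-based reward) follows the abstract.
    """
    DEFENSE_ACTIONS = ["monitor", "migrate_service", "patch", "isolate_node"]

    def step(self, action):
        attack = random.choice(["scan", "exploit", "lateral_move"])   # hidden attacker move
        countered = (action, attack) in {("patch", "exploit"),
                                         ("isolate_node", "lateral_move")}
        defense_gain = 1.0 if countered else 0.0
        defense_cost = 0.0 if action == "monitor" else 0.2
        attack_gain = 0.0 if countered else 0.5
        reward = defense_gain - defense_cost - attack_gain             # attack-defense gain reward
        observation = {"alert_level": attack_gain, "last_action": action}  # non-complete information
        return observation, reward
```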
Abstract: To address the insufficient consideration of agents' emotional factors in current agent pursuit scenarios, a new solution is proposed. First, emotion modeling integrates personality and emotion into the pursuit behavior of a two-agent primitive, making the motion more diverse. Second, game theory guides the selection of decisions. Finally, the trajectory points of the opponent's motion are collected and generalized with Q-learning reinforcement learning to find the optimal pursuit path. A believable motion animation, together with plots of how the agents' emotion, stamina and other factors change, is obtained in the Visual Studio 2012 build environment. The demonstration results show that this solution effectively promotes efficient pursuit between agents.
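A minimal sketch of the final learning step described above: tabular Q-learning over the collected opponent trajectory points, rewarding moves that reach the next recorded point. The grid moves, reward values and episode layout are illustrative assumptions; the emotion and game-theory components are not modeled.

```python
import random
from collections import defaultdict

def learn_pursuit_policy(opponent_trace, moves, episodes=500,
                         alpha=0.2, gamma=0.9, eps=0.1):
    """Tabular Q-learning over collected opponent trajectory points.

    opponent_trace: list of (x, y) cells where the evader was observed.
    moves: dict action -> (dx, dy) grid displacement.
    """
    q = defaultdict(float)
    for _ in range(episodes):
        pos = opponent_trace[0]
        for goal in opponent_trace[1:]:                        # chase along the recorded trace
            if random.random() < eps:
                action = random.choice(list(moves))
            else:
                action = max(moves, key=lambda m: q[(pos, m)])
            nxt = (pos[0] + moves[action][0], pos[1] + moves[action][1])
            reward = 1.0 if nxt == goal else -0.05             # reward for closing in
            best_next = max(q[(nxt, m)] for m in moves)
            q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
            pos = nxt
    return q
```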