Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks involve a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising way to decrease computation overhead and reduce task processing latency. However, the high mobility of vehicles makes it challenging for them to independently make avatar migration decisions based on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design a multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach that uses sequential decision-making models to efficiently represent the relationships among agents. Finally, to motivate terrestrial and non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and reduces the latency of avatar task execution in UAV-assisted vehicular Metaverses by approximately 20%.
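The transformer-based MAPPO idea can be pictured with a small sketch. The block below is a minimal, hypothetical actor that treats the agents' local observations as a token sequence, lets a transformer encoder model inter-agent relationships, and emits one migration decision per vehicle; the dimensions, class name, and three-way action space (local/RSU/UAV) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TransformerMigrationActor(nn.Module):
    """Hypothetical sketch: per-agent avatar-migration logits from a transformer
    over the agents' observation tokens (assumed dims, not the paper's model)."""

    def __init__(self, obs_dim=16, embed_dim=64, n_actions=3, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)              # observation -> token
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)   # inter-agent attention
        self.head = nn.Linear(embed_dim, n_actions)             # local / RSU / UAV logits

    def forward(self, obs):                  # obs: (batch, n_agents, obs_dim)
        tokens = self.embed(obs)
        ctx = self.encoder(tokens)           # each agent attends to the others
        return self.head(ctx)                # (batch, n_agents, n_actions)

# Usage: 8 vehicles, one categorical migration decision each.
actor = TransformerMigrationActor()
logits = actor(torch.randn(2, 8, 16))
actions = torch.distributions.Categorical(logits=logits).sample()  # (2, 8)
```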
To address the low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper draws on deep reinforcement learning (DRL) theory and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise, built on a multi-agent architecture, to improve the efficiency of DTA. The algorithm is based on the traditional Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. It introduces a dual noise mechanism to enlarge the action exploration space in the early stage of training and a dual experience pool to improve data utilization. At the same time, to accelerate agent training and resolve the cold-start problem, prior knowledge is incorporated into the training of the algorithm. Finally, the MADDPG-D2 algorithm is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, use resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The MADDPG-D2 algorithm based on a multi-agent architecture therefore shows clear advantages and is well suited for DTA.
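A minimal sketch of the two mechanisms the abstract highlights: a dual experience pool that stores ordinary and high-reward transitions separately and samples from both, and dual exploration noise that mixes two noise sources and decays over training. The buffer sizes, reward threshold, mixing ratio, and noise schedule are illustrative assumptions rather than the paper's design.

```python
import random
import numpy as np

class DualReplayPool:
    """Sketch of a dual experience pool: ordinary and high-reward transitions are kept
    in separate buffers and a minibatch is drawn from both (threshold/ratio assumed)."""

    def __init__(self, capacity=50_000, reward_threshold=1.0, good_ratio=0.3):
        self.ordinary, self.good = [], []
        self.capacity, self.threshold, self.good_ratio = capacity, reward_threshold, good_ratio

    def add(self, transition, reward):
        pool = self.good if reward >= self.threshold else self.ordinary
        pool.append(transition)
        if len(pool) > self.capacity:
            pool.pop(0)

    def sample(self, batch_size):
        n_good = min(int(batch_size * self.good_ratio), len(self.good))
        batch = random.sample(self.good, n_good) if n_good else []
        batch += random.sample(self.ordinary, min(batch_size - n_good, len(self.ordinary)))
        return batch

def dual_noise_action(det_action, step, total_steps, sigma_gauss=0.2, sigma_uniform=0.3):
    """Sketch of dual exploration noise: a Gaussian term plus a uniform term, both
    decayed over training so that early actions explore a wider region."""
    decay = max(0.0, 1.0 - step / total_steps)
    noise = decay * (np.random.normal(0.0, sigma_gauss, det_action.shape)
                     + np.random.uniform(-sigma_uniform, sigma_uniform, det_action.shape))
    return np.clip(det_action + noise, -1.0, 1.0)
```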
To handle the uncertain information about the injured in post-disaster situations, and considering constraints such as chance constraints and the limited quantity of rescue resources, the split delivery vehicle routing problem with stochastic demands (SDVRPSD) model and the multi-depot split delivery heterogeneous vehicle routing problem with stochastic demands (MDSDHVRPSD) model are established. A two-stage hybrid variable neighborhood tabu search algorithm is designed for unmanned-vehicle task planning to minimize the path cost of rescue plans. Simulation experiments show that the solutions obtained by the algorithm effectively reduce the rescue vehicle path cost and the rescue task completion time, with high optimization quality and good portability.
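The improvement stage of such a method can be pictured as a standard tabu search over route solutions. The sketch below is generic: the relocate move, candidate count, tenure, and cost callback are assumptions standing in for the paper's variable-neighborhood operators, not its actual algorithm.

```python
import random

def tabu_search(initial_routes, cost_fn, n_iters=200, tabu_tenure=15):
    """Generic tabu-search skeleton for vehicle-routing improvement (illustrative only).

    initial_routes: list of routes, each a list of customer ids (at least two routes).
    cost_fn: callable mapping a routes solution to its path cost.
    """
    def neighbors(routes):
        # Simple relocate move: move one customer into another route.
        for _ in range(30):
            r = [list(route) for route in routes]
            src, dst = random.sample(range(len(r)), 2)
            if not r[src]:
                continue
            cust = r[src].pop(random.randrange(len(r[src])))
            r[dst].insert(random.randrange(len(r[dst]) + 1), cust)
            yield r, ("relocate", cust, dst)

    best = current = initial_routes
    best_cost = cost_fn(best)
    tabu = {}                                      # move attribute -> expiry iteration
    for it in range(n_iters):
        candidates = sorted(((cost_fn(r), r, m) for r, m in neighbors(current)),
                            key=lambda c: c[0])
        for cost, r, move in candidates:
            if tabu.get(move, -1) < it or cost < best_cost:    # aspiration criterion
                current, tabu[move] = r, it + tabu_tenure
                if cost < best_cost:
                    best, best_cost = r, cost
                break
    return best, best_cost
```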
Efficient exploration in complex coordination tasks has been considered a challenging problem in multi-agent reinforcement learning (MARL). It is significantly more difficult for tasks with latent variables that agents cannot directly observe. However, most existing latent-variable discovery methods lack a clear representation of latent variables and an effective evaluation of the influence of latent variables on each agent. In this paper, we propose a new MARL algorithm, based on the soft actor-critic method, for complex continuous control tasks with confounders. The algorithm, called multi-agent soft actor-critic with latent variables (MASAC-LV), uses variational inference to infer a compact latent-variable representation space from a large amount of offline experience. In addition, we derive a counterfactual policy whose input contains no latent variables and quantify the difference between the actual policy and the counterfactual policy via a distance function. This quantified difference serves as an intrinsic motivation that grants additional rewards based on how much the latent variable affects each agent. The proposed algorithm is evaluated on two collaboration tasks with confounders, and the experimental results demonstrate the effectiveness of MASAC-LV compared with other baseline algorithms.
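The counterfactual intrinsic reward described above can be sketched as a divergence between the policy conditioned on the inferred latent variable and the same policy with the latent input zeroed out. The Gaussian policy form, the zeroed counterfactual input, and the KL-based distance are assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class LatentConditionedActor(nn.Module):
    """Gaussian policy over actions, conditioned on (observation, latent). Sketch only."""

    def __init__(self, obs_dim=10, latent_dim=4, act_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * act_dim))

    def dist(self, obs, latent):
        mean, log_std = self.net(torch.cat([obs, latent], dim=-1)).chunk(2, dim=-1)
        return D.Normal(mean, log_std.clamp(-5, 2).exp())

def latent_intrinsic_reward(actor, obs, latent, scale=0.1):
    """Intrinsic bonus = scaled KL between the actual policy and a counterfactual
    policy whose latent input is zeroed (the distance function is an assumption)."""
    actual = actor.dist(obs, latent)
    counterfactual = actor.dist(obs, torch.zeros_like(latent))
    kl = D.kl_divergence(actual, counterfactual).sum(-1)
    return scale * kl            # larger when the latent variable shapes the policy more

# Usage with a dummy batch of 32 transitions
actor = LatentConditionedActor()
obs, z = torch.randn(32, 10), torch.randn(32, 4)
bonus = latent_intrinsic_reward(actor, obs, z)   # shape (32,)
```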
This research is the first application of Unmanned Aerial Vehicles (UAVs) equipped with Multi-access Edge Computing (MEC) servers to offshore wind farms, providing a new task offloading solution to the scarcity of edge servers in such settings. The proposed strategy offloads the computational tasks in this scenario to other MEC servers and computes them proportionally, which effectively reduces the computational pressure on local MEC servers when wind turbine data are abnormal. The task offloading problem is then modeled as a multi-agent deep reinforcement learning problem, and a task offloading model based on Multi-Agent Deep Reinforcement Learning (MADRL) is established. An Adaptive Genetic Algorithm (AGA) is used to explore the action space of the Deep Deterministic Policy Gradient (DDPG) algorithm, which effectively addresses the slow convergence of DDPG in high-dimensional action spaces. The simulation results show that the proposed algorithm, AGA-DDPG, saves approximately 61.8%, 55%, 21%, and 33% of the overall overhead compared with local MEC execution, random offloading, TD3, and DDPG, respectively. The proposed strategy is potentially important for improving real-time monitoring, big data analysis, and predictive maintenance in offshore wind farm operation and maintenance systems.
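The role of a genetic search inside AGA-DDPG, exploring the continuous action space around the actor's proposal, can be sketched as below. The population size, critic-based fitness, and adaptive mutation rule are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def aga_explore(actor_action, critic_q, pop_size=20, generations=10, act_low=-1.0, act_high=1.0):
    """Sketch: evolve a small population of candidate offloading actions around the
    DDPG actor's output and return the fittest one (fitness = critic Q-value).

    actor_action: np.ndarray, the deterministic action proposed by the actor.
    critic_q: callable mapping an action to its estimated Q-value.
    """
    dim = actor_action.shape[0]
    pop = np.clip(actor_action + 0.1 * np.random.randn(pop_size, dim), act_low, act_high)
    for _ in range(generations):
        fitness = np.array([critic_q(a) for a in pop])
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]      # selection
        children = []
        for _ in range(pop_size - len(parents)):
            p1, p2 = parents[np.random.randint(len(parents), size=2)]
            alpha = np.random.rand(dim)
            child = alpha * p1 + (1 - alpha) * p2                      # arithmetic crossover
            # Adaptive mutation: mutate more strongly when the population has converged.
            spread = pop.std(axis=0).mean()
            child += np.random.randn(dim) * max(0.02, 0.2 - spread)
            children.append(np.clip(child, act_low, act_high))
        pop = np.vstack([parents, np.array(children)])
    fitness = np.array([critic_q(a) for a in pop])
    return pop[int(np.argmax(fitness))]

# Usage with a toy critic that prefers actions near 0.3
best = aga_explore(np.zeros(4), lambda a: -np.sum((a - 0.3) ** 2))
```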
Vehicular edge computing (VEC) is emerging as a promising paradigm to meet the requirements of compute-intensive applications in the internet of vehicles (IoV). Non-orthogonal multiple access (NOMA) has advantages in improving spectrum efficiency and dealing with bandwidth scarcity and cost, so combining VEC and NOMA is an encouraging direction. In this paper, we jointly optimize the task offloading decision and resource allocation to maximize the service utility of the NOMA-VEC system. To solve the optimization problem, we propose a multi-agent deep graph reinforcement learning algorithm. The algorithm extracts topological features and relationship information among agents from the system state as observations, and outputs the task offloading decision and resource allocation simultaneously with a local policy network, which is updated by a local learner. Simulation results demonstrate that the proposed method achieves a 1.52% to 5.80% improvement in system service utility over the benchmark algorithms.
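One way to picture the graph-based observation encoding is a single graph-convolution step over the vehicle adjacency, as sketched below; the normalized-adjacency propagation rule and the dimensions are common GCN conventions used for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GraphObsEncoder(nn.Module):
    """Sketch: one GCN-style layer H' = ReLU(A_hat @ H @ W) that mixes each agent's
    features with its neighbors' before the policy network (dims are assumptions)."""

    def __init__(self, feat_dim=8, out_dim=32):
        super().__init__()
        self.lin = nn.Linear(feat_dim, out_dim, bias=False)

    def forward(self, feats, adj):
        # feats: (n_agents, feat_dim); adj: (n_agents, n_agents) with 0/1 links.
        a_hat = adj + torch.eye(adj.size(0))                # add self-loops
        deg_inv_sqrt = a_hat.sum(-1).clamp(min=1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.lin(feats))         # relational observation

# Usage: 5 vehicles on a small communication graph
enc = GraphObsEncoder()
adj = (torch.rand(5, 5) > 0.5).float()
obs = enc(torch.randn(5, 8), adj)                           # (5, 32)
```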
With the advancement of technology and the continuous innovation of applications, low-latency applications such as drones, online games, and virtual reality are becoming popular demands in modern society. However, these applications pose a great challenge to the traditional centralized mobile cloud computing paradigm, which already struggles to meet such demands. To address the shortcomings of cloud computing, mobile edge computing (MEC) has emerged. Mobile edge computing provides users with computing and storage resources by offloading computing tasks to servers at the edge of the network. However, most existing work considers only single-objective performance optimization of latency or energy consumption, rather than a balanced optimization of both. To reduce task latency and device energy consumption, this paper investigates the joint optimization of computation offloading and resource allocation in multi-cell, multi-user, multi-server MEC environments. A dynamic computation offloading algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is proposed to obtain the optimal policy. The experimental results show that the proposed algorithm reduces the delay by 5 ms compared with PPO, 1.5 ms compared with DDPG, and 10.7 ms compared with DQN, and reduces the energy consumption by 300 compared with PPO, 760 compared with DDPG, and 380 compared with DQN. These results demonstrate the strong performance of the proposed algorithm.
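The balanced latency-energy objective can be made concrete with a small cost function of the kind commonly used in MEC offloading models. The weights and the simple transmission/computation formulas below are assumptions for illustration, not the paper's formulation.

```python
def offloading_cost(task_bits, cpu_cycles, offload_ratio,
                    local_freq=1e9, edge_freq=5e9, uplink_rate=10e6,
                    tx_power=0.5, local_kappa=1e-27, w_delay=0.6, w_energy=0.4):
    """Sketch: weighted sum of delay and device energy for partially offloading a task.
    All parameter values are illustrative assumptions."""
    local_cycles = cpu_cycles * (1 - offload_ratio)
    edge_bits = task_bits * offload_ratio
    # Local computing delay and energy (kappa * f^2 per cycle is a common model).
    t_local = local_cycles / local_freq
    e_local = local_kappa * local_freq ** 2 * local_cycles
    # Offloaded part: uplink transmission, then edge computing.
    t_tx = edge_bits / uplink_rate
    t_edge = cpu_cycles * offload_ratio / edge_freq
    e_tx = tx_power * t_tx
    delay = max(t_local, t_tx + t_edge)       # local and offloaded parts run in parallel
    energy = e_local + e_tx
    return w_delay * delay + w_energy * energy   # an RL reward can be the negative of this

# Example: 1 Mbit task, 1e9 CPU cycles, 70% offloaded
print(offloading_cost(1e6, 1e9, 0.7))
```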
This paper studies the problem of time-varying formation control with finite-time prescribed performance for nonstrict-feedback second-order multi-agent systems with unmeasured states and unknown nonlinearities. To compensate for the nonlinearities, neural networks are applied to approximate the inherent dynamics of the system. In addition, due to the limitations of actual working conditions, each follower agent can only obtain the locally measurable partial state information of the leader agent. To address this problem, a neural network state observer based on the leader's state information is designed. Then, a finite-time prescribed performance adaptive output feedback control strategy is proposed by restricting the sliding mode surface to a prescribed region, which ensures that the closed-loop system has practical finite-time stability and that the formation errors of the multi-agent systems converge to the prescribed performance bound in finite time. Finally, a numerical simulation is provided to demonstrate the practicality and effectiveness of the developed algorithm.
This article investigates the problem of robust adaptive leaderless consensus for heterogeneous uncertain nonminimum-phase linear multi-agent systems over directed communication graphs. Each agent is assumed to have unknown nominal dynamics and to be subject to external disturbances and/or unmodeled dynamics. A novel distributed robust adaptive control strategy is proposed. It is shown that the robust adaptive leaderless consensus problem is solved with the proposed control strategy under some sufficient conditions. Two examples are provided to demonstrate the efficacy of the proposed control strategy.
Task allocation is a key issue in the agent cooperation mechanism of Multi-Agent Systems. Important features of an agent system, such as the latency of the network infrastructure, dynamic topology, and node heterogeneity, impose new challenges on task allocation in Multi-Agent environments. Based on the traditional parallel-computing task allocation method and Ant Colony Optimization (ACO), a novel task allocation method named Collection Path Ant Colony Optimization (CPACO) is proposed to achieve global optimization and reduce processing time. The existing problems of ACO are analyzed; CPACO overcomes them by modifying the heuristic function and the update strategy in the Ant-Cycle model and by establishing a three-dimensional path-pheromone storage space. The experimental results show that CPACO consumed only 10.3% of the time taken by the Global Search Algorithm and exhibited better performance than the Forward Optimal Heuristic Algorithm.
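The three-dimensional path-pheromone idea can be pictured as a pheromone tensor indexed by (agent, previous task, next task), updated with an Ant-Cycle-style deposit after full allocations are built. The tensor layout, the round-robin construction, and all constants below are assumptions made only to illustrate the mechanics, not CPACO itself.

```python
import numpy as np

def aco_allocation_sketch(cost, n_ants=20, n_iters=50, rho=0.1, q_deposit=1.0,
                          alpha=1.0, beta=2.0):
    """Illustrative ACO task-allocation sketch with a 3-D pheromone array
    tau[agent, prev_task, next_task] and an Ant-Cycle style global update.

    cost[a, t]: cost of assigning task t to agent a; tasks are handed out round-robin.
    """
    n_agents, n_tasks = cost.shape
    tau = np.ones((n_agents, n_tasks, n_tasks))          # path pheromone per agent
    eta = 1.0 / (cost + 1e-9)                            # heuristic desirability
    best_assign, best_len = None, np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            order, prev = [], np.zeros(n_agents, dtype=int)
            unvisited, length = list(range(n_tasks)), 0.0
            while unvisited:
                a = len(order) % n_agents                # next agent to receive a task
                w = (tau[a, prev[a], unvisited] ** alpha) * (eta[a, unvisited] ** beta)
                t = unvisited[np.random.choice(len(unvisited), p=w / w.sum())]
                order.append((a, prev[a], t))
                length += cost[a, t]
                prev[a] = t
                unvisited.remove(t)
            if length < best_len:
                best_assign, best_len = order, length
        tau *= (1 - rho)                                  # evaporation
        for a, i, j in best_assign:                       # deposit on the best allocation
            tau[a, i, j] += q_deposit / best_len
    return best_assign, best_len

# Usage: 3 agents, 6 tasks with random costs
assign, total = aco_allocation_sketch(np.random.rand(3, 6) + 0.1)
```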
Large-scale indoor 3D reconstruction with multiple robots faces challenges in several core enabling technologies. This work contributes a framework addressing localization, coordination, and vision processing for multi-agent reconstruction. A system architecture is proposed that fuses visible light positioning, multi-agent path finding via reinforcement learning, and 360° camera techniques for 3D reconstruction. Our visible light positioning algorithm leverages existing lighting for centimeter-level localization without additional infrastructure. Meanwhile, a decentralized reinforcement learning approach is developed to solve the multi-agent path finding problem, with communication among agents optimized. Our 3D reconstruction pipeline utilizes equirectangular projection from 360° cameras to facilitate depth-independent reconstruction from posed monocular images using neural networks. Experimental validation demonstrates the centimeter-level indoor navigation and 3D scene reconstruction capabilities of our framework. The challenges and limitations stemming from the above enabling technologies are discussed at the end of each corresponding section. In summary, this research advances fundamental techniques for multi-robot indoor 3D modeling, contributing to automated, data-driven applications through coordinated robot navigation, perception, and modeling.
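The equirectangular projection step that makes 360° images usable for posed monocular reconstruction can be written as a simple pixel-to-ray mapping. The image size and coordinate conventions below are assumptions; the paper's pipeline may differ.

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width=2048, height=1024):
    """Sketch: map an equirectangular pixel (u, v) to a unit viewing ray in the camera
    frame. Longitude spans [-pi, pi), latitude [-pi/2, pi/2] (conventions assumed)."""
    lon = (u / width) * 2.0 * np.pi - np.pi           # azimuth
    lat = np.pi / 2.0 - (v / height) * np.pi          # elevation
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

# One ray per pixel for a whole panorama (useful for back-projecting predicted depth).
us, vs = np.meshgrid(np.arange(2048), np.arange(1024))
rays = equirect_pixel_to_ray(us, vs)                  # (1024, 2048, 3), unit norm
```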
Multi-agent reinforcement learning (MARL) has been a rapidly evolving field. This paper presents a comprehensive survey of MARL and its applications. We trace the historical evolution of MARL, highlight its progress, and discuss related survey works. Then, we review existing works addressing inherent challenges and those focusing on diverse applications. Some representative stochastic games, MARL methods, spatial forms of MARL, and task classifications are revisited. We then conduct an in-depth exploration of the variety of challenges encountered in MARL applications. We also address critical operational aspects, such as hyperparameter tuning and computational complexity, which are pivotal in practical implementations of MARL. Afterward, we give a thorough overview of the applications of MARL to intelligent machines and devices, chemical engineering, biotechnology, healthcare, and societal issues, which highlights the extensive potential and relevance of MARL within both current and future technological contexts. Our survey also encompasses a detailed examination of the benchmark environments used in MARL research, which are instrumental in evaluating MARL algorithms and demonstrate the adaptability of MARL to diverse application scenarios. In the end, we give our prospects for MARL and discuss related techniques and potential future applications.
The emergence of beyond-5G networks brings the potential for seamless and intelligent connectivity on a global scale. Network slicing is crucial for delivering services to different, demanding vertical applications in this context. Next-generation applications have time-sensitive requirements and depend on the most efficient routing path to ensure packets reach their intended destinations. However, existing IP (Internet Protocol) routing over a multi-domain network faces challenges in enforcing network slicing due to minimal collaboration and information sharing among network operators. Conventional inter-domain routing methods, like the Border Gateway Protocol (BGP), cannot make routing decisions based on performance, which frequently results in traffic flowing across congested paths that are far from optimal. To address these issues, we propose CoopAI-Route, a multi-agent cooperative deep reinforcement learning (DRL) system utilizing hierarchical software-defined networks (SDN). This framework enforces network slicing in multi-domain networks and cooperates with the various administrators to find performance-based routes both intra- and inter-domain. CoopAI-Route employs the Distributed Global Topology (DGT) algorithm to define inter-domain Quality of Service (QoS) paths, and uses a DRL agent with a message-passing multi-agent Twin-Delayed Deep Deterministic Policy Gradient method to ensure optimal end-to-end routes adapted to the specific requirements of network-slicing applications. Our evaluation demonstrates CoopAI-Route's commendable performance in scalability, link-failure handling, and adaptability to evolving topologies compared with state-of-the-art methods.
In the evolutionary game of groups performing the same task, changes in game rules, personal interests, crowd size, and external supervision have uncertain effects on individual decision-making and game results. Within the Markov decision framework, a single-task multi-decision evolutionary game model based on multi-agent reinforcement learning is proposed to explore the evolutionary rules of the game process. The model can improve the result of an evolutionary game and facilitate the completion of the task. First, based on multi-agent theory and to solve the existing problems in the original model, a negative-feedback tax penalty mechanism is proposed to guide the strategy selection of individuals in the group. In addition, to evaluate the evolutionary game results of the group in the model, a calculation method for the group intelligence level is defined. Second, the Q-learning algorithm is used to improve the guiding effect of the negative-feedback tax penalty mechanism. In the model, the selection strategy of the Q-learning algorithm is improved and a bounded-rationality evolutionary game strategy is proposed based on the rules of evolutionary games and the bounded rationality of individuals. Finally, simulation results show that the proposed model can effectively guide individuals to choose cooperation strategies that are beneficial to task completion and stability under different negative feedback factor values and different group sizes, thereby improving the group intelligence level.
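A toy version of Q-learning combined with a negative-feedback tax penalty might look like the sketch below: defecting agents pay a tax that grows with the current defection rate, and each agent updates a stateless Q-table on the taxed payoff. The payoff values, tax rule, and epsilon-greedy selection are illustrative assumptions, not the paper's model.

```python
import numpy as np

COOPERATE, DEFECT = 0, 1

def taxed_payoff(action, base_payoff, defect_rate, tax_factor=0.8):
    """Negative-feedback tax sketch: defectors are taxed more as defection spreads."""
    if action == DEFECT:
        return base_payoff - tax_factor * defect_rate * base_payoff
    return base_payoff

def run_group_game(n_agents=50, n_rounds=500, alpha=0.1, gamma=0.9, eps=0.1):
    q = np.zeros((n_agents, 2))                     # stateless Q-table per agent
    base = {COOPERATE: 3.0, DEFECT: 5.0}            # assumed one-shot payoffs
    for _ in range(n_rounds):
        greedy = q.argmax(axis=1)
        explore = np.random.rand(n_agents) < eps
        actions = np.where(explore, np.random.randint(0, 2, n_agents), greedy)
        defect_rate = actions.mean()
        for i, a in enumerate(actions):
            r = taxed_payoff(a, base[int(a)], defect_rate)
            q[i, a] += alpha * (r + gamma * q[i].max() - q[i, a])
    return (q.argmax(axis=1) == COOPERATE).mean()   # fraction ending up cooperative

print("cooperation rate:", run_group_game())
```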
This paper examines the bipartite consensus problems for nonlinear multi-agent systems in Lurie dynamics form with cooperative and competitive communication between different agents. Based on contraction theory, some new conditions are presented for the nonlinear Lurie multi-agent systems to reach bipartite leaderless consensus and bipartite tracking consensus. Compared with traditional methods, this approach reduces the dimensions of the conditions, eliminates some restrictions on the system matrix, and extends the admissible range of the nonlinear function. Finally, two numerical examples are provided to illustrate the efficiency of our results.
This paper investigates the problem of global/semi-global finite-time consensus for integrator-type multi-agent systems. New hyperbolic tangent function-based protocols are proposed to achieve global and semi-global finite-time consensus for both single-integrator and double-integrator multi-agent systems with leaderless undirected and leader-following directed communication topologies. These new protocols not only provide an explicit upper-bound estimate for the settling time, but also have a user-prescribed bounded control level. In addition, compared with some existing results based on the saturation function, the proposed approach considerably simplifies the protocol design and the stability analysis. Illustrative examples and an application demonstrate the effectiveness of the proposed protocols.
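To make the bounded, tanh-based protocol idea concrete, the sketch below simulates single-integrator agents under a control law of the generic form u_i = -k*tanh(sum_j a_ij (x_i - x_j)), which keeps every |u_i| below k by construction. The specific protocol shape, gains, and graph are illustrative assumptions, not the paper's exact finite-time design.

```python
import numpy as np

def simulate_tanh_consensus(adj, x0, k=2.0, dt=0.01, steps=3000):
    """Sketch: single-integrator agents x_i_dot = u_i with the bounded protocol
    u_i = -k * tanh(sum_j a_ij (x_i - x_j)). Gains and graph are assumptions."""
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(steps):
        disagreement = adj * (x[:, None] - x[None, :])    # a_ij * (x_i - x_j)
        u = -k * np.tanh(disagreement.sum(axis=1))        # every |u_i| <= k
        x = x + dt * u
        history.append(x.copy())
    return np.array(history)

# Undirected ring of 4 agents
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
traj = simulate_tanh_consensus(adj, x0=[4.0, -2.0, 1.0, 0.5])
print("final spread:", traj[-1].max() - traj[-1].min())   # shrinks toward zero
```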
This paper is concerned with consensus of a second-order linear time-invariant multi-agent system in the situation where there is a communication delay among the agents in the network. A proportional-integral consensus protocol is designed by using delayed and memorized state information. Under the proportional-integral consensus protocol, the consensus problem of the multi-agent system is transformed into the problem of asymptotic stability of the corresponding linear time-invariant time-delay system. Note that the location of the eigenvalues of the corresponding characteristic function of the linear time-invariant time-delay system not only determines the stability of the system but also plays a critical role in its dynamic performance. In this paper, based on recent results on the distribution of roots of quasi-polynomials, several necessary conditions for Hurwitz stability of a class of quasi-polynomials are first derived. Then, allowable regions of the consensus protocol parameters are estimated, and some necessary and sufficient conditions for determining effective protocol parameters are provided. The designed protocol can achieve consensus and improve the dynamic performance of the second-order multi-agent system. Moreover, the effects of delays on consensus of systems of harmonic oscillators/double integrators under proportional-integral consensus protocols are investigated. Furthermore, some results on proportional-integral consensus are derived for a class of high-order linear time-invariant multi-agent systems.
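The flavor of a PI-type protocol driven by delayed and memorized (integrated) state information can be shown with a simple discrete-time simulation of double-integrator agents. The protocol form below, its gains, the delay value, and the graph are assumptions chosen only to show the structure, not the protocol analyzed in the paper.

```python
import numpy as np

def simulate_pi_consensus(adj, x0, v0, kp=1.0, ki=0.2, kv=2.0, tau=0.1, dt=0.005, steps=4000):
    """Sketch: double-integrator agents under a PI-type protocol using delayed relative
    states, u = -kp*L x(t-tau) - kv*L v(t-tau) - ki*integral(L x). All values assumed."""
    lap = np.diag(adj.sum(axis=1)) - adj                 # graph Laplacian
    delay_steps = int(round(tau / dt))
    x, v = np.array(x0, float), np.array(v0, float)
    x_hist, v_hist = [x.copy()] * (delay_steps + 1), [v.copy()] * (delay_steps + 1)
    integral = np.zeros_like(x)
    for _ in range(steps):
        x_d = x_hist[-delay_steps - 1]                   # delayed position information
        v_d = v_hist[-delay_steps - 1]                   # delayed velocity information
        integral += dt * (lap @ x)                       # memorized (integral) term
        u = -kp * (lap @ x_d) - kv * (lap @ v_d) - ki * integral
        x, v = x + dt * v, v + dt * u
        x_hist.append(x.copy())
        v_hist.append(v.copy())
    return x

adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
xf = simulate_pi_consensus(adj, x0=[3.0, -1.0, 0.5, 2.0], v0=[0.0, 0.0, 0.0, 0.0])
print("position spread after 20 s:", xf.max() - xf.min())
```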
Constructing a cross-border power energy system with multiple power-energy agents forming an alliance is important for studying cross-border power-trading markets. This study considers multiple neighboring countries in the form of alliances, introduces the neighboring countries' exchange rates into the cross-border multi-agent power-trading market, and proposes a method to study each agent's dynamic decision-making behavior based on evolutionary game theory. To this end, the study takes three national agents as an example, constructs a tripartite evolutionary game model, and analyzes the evolution of the decision-making behavior of each member state under different initial willingness values, costs of payment, and additional revenues of the alliance. This research helps realize cross-border energy operations so that trading agents can achieve greater trade profits, and provides a theoretical basis for cooperation and stability among multiple agents.
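The tripartite evolutionary game can be illustrated with replicator dynamics over three populations, one per country agent, each choosing between joining the alliance and acting alone. The payoff construction with an individual cost, an alliance bonus scaled by cooperating partners, and an exchange-rate factor is an assumption made only to show the mechanics, not the paper's payoff matrix.

```python
import numpy as np

def replicator_step(probs, cost, bonus, fx_rate, dt=0.01):
    """One replicator-dynamics step for three agents, each with probability probs[i]
    of choosing 'cooperate (join alliance)'. The payoff model is an assumption:
    cooperation earns a bonus scaled by the expected number of cooperating partners
    and by an exchange-rate factor, minus an individual cost."""
    probs = np.asarray(probs, float)
    new = probs.copy()
    for i in range(3):
        others = [j for j in range(3) if j != i]
        expected_partners = probs[others].sum()
        payoff_coop = fx_rate[i] * bonus * expected_partners - cost[i]
        payoff_defect = 0.0
        avg = probs[i] * payoff_coop + (1 - probs[i]) * payoff_defect
        new[i] += dt * probs[i] * (payoff_coop - avg)          # replicator equation
    return np.clip(new, 0.0, 1.0)

# Evolve from initial willingness values (0.3, 0.5, 0.7)
probs = np.array([0.3, 0.5, 0.7])
for _ in range(5000):
    probs = replicator_step(probs, cost=[0.4, 0.5, 0.3], bonus=1.0, fx_rate=[1.0, 0.9, 1.1])
print("long-run cooperation willingness:", probs.round(3))
```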