Journal Articles
716 articles found
Task assignment in ground-to-air confrontation based on multiagent deep reinforcement learning (Cited by 2)
1
Authors: Jia-yi Liu, Gang Wang, Qiang Fu, Shao-hua Yue, Si-yuan Wang. 《Defence Technology(防务技术)》 SCIE EI CAS CSCD, 2023, Issue 1, pp. 210-219 (10 pages)
The scale of ground-to-air confrontation task assignment is large, and many concurrent task assignments and random events must be handled. When existing task assignment methods are applied to ground-to-air confrontation, efficiency on complex tasks is low and interaction conflicts arise in multiagent systems. This study proposes a multiagent architecture based on one general agent with multiple narrow agents (OGMN) to reduce task assignment conflicts. Considering the slow speed of traditional dynamic task assignment algorithms, this paper proposes the proximal policy optimization for task assignment of general and narrow agents (PPO-TAGNA) algorithm. Based on the idea of the optimal assignment strategy algorithm and combined with the training framework of deep reinforcement learning (DRL), the algorithm adds a multihead attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to solve the problem of low training efficiency. Finally, simulation experiments are carried out on the digital battlefield. The multiagent architecture based on OGMN combined with the PPO-TAGNA algorithm obtains higher rewards faster and has a higher win ratio. By analyzing agent behavior, the efficiency, superiority and rationality of resource utilization of this method are verified.
Keywords: Ground-to-air confrontation; Task assignment; General and narrow agents; Deep reinforcement learning; Proximal policy optimization (PPO)
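A minimal sketch of a dual-clipped PPO policy loss, under the assumption that the "bilateral band clipping" named in the abstract refers to this mechanism; the constants `eps` and `dual_clip` and the tensor names are illustrative, not the paper's values:

```python
import torch

def dual_clip_ppo_loss(log_prob_new, log_prob_old, advantage, eps=0.2, dual_clip=3.0):
    """Dual-clipped PPO policy loss (a sketch; eps and dual_clip are illustrative).

    Standard PPO clips the probability ratio to [1-eps, 1+eps]; the second clip
    additionally bounds the loss from below when the advantage is negative,
    which stabilizes training when the ratio becomes very large.
    """
    ratio = torch.exp(log_prob_new - log_prob_old)            # pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    surrogate = torch.min(ratio * advantage, clipped * advantage)
    # Lower-bound the negative-advantage surrogate by dual_clip * advantage.
    dual = torch.max(surrogate, dual_clip * advantage)
    return -torch.where(advantage < 0, dual, surrogate).mean()
```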
Multi-Agent Deep Reinforcement Learning for Cross-Layer Scheduling in Mobile Ad-Hoc Networks
2
Authors: Xinxing Zheng, Yu Zhao, Joohyun Lee, Wei Chen. 《China Communications》 SCIE CSCD, 2023, Issue 8, pp. 78-88 (11 pages)
Due to the fading characteristics of wireless channels and the burstiness of data traffic, how to deal with congestion in Ad-hoc networks with effective algorithms is still open and challenging. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To effectively solve the congestion problem, we propose a distributed cross-layer scheduling algorithm, which is empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low due to the regional cooperation based on the graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flow under severe signal interference and drastically changing channel states, and demonstrate the adaptability and stability in different topologies. The method is general and can be extended to various types of topologies.
Keywords: Ad-hoc network; cross-layer scheduling; multi-agent deep reinforcement learning; interference elimination; power control; queue scheduling; actor-critic methods; Markov decision process
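A minimal sketch of the kind of graph-attention aggregation such a distributed scheduler could use to combine a node's local observation with its neighbors'; the feature layout ([channel state, queue length]), layer sizes, and scoring MLP are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborAttention(nn.Module):
    """One graph-attention aggregation step over a node's neighbors (a sketch)."""
    def __init__(self, in_dim=2, out_dim=16):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.score = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x_self, x_neighbors):
        # x_self: (in_dim,) local observation; x_neighbors: (N, in_dim) exchanged locally.
        h_self = self.proj(x_self)                       # (out_dim,)
        h_nbrs = self.proj(x_neighbors)                  # (N, out_dim)
        pairs = torch.cat([h_self.expand_as(h_nbrs), h_nbrs], dim=-1)
        alpha = F.softmax(F.leaky_relu(self.score(pairs)).squeeze(-1), dim=0)
        # Weighted sum of neighbor embeddings, added to the node's own embedding.
        return h_self + (alpha.unsqueeze(-1) * h_nbrs).sum(dim=0)
```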
Exploring Local Chemical Space in De Novo Molecular Generation Using Multi-Agent Deep Reinforcement Learning (Cited by 2)
3
Author: Wei Hu. 《Natural Science》, 2021, Issue 9, pp. 412-424 (13 pages)
Single-agent reinforcement learning (RL) is commonly used to learn how to play computer games, in which the agent makes one move before making the next in a sequential decision process. Recently, a single agent was also employed in the design of molecules and drugs. While a single agent is a good fit for computer games, it has limitations when used in molecule design. Its sequential learning makes it impossible to modify or improve the previous steps while working on the current step. In this paper, we propose to apply the multi-agent RL approach to the research of molecules, which can optimize all sites of a molecule simultaneously. To elucidate the validity of our approach, we chose one chemical compound, Favipiravir, to explore its local chemical space. Favipiravir is a broad-spectrum inhibitor of viral RNA polymerase and is one of the compounds currently being used in SARS-CoV-2 (COVID-19) clinical trials. Our experiments revealed the collaborative learning of a team of deep RL agents as well as the learning of its individual learning agent in the exploration of Favipiravir. In particular, our multi-agents not only discovered the molecules near Favipiravir in chemical space, but also the learnability of each site in the string representation of Favipiravir, critical information for us to understand the underlying mechanism that supports machine learning of molecules.
Keywords: multi-agent reinforcement learning; Actor-Critic; molecule design; SARS-CoV-2; COVID-19
Exploring Deep Reinforcement Learning with Multi Q-Learning (Cited by 25)
4
Authors: Ethan Duryea, Michael Ganger, Wei Hu. 《Intelligent Control and Automation》, 2016, Issue 4, pp. 129-144 (16 pages)
Q-learning is a popular temporal-difference reinforcement learning algorithm which often explicitly stores state values using lookup tables. This implementation has been proven to converge to the optimal solution, but it is often beneficial to use a function-approximation system, such as deep neural networks, to estimate state values. It has been previously observed that Q-learning can be unstable when using value function approximation or when operating in a stochastic environment. This instability can adversely affect the algorithm's ability to maximize its returns. In this paper, we present a new algorithm called Multi Q-learning to attempt to overcome the instability seen in Q-learning. We test our algorithm on a 4 × 4 grid-world with different stochastic reward functions using various deep neural networks and convolutional networks. Our results show that in most cases, Multi Q-learning outperforms Q-learning, achieving average returns up to 2.5 times higher than Q-learning and having a standard deviation of state values as low as 0.58.
Keywords: reinforcement learning; deep learning; multi Q-learning
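A minimal tabular sketch of a Multi Q-learning style update, assuming the n estimators are updated one at a time toward a target bootstrapped from their average (a natural n-table generalization of double Q-learning); hyperparameters are illustrative:

```python
import numpy as np

def multi_q_update(q_tables, s, a, r, s_next, alpha=0.1, gamma=0.99, rng=np.random):
    """One Multi Q-learning step on a small grid-world style task (a sketch).

    One randomly chosen table is updated toward a target bootstrapped from the
    average of all tables, which damps the overestimation that can destabilize
    plain Q-learning.
    """
    i = rng.randint(len(q_tables))          # table to update this step
    q_avg = np.mean(q_tables, axis=0)       # consensus estimate of action values
    target = r + gamma * np.max(q_avg[s_next])
    q_tables[i][s, a] += alpha * (target - q_tables[i][s, a])

# Usage: four tables of shape (num_states, num_actions) for a 16-state grid-world.
q_tables = [np.zeros((16, 4)) for _ in range(4)]
multi_q_update(q_tables, s=0, a=1, r=0.0, s_next=4)
```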
A new accelerating algorithm for multi-agent reinforcement learning (Cited by 1)
5
Authors: 张汝波, 仲宇, 顾国昌. 《Journal of Harbin Institute of Technology(New Series)》 EI CAS, 2005, Issue 1, pp. 48-51 (4 pages)
In multi-agent systems, joint actions must be employed to achieve cooperation because the evaluation of the behavior of an agent often depends on the other agents' behaviors. However, joint-action reinforcement learning algorithms suffer from a slow convergence rate because of the enormous learning space produced by joint actions. In this article, a prediction-based reinforcement learning algorithm is presented for multi-agent cooperation tasks, which requires all agents to learn to predict the probabilities of the actions that other agents may execute. A multi-robot cooperation experiment is run to test the efficacy of the new algorithm, and the experimental results show that the new algorithm can achieve the cooperation policy much faster than the primitive reinforcement learning algorithm.
Keywords: algorithms; machine learning; artificial intelligence systems; mathematical simulation; robots
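A minimal sketch of the prediction idea: each agent keeps an empirical model of a teammate's action probabilities and bootstraps on the expected joint value. The count-based predictor and the `q_joint` indexing are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

class ActionPredictor:
    """Empirical model of another agent's action probabilities (a sketch)."""
    def __init__(self, n_states, n_actions):
        self.counts = np.ones((n_states, n_actions))   # Laplace-smoothed counts

    def observe(self, s, other_action):
        self.counts[s, other_action] += 1              # record the teammate's move

    def probs(self, s):
        return self.counts[s] / self.counts[s].sum()

def expected_value(q_joint, predictor, s):
    # q_joint[s, my_action, other_action]; marginalize over the teammate's
    # predicted action so the agent only searches its own action dimension.
    p_other = predictor.probs(s)
    return q_joint[s] @ p_other                        # value of each of my actions
```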
Applications and Challenges of Deep Reinforcement Learning in Multi-robot Path Planning (Cited by 1)
6
Authors: Tianyun Qiu, Yaxuan Cheng. 《Journal of Electronic Research and Application》, 2021, Issue 6, pp. 25-29 (5 pages)
With the rapid advancement of deep reinforcement learning (DRL) in multi-agent systems, a variety of practical application challenges and solutions in the direction of multi-agent deep reinforcement learning (MADRL) are surfacing. Path planning in a collision-free environment is essential for many robots to do tasks quickly and efficiently, and path planning for multiple robots using deep reinforcement learning is a new research area in the field of robotics and artificial intelligence. In this paper, we sort out the training methods for multi-robot path planning, as well as summarize the practical applications in the field of DRL-based multi-robot path planning based on the methods; finally, we suggest possible research directions for researchers.
Keywords: MADRL; deep reinforcement learning; multi-agent system; multi-robot; path planning
Multi-Agent Reinforcement Learning Algorithm Based on Action Prediction
7
Authors: 童亮, 陆际联. 《Journal of Beijing Institute of Technology》 EI CAS, 2006, Issue 2, pp. 133-137 (5 pages)
Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for the multi-robot cooperation task. A multi-robot cooperation experiment based on a multi-agent inverted pendulum is carried out to test the efficiency of the new algorithm, and the experimental results show that the new algorithm can achieve the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.
Keywords: multi-agent system; reinforcement learning; action prediction; robot
Multi-agent reinforcement learning with cooperation based on eligibility traces
8
Authors: 杨玉君, 程君实, 陈佳品. 《Journal of Harbin Institute of Technology(New Series)》 EI CAS, 2004, Issue 5, pp. 564-568 (5 pages)
Reinforcement learning has been widely applied in multi-agent systems in recent years. An agent in a multi-agent system cooperates with other agents to accomplish the given task, and one agent's behavior usually affects the others' behaviors. In traditional reinforcement learning, one agent takes only the others' locations into account, so it is difficult to consider the others' behaviors, which decreases the learning efficiency. This paper proposes multi-agent reinforcement learning with cooperation based on eligibility traces, i.e., one agent estimates the other agent's behavior from the other agent's eligibility traces. The simulation results prove the validity of the proposed learning method.
Keywords: artificial intelligence; machine learning; multi-agent reinforcement learning system; learning methods
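A minimal sketch of a tabular SARSA(λ) update with replacing eligibility traces, the building block the abstract refers to; only the single-agent trace update is shown, and the coupling in which one agent reads another agent's traces is omitted. Hyperparameters are illustrative:

```python
import numpy as np

def sarsa_lambda_step(Q, E, s, a, r, s_next, a_next,
                      alpha=0.1, gamma=0.95, lam=0.8):
    """One tabular SARSA(lambda) update with replacing eligibility traces (a sketch).

    Q and E are (num_states, num_actions) arrays; E marks how recently each
    state-action pair was visited so the TD error can credit the whole
    recent trajectory at once.
    """
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]   # TD error
    E[s, a] = 1.0                                     # replacing trace for (s, a)
    Q += alpha * delta * E                            # credit all recently visited pairs
    E *= gamma * lam                                  # decay the traces
```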
Cooperative Multi-Agent Reinforcement Learning with Constraint-Reduced DCOP
9
Authors: Yi Xie, Zhongyi Liu, Zhao Liu, Yijun Gu. 《Journal of Beijing Institute of Technology》 EI CAS, 2017, Issue 4, pp. 525-533 (9 pages)
Cooperative multi-agent reinforcement learning (MARL) is an important topic in the field of artificial intelligence, in which distributed constraint optimization (DCOP) algorithms have been widely used to coordinate the actions of multiple agents. However, dense communication among agents affects the practicability of DCOP algorithms. In this paper, we propose a novel DCOP algorithm that deals with the communication problem of previous DCOP algorithms by reducing constraints. The contributions of this paper are primarily threefold: (1) it is proved that removing constraints can effectively reduce the communication burden of DCOP algorithms; (2) a criterion is provided to identify insignificant constraints whose elimination does not have a great impact on the performance of the whole system; (3) a constraint-reduced DCOP algorithm is proposed by adopting a variant of the spectral clustering algorithm to detect and eliminate the insignificant constraints. Our algorithm reduces the communication burden of the benchmark DCOP algorithm while keeping its overall performance unaffected. The performance of the constraint-reduced DCOP algorithm is evaluated on four configurations of cooperative sensor networks. The effectiveness of communication reduction is also verified by comparisons between the constraint-reduced DCOP and the benchmark DCOP.
Keywords: reinforcement learning; cooperative multi-agent system; distributed constraint optimization (DCOP); constraint-reduced DCOP
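A minimal sketch of one possible reading of the constraint-reduction step: cluster the agents' constraint graph spectrally and drop constraints that cross cluster boundaries. The use of scikit-learn's SpectralClustering and the all-or-nothing elimination rule are assumptions for illustration, not the paper's variant:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def reduce_constraints(weights, n_clusters=2):
    """Drop weak inter-cluster constraints from a constraint graph (a sketch).

    `weights` is a symmetric numpy matrix of constraint strengths between agents;
    spectral clustering groups tightly coupled agents, and constraints crossing
    cluster boundaries are treated as insignificant and removed.
    """
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(weights)
    reduced = weights.copy()
    for i in range(len(weights)):
        for j in range(len(weights)):
            if labels[i] != labels[j]:
                reduced[i, j] = 0.0      # eliminate the cross-cluster constraint
    return reduced
```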
Research Progress of Multi-Agent Reinforcement Learning from the Perspectives of Competition and Cooperation
10
Authors: 田小禾, 李伟, 许铮, 刘天星, 戚骁亚, 甘中学. 《计算机应用与软件》 (PKU Core), 2024, Issue 4, pp. 1-15 (15 pages)
With substantial progress in deep learning and reinforcement learning research, multi-agent reinforcement learning has become a general approach to large-scale, complex sequential decision-making problems. To promote the development of this field, this paper collects and summarizes recent research results from the perspectives of competition and cooperation. It first introduces single-agent reinforcement learning; it then presents the basic theoretical frameworks of multi-agent reinforcement learning, namely Markov games and extensive-form games, and focuses on classic algorithms and their recent progress in competitive, cooperative, and mixed settings; finally, it discusses the core challenge facing multi-agent reinforcement learning, the non-stationarity of the environment, and summarizes and looks ahead to its solutions through an example.
Keywords: deep learning; reinforcement learning; multi-agent reinforcement learning; non-stationarity of the environment
AGVs Dispatching Using Multiple Cooperative Learning Agents
11
Authors: 李晓萌, Yang Yupu. 《High Technology Letters》 EI CAS, 2002, Issue 3, pp. 83-87 (5 pages)
AGV dispatching, one of the hot problems in FMS, has attracted widespread interest in recent years. It is hard to dynamically schedule AGVs with pre-designed rules because of the uncertainty and dynamic nature of the AGV dispatching process, so the AGV system in this paper is treated as a cooperative learning multi-agent system, in which each agent adopts a multilevel decision method with two decision levels: the option level and the action level. On the option level, an agent learns a policy to execute a subtask with the best response to the other AGVs' current options. On the action level, an agent learns an optimal policy of actions for achieving its planned option. The method is applied to an AGV dispatching simulation, and the performance of the AGV system based on this method is verified.
Keywords: manufacturing industry; AGV dispatching; multi-level cooperative learning agents; expert systems
DP-Q(λ): A Real-Time Multi-Agent Path Planning Algorithm for Large-Scale Web3D Scenes (Cited by 2)
12
Authors: 闫丰亭, 贾金原. 《系统仿真学报》 CAS CSCD (PKU Core), 2019, Issue 1, pp. 16-26 (11 pages)
Multi-agent visual path planning algorithms for large-scale scenes must achieve real-time, stable collision avoidance on Web3D. This paper proposes a dynamic-probability single-chain convergence backtracking DP-Q(λ) algorithm. It adopts a direction-heuristic constraint and a training method with high rewards or heavy penalties; for a single agent, a probability p (a random number in 0-1) adjusts the reward/penalty value to decide the next path-finding step, while sensing whether the next position is free, so that collisions are avoided during walking. The single-agent path planning scheme is then extended to a multi-agent path planning scheme and further implemented on Web3D. Experimental results show that the multi-agent real-time path planning implemented by this algorithm meets the efficiency and stability requirements of autonomous learning on Web3D.
Keywords: Web3D; large-scale unknown environment; multi-agent; reinforcement learning; dynamic reward p; path planning
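A minimal sketch of the probabilistic reward/penalty shaping described above, where a random number p in (0, 1) scales a high reward or a heavy penalty depending on whether the next cell is free; the magnitudes are illustrative, not the paper's values:

```python
import numpy as np

def dpq_reward(next_cell_free, reached_goal, p=None, rng=np.random):
    """Probability-scaled reward/penalty in the spirit of DP-Q(lambda) (a sketch)."""
    p = rng.random() if p is None else p   # random number in (0, 1)
    if not next_cell_free:
        return -10.0 * p                   # heavy, probabilistically scaled penalty
    if reached_goal:
        return 10.0 * p                    # high, probabilistically scaled reward
    return -0.1                            # small step cost to encourage short paths
```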
Research on Multi-Agent Modeling of Smart Home Systems (Cited by 1)
13
Author: 曲宗峰. 《家电科技》, 2022, Issue 5, pp. 16-21 (6 pages)
Taking the smart home system as the research object, this paper builds a model based on multi-agent theory and uses Value Decomposition Networks (VDN) as the model algorithm to optimize and analyze the Q function. On this basis, it proposes building rule bases and knowledge bases for each agent in the smart home and establishing, in a systematic and open manner, the BDI (Belief-Desire-Intention) set of each agent, and it performs quantitative analysis and optimization of smart home utility from philosophical logic to profit modeling.
Keywords: multi-agent system; smart home; scenario; reinforcement learning; Q-learning
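A minimal sketch of VDN-style value decomposition, in which the joint action value is the sum of per-agent utilities; the network sizes and forward interface are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VDN(nn.Module):
    """Value Decomposition Network mixing (a sketch; sizes are illustrative).

    Each agent has its own utility network; the joint action value is the sum
    of the per-agent utilities, so the team Q can be trained centrally while
    each agent still acts greedily on its own component.
    """
    def __init__(self, n_agents, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.agents = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_actions))
            for _ in range(n_agents))

    def forward(self, observations, actions):
        # observations: (n_agents, obs_dim); actions: per-agent chosen action ids.
        per_agent_q = [net(obs)[a] for net, obs, a in
                       zip(self.agents, observations, actions)]
        return torch.stack(per_agent_q).sum()    # Q_total = sum_i Q_i
```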
A Multi-Agent System Architecture Model Based on Q-learning (Cited by 2)
14
Authors: 许培, 薛伟. 《计算机与数字工程》, 2011, Issue 8, pp. 8-11 (4 pages)
Multi-agent systems have been a popular research area in recent years, and Q-learning is one of the best-known and most widely used reinforcement learning algorithms. Based on the single-agent Q-learning algorithm, this paper proposes a new cooperative learning algorithm and, building on it, a new multi-agent system architecture model. The main features of this architecture are a knowledge-sharing mechanism, a team-structure concept, and the introduction of a service-provider concept. Finally, simulation experiments demonstrate the advantages of the architecture.
关键词 agent系统 强化学习 Q学习 体系结构 知识共享
Pass-ball training based on genetic reinforcement learning
15
Authors: 褚海涛, 洪炳熔. 《Journal of Harbin Institute of Technology(New Series)》 EI CAS, 2001, Issue 3, pp. 279-282 (4 pages)
This paper introduces a computation model that combines a genetic algorithm with reinforcement learning for independent agent learning in continuous, distributed, open environments. The model takes full advantage of the reactivity and robustness of the reinforcement learning algorithm and of the property that genetic algorithms are suitable for problems with high dimension, large collectivity, and complex environments. The results verify that, with proper training, this method is feasible in a complex multi-agent environment.
Keywords: reinforcement; genetic; multi-agent; genetic reinforcement learning
Reinforcement learning with partitioning function system
16
Authors: 李伟, 叶庆泰, 朱昌明. 《Journal of Harbin Institute of Technology(New Series)》 EI CAS, 2004, Issue 4, pp. 377-381 (5 pages)
The size of the state space is the limiting factor in applying reinforcement learning algorithms to practical cases. A reinforcement learning system with a partitioning function (RLWPF) is established, in which the state space is partitioned into several regions. The performance principle of RLWPF is based on a semi-Markov decision process and has general significance; it can be applied to any reinforcement learning problem with a large state space. In RLWPF, the partitioning module dispatches agents into different regions in order to decrease the state space of each agent. This article proves the convergence of the SARSA algorithm for a semi-Markov decision process, ensuring the convergence of RLWPF by analyzing the equivalence of the two value functions in the two semi-Markov decision processes before and after partitioning. It also shows that the optimal policy learned by RLWPF is consistent with prior domain knowledge. An elevator group system is devised to decrease the average waiting time of passengers. Four agents control four elevator cars respectively. Based on RLWPF, a partitioning module is developed by defining a uniform round-trip time as the partitioning criterion, making the waiting time of most passengers more or less identical; elevator cars then only answer hall calls in their own region. Compared with ordinary elevator systems and reinforcement learning systems without a partitioning module, the performance results show the advantage of RLWPF.
Keywords: agent system; partitioning function; reinforcement learning; semi-Markov decision; partitioning module
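A minimal sketch of the partitioning idea applied to the elevator example: floors are split into one region per car, and each agent only answers hall calls inside its own region. The equal-size split stands in for the paper's uniform round-trip-time criterion and is an assumption:

```python
def build_regions(num_floors, num_cars):
    """Partition floors into contiguous regions, one per elevator car (a sketch).

    Restricting each car to a region shrinks every agent's state space, which is
    the point of the partitioning module described above.
    """
    size = -(-num_floors // num_cars)          # ceiling division
    return {car: range(car * size, min((car + 1) * size, num_floors))
            for car in range(num_cars)}

def dispatch(hall_call_floor, regions):
    """Route a hall call to the agent whose region contains the floor."""
    for car, region in regions.items():
        if hall_call_floor in region:
            return car

# Usage: 16 floors, 4 cars -> regions of 4 floors each; floor 9 goes to car 2.
regions = build_regions(16, 4)
assert dispatch(9, regions) == 2
```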
Research Progress and Development Trends of Learning Algorithms for Combat Agents
17
Authors: 王步云, 刘聚. 《兵工自动化》, 2023, Issue 9, pp. 74-78, 96 (6 pages)
Addressing the adaptability of combat agents, this paper reviews the achievements of genetic algorithms, reinforcement learning, neural networks, and other methods in realizing combat agent adaptability and summarizes the characteristics of each method; it then introduces the application of deep reinforcement learning to combat agent adaptability and discusses the development trends and research priorities of its application in this area. The study can serve as a reference for subsequent related research.
Keywords: combat agent; adaptability; reinforcement learning; deep learning; neural networks
Research on a Path Planning Method for Underground Coal Mine Roadheader Robots (Cited by 1)
18
Authors: 张旭辉, 郑西利, 杨文娟, 李语阳, 麻兵, 董征, 陈鑫. 《煤田地质与勘探》 EI CAS CSCD (PKU Core), 2024, Issue 4, pp. 152-163 (12 pages)
To address the difficulty and low efficiency of repositioning roadheader robots in non-full-section coal mine roadways, this paper analyzes the characteristics of the unstructured underground environment and the motion characteristics of roadheader robots, and proposes a body path planning method for roadheader robots based on deep reinforcement learning. A depth camera reconstructs the roadway environment in real time, a collision detection model between the roadheader robot and the roadway environment is built in a virtual environment, and hierarchical bounding boxes are used for collision detection in the virtual environment, yielding an obstacle-avoidance strategy under constrained roadway boundaries. Considering the robot's size and the single goal of the path planning process, hindsight experience replay is introduced on top of the conventional SAC algorithm to form the HER-SAC algorithm, which expands the goal subset from trajectories obtained with the environment's initial goal, increasing the number of training samples and speeding up training. On this basis, an agent is built on a reward-penalty mechanism and its state and action spaces are defined according to the robot's motion characteristics; the agent is trained with three algorithms in the same scene, and the algorithms are compared on four performance indicators: average reward, maximum reward, number of steps to reach the maximum reward, and robustness. To further verify the reliability of the proposed method, path planning experiments are conducted in a virtual-real combined manner in two scenes created by adjusting the goal position, and the paths produced by the conventional SAC algorithm and HER-SAC are compared. The results show that, compared with PPO and SAC, HER-SAC converges faster and achieves the best overall performance; in both scenes, HER-SAC plans smoother and shorter paths than conventional SAC, with an error between the path endpoint and the goal position within 3.53 cm, effectively completing the repositioning path planning task. The method lays a theoretical foundation for autonomous repositioning control of coal mine roadheader robots and provides a new approach to the automation of coal mine tunneling equipment.
Keywords: roadheader robot; path planning; deep reinforcement learning; agent; virtual-real integration; improved SAC algorithm; coal mine
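A minimal sketch of hindsight experience replay relabeling with the common "future" strategy, which is one way the goal subset described above can be expanded from trajectories; the transition layout, tolerance, and k are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def her_relabel(episode, k=4, rng=np.random):
    """Hindsight experience replay relabeling, 'future' strategy (a sketch).

    Each transition is assumed to be (obs, action, achieved_goal, next_achieved_goal).
    For every step, k goals achieved later in the same trajectory replace the
    original goal and the reward is recomputed, turning failed episodes into
    useful (successful) training samples for the off-policy learner (e.g. SAC).
    """
    relabeled = []
    for t, (obs, act, ag, next_ag) in enumerate(episode):
        future_steps = rng.randint(t, len(episode), size=k)
        for f in future_steps:
            new_goal = episode[f][3]                      # a goal actually reached later
            reward = 0.0 if np.allclose(next_ag, new_goal, atol=1e-2) else -1.0
            relabeled.append((obs, act, new_goal, reward))
    return relabeled
```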
Research Progress in Multi-Agent Deep Reinforcement Learning
19
Authors: 丁世飞, 杜威, 张健, 丽丽, 丁玲. 《计算机学报》 EI CAS CSCD (PKU Core), 2024, Issue 7, pp. 1547-1567 (21 pages)
Deep reinforcement learning (DRL) has attracted widespread attention in recent years and has achieved remarkable success in various fields. Since real-world environments usually include multiple agents interacting with the environment, multi-agent deep reinforcement learning (MADRL) has developed vigorously and achieved excellent performance on various complex sequential decision-making tasks. This paper surveys the progress of multi-agent deep reinforcement learning in three parts. First, we review several common problem formulations of multi-agent reinforcement learning and their corresponding cooperative, competitive, and mixed tasks. Second, we propose a new multi-dimensional taxonomy of current MADRL methods and further introduce the methods in each category, with an emphasis on value function decomposition methods, communication-based MADRL methods, and graph neural network-based MADRL methods. Finally, we examine the main applications of MADRL methods in real-world scenarios. We hope this paper can help new researchers entering this fast-developing field as well as existing domain experts who want a comprehensive overview and new directions based on the latest progress.
Keywords: multi-agent deep reinforcement learning; value-based; policy-based; communication learning; graph neural networks
A Resource Allocation Algorithm for Urban Rail Train-to-Train Communication Using A2C-ac
20
Authors: 王瑞峰, 张明, 黄子恒, 何涛. 《电子与信息学报》 EI CAS CSCD (PKU Core), 2024, Issue 4, pp. 1306-1313 (8 pages)
In urban rail transit train control systems, train-to-train (T2T) communication, as a new generation of train communication, uses direct communication between trains to reduce latency and improve operational efficiency. In the scenario where T2T and train-to-ground (T2G) communication coexist, and targeting the interference caused by reusing T2G links while guaranteeing users' communication quality, this paper proposes an improved advantage actor-critic (A2C-ac) resource allocation algorithm based on multi-agent deep reinforcement learning (MADRL). With system throughput as the optimization objective and the T2T transmitters as agents, the policy network adopts a hierarchical output structure to guide each agent in selecting the spectrum resource to reuse and the power level; the agents then take the corresponding actions and interact with the T2T communication environment to obtain the throughput of T2G and T2T users in that time slot, the value network evaluates the two separately, and a weight factor β is used to customize a weighted temporal-difference (TD) error for each agent, flexibly optimizing the neural network parameters. Finally, the agents jointly select the best spectrum resources and power levels according to the trained model. Simulation results show that, compared with the A2C and deep Q-network (DQN) algorithms, this algorithm clearly improves convergence speed, T2T access success rate, and throughput.
Keywords: urban rail transit; resource allocation; T2T communication; multi-agent deep reinforcement learning; A2C-ac algorithm
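A minimal sketch of the β-weighted TD error described above, assuming the critic evaluates the T2G and T2T throughput components with separate value estimates that are blended by the weight factor β; the function signature and constants are illustrative:

```python
import torch

def weighted_td_error(r_t2g, r_t2t, v_t2g, v_t2g_next, v_t2t, v_t2t_next,
                      beta=0.5, gamma=0.99):
    """Per-agent weighted TD error in the spirit of A2C-ac (a sketch)."""
    delta_t2g = r_t2g + gamma * v_t2g_next - v_t2g     # TD error of the T2G component
    delta_t2t = r_t2t + gamma * v_t2t_next - v_t2t     # TD error of the T2T component
    return beta * delta_t2g + (1.0 - beta) * delta_t2t  # blended learning signal

# Usage: the blended error would scale both the policy-gradient and critic losses.
delta = weighted_td_error(torch.tensor(1.2), torch.tensor(0.8),
                          torch.tensor(0.5), torch.tensor(0.6),
                          torch.tensor(0.4), torch.tensor(0.7))
```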