Abstract: To address the spectrum resource allocation problem in the 5G New Radio-Vehicle to Everything (NR-V2X) scenario, where Vehicle to Infrastructure (V2I) and Vehicle to Vehicle (V2V) links share the uplink, a Federated Learning-Multi-Agent Deep Q Network (FL-MADQN) algorithm is proposed. In this distributed algorithm, each vehicle user acts as an agent that, based on locally acquired channel state information, trains a local network model with the DQN algorithm, taking the network channel capacity as the optimization objective. Federated learning is used to accelerate and stabilize the convergence of each agent's model training: the agents upload their local models to the base station, which aggregates them into a global model and then distributes the global model back to the agents to update their local models. Simulation results show that, compared with the conventional distributed multi-agent DQN algorithm, the proposed scheme converges faster and still guarantees the communication efficiency of the V2V links and the channel capacity of the V2I links as the number of vehicle users grows.
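A minimal sketch of the federated aggregation step described above, assuming each agent's local Q-network exposes its weights as a list of NumPy arrays; the names `fedavg`, `agents`, and the training-round loop are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fedavg(local_weights):
    """Average per-layer weights across agents (FedAvg-style aggregation).

    local_weights: list with one entry per agent; each entry is a list of
    np.ndarray layer weights with identical shapes across agents.
    """
    num_agents = len(local_weights)
    # Group corresponding layers across agents, sum them, divide by count.
    return [sum(layers) / num_agents for layers in zip(*local_weights)]

# Illustrative round: each vehicle agent trains locally on its channel
# state information, the base station aggregates, and the global model
# is pushed back to every agent.
# for rnd in range(num_rounds):
#     local_weights = [agent.train_local_dqn() for agent in agents]
#     global_weights = fedavg(local_weights)
#     for agent in agents:
#         agent.set_weights(global_weights)
```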
Abstract: Autonomous navigation of mobile robots is a challenging task that requires them to travel from their initial position to their destination without collision in an environment. Reinforcement learning methods enable a mobile robot to learn a state-action function suited to its environment: through trial-and-error interaction with its surroundings, the robot finds an ideal behavior on its own. The Deep Q Network (DQN) algorithm is used in TurtleBot 3 (TB3) to reach the goal while successfully avoiding obstacles, but it requires a large number of training iterations. This research focuses on a mobile robot's best path prediction utilizing the DQN and Artificial Potential Field (APF) algorithms. First, a TB3 Waffle Pi DQN is built and trained to reach the goal. Then the APF shortest-path algorithm is incorporated into the DQN algorithm. The proposed planning approach is compared with the standard DQN method in a virtual environment based on the Robot Operating System (ROS). The simulation results show that the DQN and APF combination is effective, yielding a better optimal path in less time than the conventional DQN algorithm. Compared with DQN, the proposed DQN+APF improves the number of successful targets by 88% and the average time by 0.331 s; in terms of average rewards, the positive goal is attained by 85% and the negative goal by -90%.
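A minimal sketch of the attractive/repulsive force computation at the heart of classic APF planning, as one way an APF term could bias the robot toward the goal and away from obstacles; the gains `k_att`, `k_rep`, the influence distance `d0`, and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=1.0):
    """Classic artificial potential field: attraction toward the goal
    plus repulsion from obstacles within influence distance d0."""
    force = k_att * (goal - pos)  # attractive term, pulls toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:  # only nearby obstacles exert repulsion
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force

# Example: a 2-D robot at the origin, goal ahead, one obstacle nearby.
# f = apf_force(np.zeros(2), np.array([5.0, 0.0]), [np.array([0.5, 0.2])])
```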
Funding: Supported by the Universiti Tunku Abdul Rahman (UTAR) Malaysia under UTARRF (IPSR/RMC/UTARRF/2021-C1/T05).
Abstract: The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network to meet these demands by deploying low-cost, intelligent, distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN with traditional approaches remains challenging due to the complex structure. In this paper, we introduce a convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build a 3-layer CNN to capture the environment features as the input state space. We then use the DQN to turn the RRHs on and off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraint and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the Nature DQN and the traditional approach.
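A minimal sketch of a 3-layer CNN Q-network of the kind the abstract describes, mapping a CRAN state grid to one Q-value per RRH on/off action; the channel counts, kernel sizes, and input shape are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNDQN(nn.Module):
    """3-layer CNN that encodes the CRAN environment state, followed by
    a linear head producing one Q-value per RRH on/off action."""
    def __init__(self, n_actions, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # LazyLinear infers the flattened feature size on the first call.
        self.head = nn.LazyLinear(n_actions)

    def forward(self, x):
        z = self.features(x)
        return self.head(z.flatten(start_dim=1))

# Example: a batch of one 10x10 state map, 8 candidate RRH actions.
# q_values = CNNDQN(n_actions=8)(torch.zeros(1, 1, 10, 10))
```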
Abstract: The resource scheduling strategy of a container cloud system plays an important role in resource utilization and cluster performance. Existing container cluster scheduling does not fully consider resource occupancy within and across nodes, so container resource bottlenecks arise easily, leading to low resource utilization and poor service reliability. To balance the workload of the container cluster and reduce container resource bottlenecks, a DQN (Deep Q-learning Network)-based container cluster scheduling optimization algorithm, CS-DQN (Container Scheduling Optimization Strategy Based on DQN), is proposed. First, a load-balancing-oriented optimization model of container cluster resource utilization is formulated. Then, using deep reinforcement learning, a DQN-based container cluster scheduling algorithm is designed, with the relevant state space, action space, and reward function defined. By introducing an improved DQN algorithm, a dynamic container scheduling strategy that meets the optimization objectives is generated through self-learning. Experimental results show that the scheduling strategy enlarges the scale of containers that can be deployed during scheduling, achieves better load balancing under different workloads, improves resource utilization, and better guarantees service reliability.
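A minimal sketch of how the state space, action space, and reward function for such a scheduler might be defined, assuming the state is per-node CPU/memory utilization and the reward penalizes load imbalance; all shapes, names, and weightings here are assumptions, not the paper's definitions.

```python
import numpy as np

def make_state(cpu_util, mem_util):
    """State: concatenated per-node CPU and memory utilization in [0, 1]."""
    return np.concatenate([cpu_util, mem_util]).astype(np.float32)

def reward(cpu_util, mem_util):
    """Reward load balance: penalize utilization spread across nodes,
    so placements that even out the cluster score higher."""
    return -(np.std(cpu_util) + np.std(mem_util))

# Action space: one discrete action per candidate node for the pending
# container; the DQN picks argmax over Q(state, action).
# n_actions = num_nodes
```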
Funding: This work was supported by the Fundamental Research Funds for the Central Universities (No. 2019XD-A07), the Director Fund of Beijing Key Laboratory of Space-ground Interconnection and Convergence, and the National Key Laboratory of Science and Technology on Vacuum Electronics.
Abstract: The main aim of future mobile networks is to provide secure, reliable, intelligent, and seamless connectivity. It also enables mobile network operators to ensure their customers a better quality of service (QoS). Nowadays, Unmanned Aerial Vehicles (UAVs) are a significant part of the mobile network due to their continuously growing use in various applications. For better coverage and cost-effective, seamless service connectivity and provisioning, UAVs have emerged as the best choice for telco operators. UAVs can be used as flying base stations, edge servers, and relay nodes in mobile networks. On the other side, Multi-access Edge Computing (MEC) technology has also emerged in the 5G network to provide a better quality of experience (QoE) to users with different QoS requirements. However, UAVs in a mobile network for coverage enhancement and better QoS face several challenges such as trajectory design, path planning, optimization, QoS assurance, and mobility management. Efficient and proactive path planning and optimization in a highly dynamic environment containing buildings and obstacles are challenging, so an automated, Artificial Intelligence (AI)-enabled, QoS-aware solution is needed for trajectory planning and optimization. Therefore, this work introduces a well-designed AI- and MEC-enabled architecture for a UAV-assisted future network. It has an efficient Deep Reinforcement Learning (DRL) algorithm for real-time and proactive trajectory planning and optimization, and it fulfills QoS-aware service provisioning. A greedy-policy approach is used to maximize the long-term reward for serving more users with QoS. Simulation results reveal the superiority of the proposed DRL mechanism for energy-efficient and QoS-aware trajectory planning over the existing models.
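A minimal sketch of the greedy-policy action selection the abstract mentions, shown here as the standard epsilon-greedy rule commonly paired with DQN-style DRL training; the epsilon schedule and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon explore a random trajectory action;
    otherwise exploit the action with the highest Q-value."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))

# Typical annealing: start exploratory, end mostly greedy so the learned
# policy maximizes the long-term reward for serving users with QoS.
# epsilon = max(0.05, 1.0 * 0.995 ** episode)
```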