Journal Articles
3,761 articles found
Learning-based user association and dynamic resource allocation in multi-connectivity enabled unmanned aerial vehicle networks
1
Authors: Zhipeng Cheng, Minghui Liwang, Ning Chen, Lianfen Huang, Nadra Guizani, Xiaojiang Du — Digital Communications and Networks, SCIE, CSCD, 2024, Issue 1, pp. 53-62 (10 pages)
Unmanned Aerial Vehicles (UAVs) serving as aerial base stations to provide communication services for ground users is a flexible and cost-effective paradigm in B5G. Dynamic resource allocation and multi-connectivity can be adopted to further harness the potential of UAVs in improving communication capacity, but in such settings the interference among users becomes a pivotal obstacle requiring effective solutions. To this end, we investigate the Joint UAV-User Association, Channel Allocation, and transmission Power Control (J-UACAPC) problem in a multi-connectivity-enabled UAV network with constrained backhaul links, where each UAV can determine the reusable channels and transmission power to serve the selected ground users. The goal is to mitigate co-channel interference while maximizing long-term system utility. The problem is modeled as a cooperative stochastic game with a hybrid discrete-continuous action space, and a Multi-Agent Hybrid Deep Reinforcement Learning (MAHDRL) algorithm is proposed to solve it. Extensive simulation results demonstrate the effectiveness of the proposed algorithm and show that it achieves higher system utility than the baseline methods.
Keywords: UAV-user association; multi-connectivity; resource allocation; power control; multi-agent deep reinforcement learning
Download PDF
Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
2
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie — China Communications, SCIE, CSCD, 2024, Issue 6, pp. 53-68 (16 pages)
In this paper, we propose a two-way Deep Reinforcement Learning (DRL)-based resource allocation algorithm, which solves the resource allocation problem in a cognitive downlink network operating in underlay mode. Secondary Users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio; deep reinforcement learning; network slicing; power-domain non-orthogonal multiple access; resource allocation
Download PDF
Deep-Ensemble Learning Method for Solar Resource Assessment of Complex Terrain Landscapes
3
Authors: Lifeng Li, Zaimin Yang, Xiongping Yang, Jiaming Li, Qianyufan Zhou, Ping Yang — Energy Engineering, EI, 2024, Issue 5, pp. 1329-1346 (18 pages)
As the global demand for renewable energy grows, solar energy is gaining attention as a clean, sustainable energy source. Accurate assessment of solar energy resources is crucial for the siting and design of photovoltaic power plants. This study proposes an integrated deep learning-based photovoltaic resource assessment method, fusing ensemble learning and deep learning for photovoltaic resource assessment for the first time. The proposed method combines random forest, gated recurrent unit, and long short-term memory models to effectively improve the accuracy and reliability of photovoltaic resource assessment, and it retains strong adaptability and high accuracy even for complex terrain and landscapes. The experimental results show that the proposed method outperforms the comparison algorithms on all evaluation indexes, indicating higher accuracy and reliability in photovoltaic resource assessment with improved generalization performance over a traditional single algorithm.
Keywords: photovoltaic resource assessment; deep learning; ensemble learning; random forest; gated recurrent unit; long short-term memory
Download PDF
Task Offloading and Resource Allocation for Edge-Enabled Mobile Learning (Cited by 1)
4
Authors: Ziyan Yang, Shaochun Zhong — China Communications, SCIE, CSCD, 2023, Issue 4, pp. 326-339 (14 pages)
Mobile learning has evolved into a new format of education based on communication and computer technology that is favored by an increasing number of learning users, thanks to the development of wireless communication networks, mobile edge computing, artificial intelligence, and mobile devices. However, due to the constrained data processing capacity of mobile devices, efficient and effective interactive mobile learning is a challenge. Therefore, for mobile learning, we propose a "Cloud, Edge and End" fusion system architecture that uses task offloading and resource allocation in edge-enabled mobile learning to reduce the time and energy consumption of user equipment. We then present solutions that use the minimum cost maximum flow (MCMF) algorithm to handle the offloading problem and the deep Q network (DQN) algorithm to handle the resource allocation problem, respectively. Finally, the performance evaluation shows that the proposed offloading and resource allocation scheme can improve system performance, save energy, and satisfy the needs of learning users.
Keywords: mobile learning; mobile edge computing (MEC); system construction; offloading; resource allocation
Download PDF
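The MCMF-based offloading mentioned in the abstract above can be illustrated with a textbook successive-shortest-path min-cost max-flow solver. This is a hedged sketch on a made-up toy graph (two tasks, two edge servers, unit capacities, illustrative offloading costs), not the paper's actual network model:

```python
class MCMF:
    """Min-cost max-flow via Bellman-Ford successive shortest paths."""

    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # each edge: [to, cap, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        self.adj[u].append([v, cap, cost, len(self.adj[v])])
        self.adj[v].append([u, 0, -cost, len(self.adj[u]) - 1])  # residual edge

    def min_cost_max_flow(self, s, t):
        flow = cost = 0
        while True:
            # Bellman-Ford by cost (handles the negative residual edges)
            dist = [float("inf")] * self.n
            dist[s] = 0
            prev = [None] * self.n  # (node, edge index) on the shortest path
            for _ in range(self.n - 1):
                updated = False
                for u in range(self.n):
                    if dist[u] == float("inf"):
                        continue
                    for i, (v, cap, c, _) in enumerate(self.adj[u]):
                        if cap > 0 and dist[u] + c < dist[v]:
                            dist[v] = dist[u] + c
                            prev[v] = (u, i)
                            updated = True
                if not updated:
                    break
            if dist[t] == float("inf"):
                return flow, cost  # no augmenting path left
            # bottleneck capacity along the path, then push flow
            f, v = float("inf"), t
            while v != s:
                u, i = prev[v]
                f = min(f, self.adj[u][i][1])
                v = u
            v = t
            while v != s:
                u, i = prev[v]
                e = self.adj[u][i]
                e[1] -= f
                self.adj[e[0]][e[3]][1] += f
                v = u
            flow += f
            cost += f * dist[t]


# Toy instance: source 0 -> tasks {1, 2} -> servers {3, 4} -> sink 5.
g = MCMF(6)
g.add_edge(0, 1, 1, 0)
g.add_edge(0, 2, 1, 0)
g.add_edge(1, 3, 1, 2)  # cost of running task 1 on server 3, etc.
g.add_edge(1, 4, 1, 5)
g.add_edge(2, 3, 1, 4)
g.add_edge(2, 4, 1, 1)
g.add_edge(3, 5, 1, 0)
g.add_edge(4, 5, 1, 0)
```

Running `g.min_cost_max_flow(0, 5)` assigns task 1 to server 3 and task 2 to server 4, offloading both tasks at total cost 3.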
An Efficient Federated Learning Framework Deployed in Resource-Constrained IoV: User Selection and Learning Time Optimization Schemes
5
Authors: Qiang Wang, Shaoyi Xu, Rongtao Xu, Dongji Li — China Communications, SCIE, CSCD, 2023, Issue 12, pp. 111-130 (20 pages)
In this article, an efficient federated learning (FL) framework for the Internet of Vehicles (IoV) is studied. In the considered model, vehicle users implement an FL algorithm by training their local FL models and sending them to a base station (BS) that generates a global FL model through model aggregation. Since each user owns data samples of diverse sizes and different quality, it is necessary for the BS to select the proper participating users to acquire a better global model. Considering the high computational overhead of existing gradient-based selection methods, a lightweight user selection scheme based on loss decay is proposed. Due to the limited wireless bandwidth, the BS needs to select a suitable subset of users to implement the FL algorithm. Moreover, the computing resources that vehicle users can devote to FL training are usually limited in the IoV when multiple other tasks must be executed, and local model training and model parameter transmission have significant effects on FL latency. To address this issue, a joint communication and computing optimization problem is formulated whose objective is to minimize the FL delay in the resource-constrained system. To solve this complex nonconvex problem, an algorithm based on the concave-convex procedure (CCCP) is proposed, which achieves superior performance in small-scale, delay-insensitive FL systems. Because the convergence rate of the CCCP method is too slow in a large-scale FL system, making it unsuitable for delay-sensitive applications, a block coordinate descent algorithm based on the one-step projected gradient method is further proposed to decrease the complexity of the solution at the cost of slight performance degradation. Simulations are conducted, and numerical results show the good performance of the proposed methods.
Keywords: block coordinate descent; concave-convex procedure; federated learning; learning time; resource allocation
Download PDF
Research on Dynamic Mathematical Resource Screening Methods Based on Machine Learning
6
Authors: Han Zhou — Journal of Applied Mathematics and Physics, 2023, Issue 11, pp. 3610-3624 (15 pages)
Current digital educational resources are numerous and varied. To address the problems of existing dynamic resource selection methods, a dynamic resource selection method based on machine learning is proposed. First, according to the knowledge structure and concepts of mathematical resources, combined with the basic components of dynamic mathematical resources, a knowledge structure graph of mathematical resources is constructed. Then, according to the characteristics of mathematical resources, the interaction between users and resources is simulated, the graph of the main body of the resources is identified, and a candidate collection of mathematical knowledge is selected. Finally, according to the degree of matching between mathematical literature and the candidate collection, machine learning is utilized to screen the mathematical resources.
Keywords: machine learning; dynamic resource filtering; knowledge structure graph; resource interaction
Download PDF
Low-Cost Federated Broad Learning for Privacy-Preserved Knowledge Sharing in the RIS-Aided Internet of Vehicles (Cited by 1)
7
Authors: Xiaoming Yuan, Jiahui Chen, Ning Zhang, Qiang (John) Ye, Changle Li, Chunsheng Zhu, Xuemin (Sherman) Shen — Engineering, SCIE, EI, CAS, CSCD, 2024, Issue 2, pp. 178-189 (12 pages)
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. To protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. To improve resource scheduling efficiency in FBL, a double Davidon-Fletcher-Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the resource scheduling results, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
Keywords: knowledge sharing; Internet of Vehicles; federated learning; broad learning; reconfigurable intelligent surfaces; resource allocation
Download PDF
A New Solution to Intrusion Detection Systems Based on Improved Federated-Learning Chain
8
Authors: Chunhui Li, Hua Jiang — Computers, Materials & Continua, SCIE, EI, 2024, Issue 6, pp. 4491-4512 (22 pages)
In the context of enterprise systems, intrusion detection (ID) emerges as a critical element driving the digital transformation of enterprises. With systems spanning various sectors of geographically dispersed enterprises, the necessity for seamless information exchange has surged significantly. Existing cross-domain solutions are challenged by issues such as insufficient security, high communication overhead, and a lack of effective update mechanisms, rendering them less feasible for prolonged application on resource-limited devices. This study proposes a new cross-domain collaboration scheme based on federated chains to streamline the server-side workload. Within this framework, individual nodes solely engage in training on local data and subsequently amalgamate the final model using a federated learning algorithm, upholding enterprise systems with efficiency and security. To curtail the resource utilization of blockchains and deter malicious nodes, a node administration module predicated on the workload paradigm is introduced, enabling the release of surplus resources in response to variations in a node's contribution metric. Upon encountering an intrusion, the system triggers an alert and logs the characteristics of the breach, facilitating a comprehensive global update across all nodes for collective defense. Experimental results across multiple scenarios have verified the security and effectiveness of the proposed solution, with no loss of recognition accuracy.
Keywords: cross-domain collaboration; blockchain; federated learning; contribution value; node management; release of slack resources
Download PDF
Stochastic Gradient Compression for Federated Learning over Wireless Network
9
Authors: Lin Xiaohan, Liu Yuan, Chen Fangjiong, Huang Yang, Ge Xiaohu — China Communications, SCIE, CSCD, 2024, Issue 4, pp. 230-247 (18 pages)
As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
Keywords: federated learning; gradient compression; quantization; resource allocation; stochastic gradient descent (SGD)
Download PDF
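The sparsify-then-quantize compression described above can be sketched in a few lines. The top-k rule and uniform quantizer below are generic illustrations with assumed parameters (`k`, `num_bits`), not the exact operators or factors from the paper:

```python
def sparsify_topk(grad, k):
    """Keep the k largest-magnitude entries of the gradient; zero the rest."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    out = [0.0] * len(grad)
    for i in idx:
        out[i] = grad[i]
    return out

def quantize_uniform(grad, num_bits):
    """Uniformly quantize each entry to 2**num_bits - 1 steps over [min, max]."""
    lo, hi = min(grad), max(grad)
    if hi == lo:
        return list(grad)
    levels = (1 << num_bits) - 1
    step = (hi - lo) / levels
    return [lo + round((g - lo) / step) * step for g in grad]

def compress(grad, k, num_bits):
    """Sparsify first, then quantize the surviving entries."""
    return quantize_uniform(sparsify_topk(grad, k), num_bits)
```

A device would apply `compress` to its local gradient before uploading, trading accuracy (controlled by `k` and `num_bits`) against uplink payload size.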
Multi-Agent Deep Deterministic Policy Gradient-Based Joint Task Offloading and Resource Allocation
10
Authors: Xuan Zhang, Xiaohui Hu — Journal of Computer and Communications, 2024, Issue 6, pp. 152-168 (17 pages)
With the advancement of technology and the continuous innovation of applications, low-latency applications such as drones, online games, and virtual reality are gradually becoming popular demands in modern society. However, these applications pose a great challenge to the traditional centralized mobile cloud computing paradigm, which clearly struggles to meet such demands. To address the shortcomings of cloud computing, mobile edge computing has emerged; it provides users with computing and storage resources by offloading computing tasks to servers at the edge of the network. However, most existing work considers only single-objective optimization of latency or energy consumption, not balanced optimization of both. To reduce task latency and device energy consumption, the joint optimization of computation offloading and resource allocation in multi-cell, multi-user, multi-server MEC environments is investigated. In this paper, a dynamic computation offloading algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is proposed to obtain the optimal policy. The experimental results show that the proposed algorithm reduces the delay by 5 ms compared to PPO, 1.5 ms compared to DDPG, and 10.7 ms compared to DQN, and reduces the energy consumption by 300 compared to PPO, 760 compared to DDPG, and 380 compared to DQN, demonstrating the algorithm's strong performance.
Keywords: edge computing; task offloading; deep reinforcement learning; resource allocation; MADDPG
Download PDF
The Effectiveness of Self-regulated Learning Strategies on Chinese College Students' English Learning
11
Authors: Zhang Xiaoyan, Li Anling — Overseas English, 2011, Issue 10X, pp. 127-128 (2 pages)
The purpose of this paper is to argue for the effectiveness of self-regulated learning in English education in Chinese college classroom instruction. A study is presented to show whether the introduction of self-regulated learning can help improve Chinese college students' English learning and help them perform better in the national English test CET-4 (College English Test Level 4).
Keywords: self-regulated learning; goal-setting; self-instructional strategies; motivation; self-efficacy; experimental group and control group
Download PDF
A Machine-Learning Based Time Constrained Resource Allocation Scheme for Vehicular Fog Computing (Cited by 3)
12
Authors: Xiaosha Chen, Supeng Leng, Ke Zhang, Kai Xiong — China Communications, SCIE, CSCD, 2019, Issue 11, pp. 29-41 (13 pages)
By integrating advanced communication and data processing technologies into smart vehicles and roadside infrastructure, the Intelligent Transportation System (ITS) has evolved as a promising paradigm for improving the safety and efficiency of the transportation system. However, the strict delay requirements of safety-related applications remain a great challenge for the ITS, especially in dense traffic environments. In this paper, we introduce a metric called the Perception-Reaction Time (PRT), which reflects the time consumption of safety-related applications and is closely related to road efficiency and security. Integrating information-centric networking technology and the fog virtualization approach, we propose a novel fog resource scheduling mechanism to minimize the PRT. Furthermore, we adopt a deep reinforcement learning approach to design an online optimal resource allocation scheme. Numerical results demonstrate that our proposed schemes are able to reduce the PRT by about 70% compared with the traditional approach.
Keywords: deep reinforcement learning; information-centric networking; intelligent transport system; perception-reaction time; resource allocation; vehicular fog
Download PDF
Machine Learning Based Resource Allocation of Cloud Computing in Auction (Cited by 5)
13
Authors: Jixian Zhang, Ning Xie, Xuejie Zhang, Kun Yue, Weidong Li, Deepesh Kumar — Computers, Materials & Continua, SCIE, EI, 2018, Issue 7, pp. 123-135 (13 pages)
Resource allocation in auctions is a challenging problem for cloud computing: the allocation problem is NP-hard and cannot be solved in polynomial time. Existing studies mainly use approximate algorithms such as PTAS or heuristic algorithms to determine a feasible solution, but these algorithms suffer from low computational efficiency or low allocation accuracy. In this paper, we use machine learning classification to model and analyze the multi-dimensional cloud resource allocation problem and propose two resource allocation prediction algorithms based on linear and logistic regression. By learning from a small-scale training set, the prediction model can guarantee that the social welfare, allocation accuracy, and resource utilization of the feasible solution are very close to those of the optimal allocation. The experimental results show that the proposed scheme performs well on resource allocation in cloud computing.
Keywords: cloud computing; resource allocation; machine learning; linear regression; logistic regression
Download PDF
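The idea of learning allocation decisions from solved instances can be sketched with a tiny logistic regression trained by stochastic gradient descent. The features (bid price per unit, requested fraction of capacity) and the toy data below are assumptions for illustration, not the paper's feature design:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=1000):
    """Plain SGD on the logistic loss; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 = predict the bid is allocated resources, 0 = not allocated."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# Toy training set: [bid price per unit, requested capacity fraction] -> allocated?
X = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8], [0.7, 0.4], [0.1, 0.7]]
y = [1, 1, 0, 0, 1, 0]
```

In the paper's setting such a classifier would be trained on optimally solved auction instances and then used to predict winners on new instances far faster than re-solving the NP-hard allocation.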
Deep Reinforcement Learning Based Joint Partial Computation Offloading and Resource Allocation in Mobility-Aware MEC System (Cited by 2)
14
Authors: Luyao Wang, Guanglin Zhang — China Communications, SCIE, CSCD, 2022, Issue 8, pp. 85-99 (15 pages)
Mobile edge computing (MEC) emerges as a paradigm to free mobile devices (MDs) from increasingly dense computing workloads in 6G networks. The quality of the computing experience can be greatly improved by offloading computing tasks from MDs to MEC servers. Renewable energy harvested by energy harvesting equipment (EHQs) is considered a promising power supply for users to process and offload tasks. In this paper, we apply a uniform mobility model of MDs to derive a more realistic wireless channel model in a multi-user MEC system with batteries as EHQs to harvest and store energy. We investigate an optimization problem of the weighted sum of the delay cost and energy cost of MDs in the MEC system, and propose an effective joint partial computation offloading and resource allocation (CORA) algorithm based on deep reinforcement learning (DRL) that obtains the optimal scheduling without prior knowledge of task arrivals, renewable energy arrivals, or channel conditions. The simulation results verify the efficiency of the proposed algorithm, which minimizes the cost of MDs compared with other benchmarks.
Keywords: mobile edge computing; energy harvesting; device mobility; partial computation offloading; resource allocation; deep reinforcement learning
Download PDF
Joint Scheduling and Resource Allocation for Federated Learning in SWIPT-Enabled Micro UAV Swarm Networks (Cited by 2)
15
Authors: Wanli Wen, Yunjian Jia, Wenchao Xia — China Communications, SCIE, CSCD, 2022, Issue 1, pp. 119-135 (17 pages)
Micro-UAV swarms usually generate massive data when performing tasks. These data can be harnessed with various machine learning (ML) algorithms to improve the swarm's intelligence. To achieve this goal while protecting swarm data privacy, federated learning (FL) has been proposed as a promising enabling technology. During the FL model training process, a UAV may face energy scarcity due to its limited battery capacity. Fortunately, this issue can potentially be tackled via simultaneous wireless information and power transfer (SWIPT). However, the integration of SWIPT and FL brings new challenges to the system design that have yet to be addressed, which motivates our work. Specifically, in this paper, we consider a micro-UAV swarm network consisting of one base station (BS) and multiple UAVs, where the BS uses FL to train an ML model over the data collected by the swarm. During training, the BS broadcasts the model and energy simultaneously to the UAVs via SWIPT, and each UAV relies on its harvested and battery-stored energy to train the received model and then upload it to the BS for model aggregation. To improve the learning performance, we formulate the problem of maximizing the percentage of scheduled UAVs by jointly optimizing UAV scheduling and wireless resource allocation. The problem is a challenging mixed-integer nonlinear program and is NP-hard in general. By exploiting its special structure, we develop two algorithms that achieve the optimal and a suboptimal solution, respectively. Numerical results show that the suboptimal algorithm achieves near-optimal performance under various network setups and significantly outperforms the existing representative baselines.
Keywords: micro unmanned aerial vehicle; federated learning; simultaneous wireless information and power transfer; scheduling; resource allocation
Download PDF
Learning to Optimize for Resource Allocation in LTE-U Networks (Cited by 1)
16
Authors: Guanhua Chai, Weihua Wu, Qinghai Yang, Runzi Liu, Kyung Sup Kwak — China Communications, SCIE, CSCD, 2021, Issue 3, pp. 142-154 (13 pages)
This paper proposes a deep learning (DL) resource allocation framework to achieve harmonious coexistence between transceiver pairs (TPs) and Wi-Fi users in LTE-U networks. The nonconvex resource allocation task is treated as a constrained learning problem, and a deep neural network (DNN) is employed to approximate the optimal resource allocation decisions in an unsupervised manner. A parallel DNN framework is proposed to handle the two optimization variables of the problem: one unit performs licensed power allocation and the other determines the occupied fraction of unlicensed time. Besides, to guarantee the feasibility of the proposed algorithm, the Lagrange dual method is used to fold the constraints into the DNN training process. The dual variables and the DNN parameters are then updated alternately via batch-based gradient descent until the training process converges. Numerical results show that the proposed algorithm is feasible and outperforms other general algorithms.
Keywords: deep learning; resource allocation; LTE-U networks; Wi-Fi system
Download PDF
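The alternating primal-dual update described in the abstract above (Lagrangian relaxation with alternating updates of the dual variable and the primal parameters) can be shown on a toy constrained problem instead of a DNN: maximize log(1+p) subject to p <= p_max. The step sizes and the problem itself are illustrative assumptions, not the paper's formulation:

```python
def primal_dual(p_max=2.0, lr=0.05, steps=5000):
    """Alternating gradient ascent/descent on L(p, lam) = log(1+p) - lam*(p - p_max)."""
    p, lam = 0.5, 0.0
    for _ in range(steps):
        # primal ascent on the Lagrangian (d/dp log(1+p) = 1/(1+p))
        grad_p = 1.0 / (1.0 + p) - lam
        p = max(0.0, p + lr * grad_p)
        # dual ascent on the constraint violation, projected to lam >= 0
        lam = max(0.0, lam + lr * (p - p_max))
    return p, lam
```

The iterates converge to the saddle point p = p_max, lam = 1/(1+p_max), mirroring how the dual variable in the paper's scheme enforces feasibility during DNN training.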
Flow Shop Scheduling Problem with Convex Resource Allocation and Learning Effect (Cited by 1)
17
Authors: Xinna Geng, Jibo Wang, Chou-Jung Hsu — Journal of Computer and Communications, 2018, Issue 1, pp. 239-246 (8 pages)
In this paper, we consider the no-wait two-machine flow shop scheduling problem with convex resource allocation and a learning effect under common due date assignment. Taking the total earliness, tardiness, and common due date cost as the objective function, we find the optimal common due date, resource allocation, and job schedule that minimize the objective function under the constraint that the total resource is limited. The corresponding algorithm is given, and the problem is proved solvable in polynomial time.
Keywords: learning effect; no-wait; flow shop; convex resource allocation
Download PDF
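A common way to model the learning effect in this scheduling literature is position-based: the job scheduled in position r takes p_j * r^a time, with a < 0 so later jobs run faster. A minimal sketch (the default exponent is an illustrative assumption, not a value taken from the paper):

```python
def actual_times(base_times, a=-0.322):
    """Actual processing time of each job by schedule position r (1-indexed),
    under the position-based learning effect p_j(r) = p_j * r**a with a < 0."""
    return [p * (r ** a) for r, p in enumerate(base_times, start=1)]
```

For example, with a = -1.0 a job of base time 10 takes 10, 5, and 10/3 units in positions 1, 2, and 3, so the sequencing decision directly changes total completion time, which is why the schedule must be optimized jointly with the resource allocation.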
Multi-Objective Deep Reinforcement Learning Based Time-Frequency Resource Allocation for Multi-Beam Satellite Communications (Cited by 1)
18
Authors: Yuanzhi He, Biao Sheng, Hao Yin, Di Yan, Yingchao Zhang — China Communications, SCIE, CSCD, 2022, Issue 1, pp. 77-91 (15 pages)
Resource allocation is an important problem influencing the service quality of multi-beam satellite communications, where the available frequency bandwidth is limited, user requirements vary rapidly, and high service quality and joint allocation of multi-dimensional resources such as time and frequency are required. How to obtain a high comprehensive utilization rate of multi-dimensional resources, maximize the number of users and the system throughput, and rapidly adapt the allocation to a dynamically changing number of users under limited resources, using an efficient and fast resource allocation algorithm, is a difficult problem that urgently needs research. To solve the multi-dimensional resource allocation problem of multi-beam satellite communications, this paper establishes a multi-objective optimization model that jointly maximizes the number of users and the system throughput, and proposes a multi-objective deep reinforcement learning based time-frequency two-dimensional resource allocation (MODRL-TF) algorithm to adapt to a dynamically changing number of users and to timeliness requirements. Simulation results show that the proposed algorithm provides a higher comprehensive utilization rate of multi-dimensional resources, achieves multi-objective joint optimization, and obtains better timeliness than traditional heuristic algorithms such as the genetic algorithm (GA) and the ant colony optimization algorithm (ACO).
Keywords: multi-beam satellite communications; time-frequency resource allocation; multi-objective optimization; deep reinforcement learning
Download PDF
Deep Reinforcement Learning-Based Resource Allocation for D2D Communications in Heterogeneous Cellular Networks (Cited by 1)
19
Authors: Yuan Zhi, Jie Tian, Xiaofang Deng, Jingping Qiao, Dianjie Lu — Digital Communications and Networks, SCIE, CSCD, 2022, Issue 5, pp. 834-842 (9 pages)
Device-to-Device (D2D) communication-enabled Heterogeneous Cellular Networks (HCNs) have been a promising technology for satisfying the growing demands of smart mobile devices in fifth-generation mobile networks. The introduction of millimeter wave (mm-wave) communications into D2D-enabled HCNs allows higher system capacity and user data rates to be achieved. However, interference among cellular and D2D links remains severe due to spectrum sharing. In this paper, to guarantee user Quality of Service (QoS) requirements and effectively manage interference among users, we investigate the joint optimization of mode selection and channel allocation in D2D-enabled HCNs with mm-wave and cellular bands. The optimization problem is formulated as the maximization of the system sum-rate under QoS constraints of both cellular and D2D users in HCNs. To solve it, a distributed multi-agent deep Q-network algorithm is proposed, in which the reward function is redefined according to the optimization objective. In addition, to reduce signaling overhead, a partial information sharing strategy that does not observe global information is proposed for D2D agents to select the optimal mode and channel through learning. Simulation results illustrate that the proposed joint optimization algorithm converges well and achieves better system performance than existing schemes.
Keywords: deep reinforcement learning; heterogeneous cellular networks; device-to-device communication; millimeter wave communication; resource allocation
Download PDF
Efficient Virtual Resource Allocation in Mobile Edge Networks Based on Machine Learning (Cited by 2)
20
Authors: Li Li, Yifei Wei, Lianping Zhang, Xiaojun Wang — Journal of Cyber Security, 2020, Issue 3, pp. 141-150 (10 pages)
The rapid growth of Internet content, applications, and services requires more computing and storage capacity and higher bandwidth. Traditionally, Internet services are provided from the cloud (i.e., from far away) and consumed on increasingly smart devices, whereas edge computing and caching provide these services from nearby smart devices. Blending both approaches should combine the power of cloud services and the responsiveness of edge networks. This paper investigates how to intelligently use the caching and computing capabilities of edge nodes/cloudlets through artificial intelligence-based policies. We first analyze scenarios of mobile edge networks with edge computing and caching abilities, then design a paradigm of a virtualized edge network that includes an efficient way of isolating traffic flows in the physical network layer. We develop caching and communication resource virtualization in the virtual layer and formulate the dynamic resource allocation problem as a reinforcement learning model; with the proposed self-adaptive and self-learning management, more flexible, better-performing, and more secure network services are obtained at lower cost. Simulation results and analyses show that addressing cached contents at proper edge nodes through a trained model is more efficient than requesting them from the cloud.
Keywords: artificial intelligence; reinforcement learning; edge computing; edge caching; energy saving; resource allocation
Download PDF