The performance of massive MIMO systems relies heavily on the availability of Channel State Information at the Transmitter (CSIT). A large amount of work has been devoted to reducing the CSIT acquisition overhead at the pilot training and/or CSI feedback stage. In fact, downlink communication generally includes three stages, i.e., pilot training, CSI feedback, and data transmission. These three stages are mutually related and jointly determine the overall system performance. Unfortunately, few studies address the reduction of CSIT acquisition overhead from a global point of view. In this paper, we integrate Minimum Mean Square Error (MMSE) channel estimation, Random Vector Quantization (RVQ) based limited feedback, and Maximal Ratio Combining (MRC) precoding into a unified framework for investigating the resource allocation problem. In particular, we first approximate the covariance matrix of the quantization error with a simple expression and derive an analytical expression for the received Signal-to-Noise Ratio (SNR) based on deterministic equivalence theory. Then the problems oriented to three performance metrics (spectral efficiency, energy efficiency, and total energy consumption) are formulated analytically. Under practical system requirements, these three metrics can be collaboratively optimized. Finally, we propose an optimization solver to derive the optimal partition of the channel coherence time. Experimental results verify the benefits of the proposed resource allocation schemes under three different scenarios and illustrate the tradeoff of resource allocation among the three stages.
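To make the coherence-time partition concrete, the sketch below brute-forces the split of T symbols into pilot, feedback, and data phases under a generic effective-SNR model. The rate expression and both penalty terms are illustrative assumptions, not the paper's deterministic-equivalent derivation.

```python
# Hedged sketch: brute-force partition of the channel coherence time
# T (in symbols) into pilot (Tp), feedback (Tf), and data (Td) phases.
# The effective-rate model below is a generic placeholder, NOT the
# paper's deterministic-equivalent SNR expression.
import math

T, M, snr = 200, 64, 10.0          # coherence time, antennas, nominal SNR (assumed values)

def effective_rate(Tp, Tf, Td):
    est_gain = Tp / (Tp + 1.0)           # more pilots -> better MMSE estimate (toy model)
    quant_gain = 1.0 - 2.0 ** (-Tf / M)  # more feedback bits -> lower RVQ distortion (toy model)
    sinr = snr * est_gain * quant_gain
    return (Td / T) * math.log2(1.0 + sinr)  # only the data phase carries payload

best = max(((Tp, Tf, T - Tp - Tf)
            for Tp in range(1, T - 1)
            for Tf in range(1, T - Tp)),
           key=lambda s: effective_rate(*s))
print("best (pilot, feedback, data) split:", best)
```

Even with this toy model, the search exhibits the tradeoff the abstract describes: spending more of the coherence time on training or feedback raises the effective SINR but shrinks the data phase.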
A real-time adaptive role allocation method based on reinforcement learning is proposed to improve human-robot cooperation performance for a curtain wall installation task. This method breaks with the traditional idea that the robot is regarded as the follower, or that cooperation is adjusted only between leader and follower. In this paper, a self-learning method is proposed that can dynamically adapt and continuously adjust the initiative weight of the robot according to changes in the task. Firstly, the physical human-robot cooperation model, including the role factor, is built. Then, a reinforcement learning model that can adjust the role factor in real time is established, and a reward and action model is designed. The role factor is adjusted continuously according to the comprehensive performance of the human-robot interaction force and the robot's jerk during repeated installation. Finally, the role adjustment rule established above continuously improves the comprehensive performance. Experiments on dynamic role allocation and on the effect of the performance weighting coefficient on the result have been carried out. The results show that the proposed method can realize role adaptation and achieve the dual optimization goal of reducing the sum of the cooperator's force and the robot's jerk.
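As a toy illustration of the role-factor adjustment loop, the sketch below runs tabular Q-learning over a discretized role factor with a reward that penalizes a weighted sum of interaction force and jerk. The stand-in environment, the state/action discretization, and the weight w are all assumptions, not the paper's exact reward and action model.

```python
# Hedged sketch: tabular Q-learning over a discretized role factor.
# The reward penalizes a weighted sum of human-robot interaction
# force and robot jerk; measure() is a stand-in environment.
import random

ROLE_LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]   # discretized robot initiative weight
ACTIONS = [-1, 0, +1]                        # decrease / keep / increase the role factor
alpha, gamma, eps, w = 0.1, 0.9, 0.2, 0.5    # learning rate, discount, exploration, force/jerk weight
Q = {(s, a): 0.0 for s in range(len(ROLE_LEVELS)) for a in ACTIONS}

def measure(role):                           # stand-in for one installation trial
    force = (1.0 - role) ** 2 + random.gauss(0, 0.05)  # assumed: more initiative -> less human force
    jerk = role ** 2 + random.gauss(0, 0.05)           # assumed: more initiative -> more jerk
    return -(w * force + (1.0 - w) * jerk)   # reward = negative weighted cost

s = 2                                        # start at role factor 0.5
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[(s, x)])
    s2 = min(max(s + a, 0), len(ROLE_LEVELS) - 1)
    r = measure(ROLE_LEVELS[s2])
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
    s = s2
best = max(range(len(ROLE_LEVELS)), key=lambda i: max(Q[(i, x)] for x in ACTIONS))
print("learned role factor:", ROLE_LEVELS[best])
```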
With the rapid development of Network Function Virtualization (NFV), the problem of low resource utilization in traditional data centers is gradually being addressed. However, existing research does not optimize both local and global allocation of resources in data centers. Hence, we propose an adaptive hybrid optimization strategy that combines dynamic programming and neural networks to improve resource utilization and service quality in data centers. Our approach encompasses a service function chain simulation generator, a parallel-architecture service system, a dynamic programming strategy for maximizing the utilization of local server resources, a neural network for predicting the global utilization rate of resources, and a global resource optimization strategy for bottleneck and redundant resources. Simulations show that the combination of our local and global resource allocation strategies significantly improves system performance.
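The local dynamic-programming step can be pictured as a 0-1 knapsack over service function chains competing for one server's capacity. This is a hedged sketch with made-up demands and values, not the paper's exact formulation.

```python
# Hedged sketch: 0-1 knapsack DP that packs service function chains
# (each with a resource demand and a value) onto one server so that
# local resource utilization is maximized. Demands/values are made up.
def pack_server(capacity, demands, values):
    # dp[c] = best total value achievable with c units of capacity
    dp = [0] * (capacity + 1)
    for d, v in zip(demands, values):
        for c in range(capacity, d - 1, -1):   # reverse scan: each chain used at most once
            dp[c] = max(dp[c], dp[c - d] + v)
    return dp[capacity]

print(pack_server(10, demands=[3, 4, 5, 2], values=[4, 5, 6, 3]))  # -> 13
```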
Users and edge servers are not fully mutually trusted in mobile edge computing (MEC), and hence blockchain can be introduced to provide trustable MEC. In blockchain-based MEC, each edge server functions as a node in both MEC and the blockchain, processing users' tasks and then uploading the task-related information to the blockchain. That is, each edge server runs both users' offloaded tasks and blockchain tasks simultaneously. Note that there is a trade-off between the resource allocation for MEC and blockchain tasks. Therefore, the allocation of the resources of edge servers between the blockchain and the MEC is crucial for the processing delay of blockchain-based MEC. Most of the existing research tackles the problem of resource allocation in either blockchain or MEC alone, which leads to unfavorable performance of the blockchain-based MEC system. In this paper, we study how to allocate the computing resources of edge servers to the MEC and blockchain tasks with the aim of minimizing the total system processing delay. For this problem, we propose a computing resource Allocation algorithm for Blockchain-based MEC (ABM), which utilizes Slater's condition, the Karush-Kuhn-Tucker (KKT) conditions, partial derivatives of the Lagrangian function, and the subgradient projection method to obtain the solution. Simulation results show that ABM converges and effectively reduces the processing delay of blockchain-based MEC.
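A minimal sketch of the KKT/subgradient-projection idea: split one server's CPU frequency between the MEC workload and the blockchain workload, model each delay as load/frequency (a simple placeholder, not the paper's delay model), and recover the optimal split both in closed form and by projected subgradient steps on the dual variable.

```python
# Hedged sketch: split one edge server's CPU frequency F between the
# MEC workload L_m and the blockchain workload L_b to minimize total
# delay L_m/f_m + L_b/f_b (a simple processing-delay placeholder,
# not the paper's model). Stationarity gives f_i = sqrt(L_i / lam).
import math

F, L_m, L_b = 10.0, 4.0, 1.0

# Closed form from the KKT conditions: L_i / f_i**2 = lam for both tasks.
f_m = F * math.sqrt(L_m) / (math.sqrt(L_m) + math.sqrt(L_b))
f_b = F - f_m
print("KKT split:", f_m, f_b)            # -> 6.667, 3.333

# Same answer via projected subgradient on the dual variable lam >= 0.
lam, step = 1.0, 0.005
for _ in range(2000):
    g_m = math.sqrt(L_m / lam)           # primal minimizers for the current lam
    g_b = math.sqrt(L_b / lam)
    lam = max(1e-9, lam + step * (g_m + g_b - F))   # subgradient of the dual function
print("dual split:", math.sqrt(L_m / lam), math.sqrt(L_b / lam))
```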
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, and thus the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, and also consider a completion-time-based primary node selection to avoid monopolization by certain edge nodes and enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm; the total delay can be reduced by up to 40% compared with the non-cooperative case.
With the development of vehicles towards intelligence and connectivity, vehicular data is diversifying and growing dramatically. A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle (ICV) applications are proposed for a dispersed computing network composed of heterogeneous task vehicles and Network Computing Points (NCPs). Considering the amount of task data and the idle resources of NCPs, a computing resource scheduling model for NCPs is established. Taking the heterogeneous task execution delay threshold as a constraint, the optimization problem is described as maximizing the utilization of the computing resources of the NCPs. The problem is proven to be NP-hard by reduction to the 0-1 knapsack problem. A many-to-many matching algorithm based on resource preferences is proposed. The algorithm first establishes mutual preference lists based on the adaptability between the task requirements and the resources provided by the NCPs. This enables un-schedulable NCPs to be filtered out in the initial stage of matching, reducing the dimension of the solution space. To solve the matching problem between ICVs and NCPs, a new many-to-many matching algorithm is proposed to obtain a unique and stable optimal matching result. The simulation results demonstrate that the proposed scheme can improve the resource utilization of NCPs by an average of 9.6% compared to the reference scheme, and the total performance can be improved by up to 15.9%.
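To illustrate preference-list matching under capacity limits, the sketch below runs a deferred-acceptance-style loop: each task proposes down its preference list, and each NCP keeps only the highest-ranked tasks that fit its capacity. All preferences and capacities are made up; this is the flavor of the algorithm, not the paper's exact procedure.

```python
# Hedged sketch: preference-list matching between ICV tasks and NCPs,
# in the spirit of deferred acceptance. All data below is made up.
task_pref = {"t1": ["n1", "n2"], "t2": ["n1", "n2"], "t3": ["n2", "n1"]}
ncp_rank = {"n1": ["t2", "t1", "t3"], "n2": ["t1", "t3", "t2"]}   # lower index = preferred
capacity = {"n1": 1, "n2": 2}

free = list(task_pref)                 # tasks still proposing
held = {n: [] for n in ncp_rank}       # tentative assignments per NCP
nxt = {t: 0 for t in task_pref}        # next NCP index each task will try

while free:
    t = free.pop(0)
    if nxt[t] >= len(task_pref[t]):
        continue                       # task exhausted its list: stays unmatched
    n = task_pref[t][nxt[t]]
    nxt[t] += 1
    held[n].append(t)
    held[n].sort(key=ncp_rank[n].index)        # keep the NCP's favorites first
    while len(held[n]) > capacity[n]:
        free.append(held[n].pop())             # bump the least preferred task
print(held)   # -> {'n1': ['t2'], 'n2': ['t1', 't3']}
```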
Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional methods for task allocation can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-part invalid-detection task allocation framework. In the first step, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data. Compared to the baseline method, the model achieves an approximately 4% increase in F1 score on the public dataset. In the second step of the framework, task allocation is modeled using a bipartite graph matching method. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy; allocation efficiency is improved by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
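The KM (Kuhn-Munkres) matching step in the second stage can be illustrated with SciPy's Hungarian-algorithm implementation on a toy worker-task benefit matrix; the P-queue refinement itself is not reproduced here, and the benefit values are made up.

```python
# Hedged sketch: optimal bipartite task assignment with the
# Kuhn-Munkres (Hungarian) algorithm, as in the framework's second
# stage. SciPy minimizes cost, so the benefit matrix is negated.
import numpy as np
from scipy.optimize import linear_sum_assignment

benefit = np.array([[9, 2, 7],     # benefit[i][j]: worker i on task j
                    [6, 4, 3],
                    [5, 8, 1]])
rows, cols = linear_sum_assignment(-benefit)    # maximize total benefit
print(list(zip(rows, cols)), benefit[rows, cols].sum())   # -> total benefit 21
```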
To improve the efficiency and fairness of spectrum allocation for ground communication assisted by unmanned aerial vehicles (UAVs), a joint optimization method for on-demand deployment and spectrum allocation of UAVs is proposed, modeled as a mixed-integer non-convex optimization problem (MINCOP). An algorithm to estimate the minimum number of required UAVs is first proposed, based on pre-estimation and simulated annealing. The MINCOP is then decomposed into three sub-problems based on the block coordinate descent method: the spectrum allocation of UAVs, the association between UAVs and ground users, and the deployment of UAVs. Specifically, the optimal spectrum allocation is derived based on interference mitigation and channel reuse. The association between UAVs and ground users is optimized by local iterated optimization. A particle-based optimization algorithm is proposed to solve the UAV deployment subproblem. Simulation results show that the proposed method can effectively improve the minimum transmission rate of UAVs as well as the user fairness of spectrum allocation.
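The block coordinate descent decomposition can be pictured as a loop that cycles through the three sub-problems, holding the other two blocks fixed, until the objective stops improving. The solvers below are placeholders for the paper's sub-problem methods, not real implementations.

```python
# Hedged sketch: block coordinate descent skeleton for the three
# sub-problems (spectrum allocation, UAV-user association, UAV
# deployment). Each solve_* argument is a placeholder callable.
def bcd(objective, solve_spectrum, solve_association, solve_deployment,
        spectrum, assoc, positions, tol=1e-4, max_iter=50):
    prev = objective(spectrum, assoc, positions)
    for _ in range(max_iter):
        spectrum = solve_spectrum(assoc, positions)        # other blocks fixed
        assoc = solve_association(spectrum, positions)     # other blocks fixed
        positions = solve_deployment(spectrum, assoc)      # other blocks fixed
        cur = objective(spectrum, assoc, positions)
        if abs(cur - prev) < tol:                          # converged
            break
        prev = cur
    return spectrum, assoc, positions
```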
Inter-datacenter elastic optical networks (EONs) need to serve cloud computing requests that require not only connectivity and computing resources but also network survivability. In this paper, to realize the joint allocation of computing and connectivity resources in survivable inter-datacenter EONs, a survivable routing, modulation level, spectrum, and computing resource allocation (SRMLSCRA) algorithm and three datacenter selection strategies, i.e., Computing Resource First (CRF), Shortest Path First (SPF), and Random Destination (RD), are proposed for different scenarios. Unicast and manycast are applied to the communication of computing requests, and the corresponding routing strategies are calculated respectively. Simulation results show that SRMLSCRA-CRF serves the largest number of protected computing tasks, and its blocking probability for computing requests is reduced by 29.2%, 28.3%, and 30.5% compared with SRMLSCRA-SPF, SRMLSCRA-RD, and the benchmark EPS-RMSA algorithm, respectively. Therefore, it is more applicable to networks with heavy computing loads. Besides, SRMLSCRA-SPF consumes the least spectrum, exhibiting its suitability for scenarios where the amount of computation is small and communication resources are scarce. The results demonstrate that the proposed methods realize the joint allocation of computing and connectivity resources, provide efficient protection for services under single-link failure, and occupy less spectrum.
In this paper, we optimize the spectrum efficiency (SE) of an uplink massive multiple-input multiple-output (MIMO) system with imperfect channel state information (CSI) over a Rayleigh fading channel. The SE optimization problem is formulated under the constraints of maximum power and minimum rate for each user. Then, we develop a near-optimal power allocation (PA) scheme by using the successive convex approximation (SCA) method, the Lagrange multiplier method, and the block coordinate descent (BCD) method; it obtains almost the same SE as the benchmark scheme with lower complexity. Since this scheme needs three layers of iteration, a suboptimal PA scheme is developed to further reduce the complexity, where the characteristic of massive MIMO (i.e., numerous receive antennas) is utilized for convex reformulation, and the rate constraint is converted into linear constraints. This suboptimal scheme needs only a single layer of iteration and thus has lower complexity than the near-optimal scheme. Finally, we jointly design the pilot power and data power to further improve the performance, and propose a two-stage algorithm to obtain the joint PA. Simulation results verify the effectiveness of the proposed schemes, and superior SE performance is achieved.
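A minimal sketch of the SCA idea on a two-user uplink toy problem: each rate log2(1 + SINR) is split into a concave part and a convex part, the convex part is linearized at the current point, and the resulting concave surrogate is solved repeatedly. The channel gains, budgets, and the omission of the minimum-rate constraint are all simplifying assumptions.

```python
# Hedged sketch: successive convex approximation (SCA) for uplink
# power allocation. With SINR_k = g_k p_k / (1 + I_k), the rate is
# log2(1 + g_k p_k + I_k) - log2(1 + I_k); the second (convex) term
# is replaced by its first-order Taylor bound at the current point.
import numpy as np
from scipy.optimize import minimize

g = np.array([1.0, 0.4])       # effective channel gains (assumed)
p_max = np.array([1.0, 1.0])   # per-user power budgets (assumed)

def interference(p):           # I_k = sum_{j != k} g_j p_j (unit noise power)
    return g @ p - g * p

def surrogate_neg(p, p_t):
    it, i = interference(p_t), interference(p)
    rate = (np.log2(1 + g * p + i)                       # concave in p (affine argument)
            - (np.log2(1 + it) + (i - it) / ((1 + it) * np.log(2))))  # linearized part
    return -rate.sum()

p = 0.5 * p_max
for _ in range(20):            # SCA outer loop: solve surrogate, re-linearize
    p = minimize(surrogate_neg, p, args=(p,),
                 bounds=[(1e-6, pm) for pm in p_max]).x
print("SCA powers:", p)
```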
To meet communication services with diverse requirements, dynamic resource allocation has become increasingly important. In this paper, we consider multi-slot and multi-user resource allocation (MSMU-RA) in a downlink cellular scenario with the aim of maximizing system spectral efficiency while guaranteeing user fairness. We first model the MSMU-RA problem as a dual-sequence decision-making process, and then solve it with a novel Transformer-based deep reinforcement learning (TDRL) approach. Specifically, the proposed TDRL approach rests on two aspects: 1) to adapt to the dynamic wireless environment, the proximal policy optimization (PPO) algorithm is used to optimize the multi-slot RA strategy; 2) to avoid co-channel interference, the Transformer-based PPO algorithm is presented to obtain the optimal multi-user RA scheme by exploring the mapping between the user sequence and the resource sequence. Experimental results show that: i) the proposed approach outperforms both traditional and DRL methods in spectral efficiency and user fairness; ii) the proposed algorithm is superior to other DRL approaches in terms of convergence speed and generalization performance.
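The PPO ingredient of the TDRL approach centers on the clipped surrogate objective; a minimal NumPy sketch of that loss follows, with the Transformer policy and the RA environment omitted and all values illustrative.

```python
# Hedged sketch: PPO's clipped surrogate objective, the update rule
# at the core of the TDRL approach (policy network and environment
# omitted). Values are illustrative only.
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    ratio = np.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1 - eps, 1 + eps)
    # pessimistic bound: take the smaller improvement estimate
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))

adv = np.array([1.0, -0.5, 2.0])
print(ppo_clip_loss(np.log([0.4, 0.3, 0.25]), np.log([0.3, 0.35, 0.2]), adv))
```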
Using Unmanned Aerial Vehicles (UAVs) as aerial base stations to provide communication services for ground users is a flexible and cost-effective paradigm in B5G. Besides, dynamic resource allocation and multi-connectivity can be adopted to further harness the potential of UAVs in improving communication capacity; in such situations, the interference among users becomes a pivotal obstacle requiring effective solutions. To this end, we investigate the Joint UAV-User Association, Channel Allocation, and transmission Power Control (J-UACAPC) problem in a multi-connectivity-enabled UAV network with constrained backhaul links, where each UAV can determine the reusable channels and transmission power to serve the selected ground users. The goal is to mitigate co-channel interference while maximizing long-term system utility. The problem is modeled as a cooperative stochastic game with a hybrid discrete-continuous action space, and a Multi-Agent Hybrid Deep Reinforcement Learning (MAHDRL) algorithm is proposed to address it. Extensive simulation results demonstrate the effectiveness of the proposed algorithm and show that it achieves higher system utility than the baseline methods.
The cloud platform has limited defense resources to fully protect the edge servers used to process crowd-sensing data in the Internet of Things. To guarantee the network's overall security, we present a network defense resource allocation method based on multi-armed bandits to maximize the network's overall benefit. Firstly, we propose a method for dynamically setting node defense resource thresholds, from which the benefit functions and distributions of the defender (edge servers) and the attacker (nodes) are obtained. Secondly, we design a defense resource sharing mechanism for neighboring nodes to obtain the defense capability of each node. Subsequently, we use the decomposability and Lipschitz continuity of the defender's total expected utility to reduce the difference between the utility's discrete and continuous arms, and analyze the difference theoretically. Finally, experimental results show that the method maximizes the defender's total expected utility and reduces the difference between the discrete and continuous arms of the utility.
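The bandit component can be illustrated with the standard UCB1 index, which balances exploiting the defense allocation with the best observed utility against exploring under-sampled allocations. The Bernoulli reward model below is a stand-in, not the paper's benefit function.

```python
# Hedged sketch: UCB1 over a discrete set of candidate defense
# resource allocations ("arms"). The Bernoulli reward model is a
# stand-in for the defender's utility.
import math, random

true_utility = [0.3, 0.55, 0.7, 0.5]        # unknown mean utility per allocation (assumed)
counts = [0] * len(true_utility)
means = [0.0] * len(true_utility)

for t in range(1, 5001):
    if 0 in counts:
        arm = counts.index(0)               # play every arm once first
    else:
        arm = max(range(len(counts)),
                  key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < true_utility[arm] else 0.0
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # incremental mean update
print("pulls per allocation:", counts)       # the 0.7 arm should dominate
```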
In Beyond the Fifth Generation (B5G) heterogeneous edge networks, numerous users are multiplexed on a channel or served on the same frequency resource block, in which case the transmitter applies coding and the receiver uses interference cancellation. Unfortunately, uncoordinated radio resource allocation can reduce system throughput and lead to user inequity. For this reason, in this paper, channel allocation and power allocation problems are formulated to maximize the system sum rate and the minimum user achievable rate. Since the resulting model is non-convex and the decision variables are high-dimensional, a distributed Deep Reinforcement Learning (DRL) framework called distributed Proximal Policy Optimization (PPO) is proposed to allocate the resources. Specifically, several simulated agents are trained in a heterogeneous environment to find robust behaviors that perform well in channel assignment and power allocation. Moreover, agents in the collection stage may slow down, which hinders the learning of the other agents. Therefore, a preemption strategy is further proposed to optimize distributed PPO, forming DP-PPO and successfully mitigating the straggler problem. The experimental results show that the proposed DP-PPO mechanism improves performance over other DRL methods.
Quantum key distribution (QKD) is a technology that can resist the threat posed by quantum computers to existing conventional cryptographic protocols. However, due to the stringent requirements of the quantum key generation environment, the generated quantum keys are considered valuable, and the slow key generation rate conflicts with the high-speed data transmission of traditional optical networks. In this paper, for a QKD network with trusted relays, which is mainly based on point-to-point quantum keys and whose network resources change in complex ways, we aim to allocate resources reasonably for data packet distribution. Firstly, we formulate a linear programming constraint model for the key resource allocation (KRA) problem based on time-slot scheduling. Secondly, we propose a new scheduling scheme based on graded key security requirements (GKSR) and a new micro-log key storage algorithm for effective storage and management of key resources. Finally, we propose a key resource consumption (KRC) routing optimization algorithm to properly allocate time slots, routes, and key resources. Simulation results show that the proposed scheme significantly improves the key distribution success rate and the key resource utilization rate, among other metrics.
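A time-slot-based key allocation LP can be sketched with scipy.optimize.linprog: maximize the served fraction of packet-distribution requests subject to per-slot key budgets. The sizes, budgets, and single-constraint structure are made up; the paper's full KRA constraint set is omitted.

```python
# Hedged sketch: an LP in the spirit of the key resource allocation
# (KRA) model. x[r] is the served fraction of request r; each request
# consumes keys in one time slot, and every slot has a key budget.
import numpy as np
from scipy.optimize import linprog

key_need = np.array([5.0, 8.0, 3.0, 6.0])   # keys consumed per request (assumed)
slot_of = [0, 0, 1, 1]                       # which time slot each request uses
budget = np.array([10.0, 7.0])               # key budget per slot (assumed)

A = np.zeros((len(budget), len(key_need)))   # per-slot key-consumption rows
for r, s in enumerate(slot_of):
    A[s, r] = key_need[r]

res = linprog(c=-np.ones(len(key_need)),     # maximize total served fraction
              A_ub=A, b_ub=budget, bounds=[(0, 1)] * len(key_need))
print("served fractions:", res.x.round(3))
```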
Aiming at the problems of low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise to improve the efficiency of DTA. The algorithm builds on the traditional Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm: a dual-noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a dual experience pool is introduced to improve data utilization. At the same time, in order to accelerate the training speed and efficiency of the agents and to solve the cold-start problem of training, prior-knowledge techniques are applied to the training of the algorithm. Finally, the MADDPG-D2 algorithm is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by the MADDPG-D2 algorithm achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The MADDPG-D2 algorithm based on the multi-agent architecture proposed in this paper thus shows clear advantages for DTA.
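The dual-experience-pool idea can be sketched as two buffers, one for ordinary transitions and one for high-reward transitions, sampled in a fixed mix at training time. The reward threshold and mixing ratio below are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of a dual experience replay pool: ordinary and
# high-reward transitions are stored separately, and minibatches mix
# the two pools at a fixed ratio.
import random
from collections import deque

class DualReplayPool:
    def __init__(self, capacity=10000, reward_threshold=1.0, good_frac=0.3):
        self.normal = deque(maxlen=capacity)
        self.good = deque(maxlen=capacity)
        self.reward_threshold = reward_threshold
        self.good_frac = good_frac

    def push(self, transition):                 # transition = (s, a, r, s2, done)
        pool = self.good if transition[2] >= self.reward_threshold else self.normal
        pool.append(transition)

    def sample(self, batch_size):
        n_good = min(int(batch_size * self.good_frac), len(self.good))
        n_norm = min(batch_size - n_good, len(self.normal))
        return random.sample(self.good, n_good) + random.sample(self.normal, n_norm)

pool = DualReplayPool()
for i in range(100):
    pool.push((i, 0, random.uniform(0, 2), i + 1, False))
print(len(pool.sample(32)))
```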
With the rapid development of urban rail transit, existing track detection suffers from problems such as low efficiency and insufficient detection coverage, so an intelligent and automatic track detection method based on UAVs is urgently needed to avoid major safety accidents. At the same time, the geographical distribution of IoT devices results in inefficient use of the significant computing potential held by a large number of devices. The Dispersed Computing (DCOMP) architecture enables collaborative computing between devices in the Internet of Everything (IoE), promotes low-latency, efficient wide-area applications, and meets users' growing needs for computing performance and service quality. This paper focuses on the resource allocation challenge in a dispersed computing environment for UAV track inspection. The system takes into account both resource constraints and computational constraints, and transforms the optimization problem into an energy minimization problem with computational constraints. A Markov Decision Process (MDP) model is employed to capture the connection between the dispersed computing resource allocation strategy and the system environment. Subsequently, a method based on the Double Deep Q-Network (DDQN) is introduced to derive the optimal policy, and an experience replay mechanism is implemented to tackle the issue of increasing dimensionality. Experimental simulations validate the efficacy of the method across various scenarios.
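The DDQN ingredient is the decoupled target: the online network selects the next action and the target network evaluates it. A minimal NumPy sketch of that target computation follows, with the two Q tables standing in for the networks and all values made up.

```python
# Hedged sketch of the Double DQN target: the online network picks
# the argmax action, the target network evaluates it.
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    a_star = int(np.argmax(q_online_next))       # action selection: online net
    bootstrap = 0.0 if done else gamma * q_target_next[a_star]  # evaluation: target net
    return reward + bootstrap

q_online_next = np.array([1.2, 3.4, 0.7])        # Q_online(s', .)  (made up)
q_target_next = np.array([1.0, 2.9, 0.9])        # Q_target(s', .)  (made up)
print(ddqn_target(reward=0.5, q_online_next=q_online_next,
                  q_target_next=q_target_next))   # -> 0.5 + 0.99 * 2.9
```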
In this paper, we propose a two-way Deep Reinforcement Learning (DRL) based resource allocation algorithm, which solves the resource allocation problem in a cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than the other baseline schemes.
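The Lagrange-multiplier step can be sketched on a toy version of the allocation subproblem: minimizing the sum of workload/frequency delays under a total-capacity constraint gives allocations proportional to the square root of each workload, with the multiplier found by bisection. The workloads and capacity below are made up, and this placeholder delay model is not the paper's formulation.

```python
# Hedged sketch: computing resource allocation by the Lagrange
# multiplier method. Minimizing sum_i c_i / f_i subject to
# sum_i f_i = F gives f_i = sqrt(c_i / lam); bisection finds the
# multiplier lam that exactly exhausts the capacity.
import math

c = [2.0, 5.0, 1.0]      # task workloads (assumed)
F = 12.0                  # total computing capacity of the node (assumed)

def total_alloc(lam):
    return sum(math.sqrt(ci / lam) for ci in c)

lo, hi = 1e-9, 1e9        # total_alloc is decreasing in lam
for _ in range(100):      # bisection on the multiplier
    lam = (lo + hi) / 2
    lo, hi = (lam, hi) if total_alloc(lam) > F else (lo, lam)
f = [math.sqrt(ci / lam) for ci in c]
print("allocation:", [round(x, 3) for x in f], "sum =", round(sum(f), 3))
```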
As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of task allocation and performance management techniques employed in cloud data centers, systematically categorizing and organizing previous research by identifying the cloud computing methodologies, categories, and gaps. A literature review was conducted, which included the analysis of 463 task allocation papers and 480 performance management papers. The review revealed three task allocation research topics and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality-of-service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance cloud computing task allocation and performance management, and the shortcomings identified in each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability, and the innovative methodologies surveyed can steer future research to fill gaps in the literature.