This research is the first application of Unmanned Aerial Vehicles (UAVs) equipped with Multi-access Edge Computing (MEC) servers to offshore wind farms, providing a new task offloading solution to address the challenge of scarce edge servers in offshore wind farms. The proposed strategy is to offload the computational tasks in this scenario to other MEC servers and compute them proportionally, which effectively reduces the computational pressure on local MEC servers when wind turbine data are abnormal. Finally, the task offloading problem is modeled as a multi-agent deep reinforcement learning problem, and a task offloading model based on Multi-Agent Deep Reinforcement Learning (MADRL) is established. An Adaptive Genetic Algorithm (AGA) is used to explore the action space of the Deep Deterministic Policy Gradient (DDPG) algorithm, which effectively addresses the slow convergence of DDPG in high-dimensional action spaces. The simulation results show that the proposed algorithm, AGA-DDPG, saves approximately 61.8%, 55%, 21%, and 33% of the overall overhead compared to local MEC, random offloading, TD3, and DDPG, respectively. The proposed strategy is potentially important for improving real-time monitoring, big data analysis, and predictive maintenance of offshore wind farm operation and maintenance systems.
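As a rough illustration of the AGA-DDPG idea described above, the sketch below (not taken from the paper) uses a small genetic search, with the critic's Q-value as the fitness function, to refine the actor's proposed action before execution; the population size, mutation scale, and adaptive-mutation rule are illustrative assumptions.

```python
import numpy as np

def aga_explore(actor, critic, state, pop_size=16, generations=3,
                sigma=0.2, action_low=-1.0, action_high=1.0):
    """Sketch: refine the DDPG actor's action with a tiny genetic search.

    The critic's Q-value serves as fitness; mutation strength shrinks for
    above-average individuals (an assumed adaptive rule)."""
    base = actor(state)                                    # deterministic policy output
    pop = np.clip(base + sigma * np.random.randn(pop_size, base.shape[-1]),
                  action_low, action_high)                 # initial perturbed population
    for _ in range(generations):
        fitness = np.array([critic(state, a) for a in pop])
        order = np.argsort(fitness)[::-1]                  # best first
        parents = pop[order[: pop_size // 2]]              # selection
        children = []
        for p in parents:
            # adaptive mutation: elite parents are perturbed less strongly
            scale = sigma * 0.5 if critic(state, p) > fitness.mean() else sigma
            children.append(np.clip(p + scale * np.random.randn(*p.shape),
                                    action_low, action_high))
        pop = np.concatenate([parents, np.array(children)], axis=0)
    fitness = np.array([critic(state, a) for a in pop])
    return pop[int(np.argmax(fitness))]                    # action actually executed
```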
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is also frequently assumed that vehicles can be accurately modeled during actual motion. However, in dynamic vehicular environments, both the tasks generated by vehicles and the vehicles' surroundings are constantly changing, making real-time modeling of actual dynamic vehicular network scenarios difficult. Taking actual dynamic vehicular scenarios into account, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle vehicular network scenarios, aiming to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, which is formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to obtain the optimal solution of the optimization problem. Simulation results show that the proposed algorithm achieves lower-latency task computation offloading. Meanwhile, the average task completion time of the proposed algorithm can be improved by 7.6% compared to the MADDPG scheme and 51.1% compared to deep deterministic policy gradient (DDPG).
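The communication-range constraint mentioned above can be pictured with the following hedged sketch: the decision step checks whether a non-uniformly moving vehicle stays inside the serving node's coverage long enough to finish uploading the task. The constant-acceleration motion model and all variable names are assumptions rather than the paper's exact formulation.

```python
def offload_feasible(x0, v, a, coverage_radius, task_bits, rate_bps):
    """True if the vehicle is still inside coverage when the upload ends.

    x0   : signed position of the vehicle along the road, relative to the serving node (m)
    v, a : current speed (m/s) and acceleration (m/s^2) -- non-uniform motion
    """
    t_tx = task_bits / rate_bps                   # upload time of the offloaded task
    x_end = x0 + v * t_tx + 0.5 * a * t_tx ** 2   # assumed constant-acceleration model
    return abs(x_end) <= coverage_radius
```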
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, and thus the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, and also consider a completion-time-based primary node selection to avoid monopolization by any single edge node and enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm, and the total delay can be reduced by up to 40% compared with the non-cooperative case.
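A minimal sketch of how reputation- and completion-time-based node selection could look is given below; the data fields and the rotation rule are illustrative assumptions, not the paper's consensus protocol.

```python
def select_consensus_nodes(ecs_list, k):
    """Pick the k highest-reputation ECSs as consensus nodes (sketch)."""
    return sorted(ecs_list, key=lambda n: n["reputation"], reverse=True)[:k]

def select_primary(consensus_nodes, history):
    """Rotate the primary toward nodes with short recent completion times,
    skipping whoever served as primary last round to avoid monopolization
    (illustrative rule)."""
    last_primary = history[-1] if history else None
    candidates = [n for n in consensus_nodes if n["id"] != last_primary]
    candidates = candidates or consensus_nodes    # fallback for tiny committees
    return min(candidates, key=lambda n: n["avg_completion_time"])
```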
Vehicular edge computing (VEC) is emerging as a promising paradigm to meet the requirements of compute-intensive applications in the Internet of Vehicles (IoV). Non-orthogonal multiple access (NOMA) has advantages in improving spectrum efficiency and dealing with bandwidth scarcity and cost, so combining VEC and NOMA is an encouraging direction. In this paper, we jointly optimize the task offloading decision and resource allocation to maximize the service utility of the NOMA-VEC system. To solve the optimization problem, we propose a multi-agent deep graph reinforcement learning algorithm. The algorithm extracts the topological features and relationship information between agents from the system state as observations, and outputs the task offloading decision and resource allocation simultaneously with a local policy network, which is updated by a local learner. Simulation results demonstrate that the proposed method achieves a 1.52%–5.80% improvement over the benchmark algorithms in system service utility.
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they do not have enough available memory and processing capacity. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as Edge Data Centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, different offloading destinations may be more appropriate for different tasks. At the same time, establishing an optimal offloading policy that ensures all tasks are executed within the required latency while avoiding excessive workload on specific computing centers is not easy. This study presents two alternatives to solve the offloading decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives on a well-known edge computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution. In terms of energy efficiency, they provided similar results. Finally, the success rates of the different computing centers are tested, and the lack of capacity of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local networking environment are unique in that they emulate the state and structure of the environment in an innovative way, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques for such a dynamic network environment is demonstrated, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional methods for task allocation can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-part invalid detection task allocation framework. In the first step, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data. Compared to the baseline method, the model achieves an approximately 4% increase in the F1 value on the public dataset. In the second step of the framework, task allocation modeling is performed using a bipartite graph matching method. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy. The allocation efficiency is improved by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
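The second-stage matching is a maximum-weight bipartite assignment, which is exactly the problem the KM (Kuhn-Munkres) algorithm solves; the sketch below computes the same assignment with SciPy's Hungarian-method routine, after anomalous nodes have been filtered out, and does not reproduce the paper's P-queue refinement.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_tasks(score):
    """score[i, j]: suitability of assigning task i to worker j (anomalous
    workers already removed). Returns (task_idx, worker_idx) pairs of a
    maximum-weight bipartite matching."""
    rows, cols = linear_sum_assignment(score, maximize=True)
    return list(zip(rows.tolist(), cols.tolist()))

# usage: 3 tasks, 4 workers
pairs = allocate_tasks(np.array([[0.9, 0.1, 0.4, 0.3],
                                 [0.2, 0.8, 0.5, 0.1],
                                 [0.3, 0.2, 0.7, 0.6]]))
```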
To address the low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper builds on deep reinforcement learning (DRL) theory and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise, based on a multi-agent architecture, to improve the efficiency of DTA. The algorithm is based on the traditional Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm: a double noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a double experience pool is introduced to improve data utilization. At the same time, to accelerate the training speed and efficiency of the agents and to solve the cold-start problem of training, prior knowledge is applied to the training of the algorithm. Finally, the MADDPG-D2 algorithm is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by the MADDPG-D2 algorithm achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The MADDPG-D2 algorithm based on a multi-agent architecture proposed in this paper shows clear superiority and rationality in DTA.
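To make the double-noise and dual-experience-pool ideas concrete, here is a hedged sketch; the noise types (Gaussian plus Ornstein-Uhlenbeck), the decay schedule, and the reward threshold are assumptions rather than the paper's exact design.

```python
import numpy as np

class DualNoiseExplorer:
    """Sketch of a double-noise perturbation: uncorrelated Gaussian noise plus a
    temporally correlated Ornstein-Uhlenbeck term, both annealed over time."""
    def __init__(self, dim, sigma_g=0.2, sigma_ou=0.15, theta=0.15, decay=0.999):
        self.dim, self.sigma_g, self.sigma_ou = dim, sigma_g, sigma_ou
        self.theta, self.decay, self.ou_state = theta, decay, np.zeros(dim)

    def perturb(self, action):
        self.ou_state += -self.theta * self.ou_state + self.sigma_ou * np.random.randn(self.dim)
        noisy = action + self.sigma_g * np.random.randn(self.dim) + self.ou_state
        self.sigma_g *= self.decay            # shrink exploration as training proceeds
        self.sigma_ou *= self.decay
        return np.clip(noisy, -1.0, 1.0)

def store(transition, reward, buffer_hi, buffer_lo, threshold):
    """Dual experience pools (sketch): high-reward transitions go to a separate
    buffer that can be sampled more often during updates."""
    (buffer_hi if reward >= threshold else buffer_lo).append(transition)
```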
Introduction: The uncontrolled management of waste electrical and electronic equipment (W3E) causes respiratory problems in the handlers of this waste. The objective was to study the tasks associated with respiratory symptoms in W3E handlers. Methods: The study was cross-sectional with an analytical focus on W3E handlers in the informal sector in Ouagadougou. A peer-validated questionnaire collected data on a sample of 161 handlers. Results: The most common W3E processing tasks were the purchase or sale of W3E (67.70%), its repair (39.75%) and its collection (31.06%). The prevalence of cough was 21.74%, that of wheezing 14.91%, phlegm 12.50% and dyspnea at rest 10.56%. In bivariate analysis, there were significant associations at the 5% level between W3E repair and phlegm (p-value = 0.044), between W3E burning and wheezing (p-value = 0.011) and between W3E and cough (p-value = 0.01). The final logistic regression models suggested that the burning of W3E and the melting of lead batteries were risk factors for the occurrence of cough, with respective prevalence ratios of 4.57 and 4.63. Conclusion: Raising awareness of the wearing of personal protective equipment, in particular suitable masks, among W3E handlers, with priority for those dedicated to the burning of electronic waste and the melting of lead, could reduce the risk of occurrence of respiratory symptoms.
Pooling, unpooling/specialization, and discretionary task completion are typical operational strategies in queueing systems that arise in healthcare, call centers, and online sales. These strategies may have advantages and disadvantages in different operational environments. This paper uses the M/M/1 and M/M/2 queues to study the impact of pooling, specialization, and discretionary task completion on the average queue length. Closed-form solutions for the average M/M/2 queue length are derived. Computational examples illustrate how the average queue length changes with the strength of pooling, specialization, and discretionary task completion. Finally, several conjectures are made in the paper.
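For orientation, the textbook closed forms underlying the basic pooling comparison are shown below (average number of jobs in system, with per-server utilization ρ = λ/μ < 1); the paper's own contribution is the M/M/2 analysis under specialization and discretionary task completion, which is not reproduced here.

```latex
% Baseline: two separate M/M/1 queues, each with arrival rate \lambda and service rate \mu
L_{2 \times M/M/1} = 2\,\frac{\rho}{1-\rho}, \qquad \rho = \frac{\lambda}{\mu} < 1
% Pooled system: one M/M/2 queue serving the combined arrival stream 2\lambda
L_{M/M/2} = \frac{2\rho}{1-\rho^{2}} = \frac{1}{1+\rho}\,L_{2 \times M/M/1}
% so full pooling reduces the expected number in system by the factor 1/(1+\rho).
```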
With the development of vehicles towards intelligence and connectivity, vehicular data is diversifying and growing dramatically. A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle (ICV) applications are proposed for a dispersed computing network composed of heterogeneous task vehicles and Network Computing Points (NCPs). Considering the amount of task data and the idle resources of NCPs, a computing resource scheduling model for NCPs is established. Taking the heterogeneous task execution delay threshold as a constraint, the optimization problem is described as maximizing the utilization of the computing resources of the NCPs. The problem is proven to be NP-hard by reduction from the 0-1 knapsack problem. A many-to-many matching algorithm based on resource preferences is proposed. The algorithm first establishes mutual preference lists based on the adaptability between the task requirements and the resources provided by the NCPs. This enables un-schedulable NCPs to be filtered out in the initial stage of matching, reducing the dimension of the solution space. To solve the matching problem between ICVs and NCPs, a new many-to-many matching algorithm is proposed to obtain a unique and stable optimal matching result. The simulation results demonstrate that the proposed scheme can improve the resource utilization of NCPs by an average of 9.6% compared to the reference scheme, and the total performance can be improved by up to 15.9%.
Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to offer paragraph-level semantic information for enhancing factual correctness. The challenge lies in effectively generating factual text using sentence-level variational autoencoder-based models. In this paper, a novel model called fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network. By training the conditional variational autoencoder network, the model is enabled to generate text based on input facts. Building upon this foundation, the input text is passed to the discriminator along with the generated text. By employing adversarial training, the model is encouraged to generate text that is indistinguishable to the discriminator, thereby enhancing the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, an entailment recognition task is introduced and trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is added to the loss of our model, forcing the generator to generate text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and factual correctness of the generated text while sacrificing only a small amount of diversity. Furthermore, under a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
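One plausible way the entailment-based penalty could enter the training objective is sketched below; the weighting scheme and term names are assumptions, not the paper's actual loss.

```python
def total_loss(recon, kl, adv, entailment_prob, beta=1.0, gamma=0.5, delta=0.5):
    """Sketch of a combined objective: reconstruction + KL (CVAE terms), an
    adversarial term from the discriminator, and a penalty that grows when the
    entailment recognizer judges the generated text NOT entailed by the facts.
    All weights are illustrative assumptions."""
    entailment_penalty = 1.0 - entailment_prob   # low entailment probability -> large penalty
    return recon + beta * kl + gamma * adv + delta * entailment_penalty
```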
Based on the wave attack task planning method for static complex environments and the rolling optimization framework, an online task planning method for dynamic complex environments based on rolling optimization is proposed. In this method, online task planning is triggered by events, including target information update events, new target addition events, target failure events, and weapon failure events, and the planning methods include defense area reanalysis, parameter space update, and mission re-planning. Simulations are conducted for different events. The results show that the index value of the attack scenario after re-planning is better than before re-planning; according to the probability distribution obtained by the statistical simulation method, the index value distribution after re-planning clearly lies in the region of high index values, and the index value gap before and after re-planning is related to the degree of posture change.
In response to the uncertainty of information about the injured in post-disaster situations, and considering constraints such as random chance and the quantity of rescue resources, the split delivery vehicle routing problem with stochastic demands (SDVRPSD) model and the multi-depot split delivery heterogeneous vehicle routing problem with stochastic demands (MDSDHVRPSD) model are established. A two-stage hybrid variable neighborhood tabu search algorithm is designed for unmanned vehicle task planning to minimize the path cost of rescue plans. Simulation experiments show that the solutions obtained by the algorithm can effectively reduce the rescue vehicle path cost and the rescue task completion time, with high optimization quality and a degree of portability.
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that the MEC's computing resources are unlimited, and there is a lack of research on optimizing task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, and do not support multiple retries in subsequent time slots. As a result, TDs miss potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in the subsequent time slot. The framework continuously learns optimized decision-making experience to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces the total energy consumption and delay of task execution for TDs and improves the task offloading rate and system resource utilization compared to the benchmark method.
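A minimal sketch of the LSTM prediction stage is shown below (PyTorch); the feature set, layer sizes, and holding threshold are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class AcceptanceLSTM(nn.Module):
    """Sketch: from a history of per-slot features (e.g., offloading requests
    observed and MEC resources released), estimate the probability that a held
    request is accepted in the next time slot."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, time_slots, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # acceptance probability for the next slot

# a TD would hold its request only if the predicted probability clears a threshold
model = AcceptanceLSTM()
p_accept = model(torch.randn(1, 10, 4))
hold_request = bool(p_accept.item() > 0.5)
```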
Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation resources, network resources, radio resources, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Offloading multi-resource task requests to the edge cloud so as to maximize benefits is challenging due to the heterogeneity of the resources provided by devices. To address this issue, we mathematically model task requests with multiple subtasks. Then, the problem of offloading multi-resource task requests is proved to be NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefits generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm can effectively improve the benefit of task offloading with higher resource utilization compared with baseline algorithms.
This paper reviews task scheduling frameworks, methods, and evaluation metrics for central processing unit-graphics processing unit (CPU-GPU) heterogeneous clusters. Task scheduling in CPU-GPU heterogeneous clusters can be carried out at the system level, node level, and device level. Most task-scheduling technologies are heuristic, based on experts' experience, while some are statistical methods using machine learning, deep learning, or reinforcement learning. Many metrics have been adopted to evaluate and compare task scheduling technologies that optimize different goals. Although statistical task scheduling has produced fewer research achievements than heuristic task scheduling, it still has significant research potential.
As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of the task allocation and performance management techniques employed in cloud data centers. The aim is to systematically categorize and organize previous research by identifying the cloud computing methodologies, categories, and gaps. A literature review was conducted, which included the analysis of 463 task allocation and 480 performance management papers. The review revealed three task allocation research topics and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance cloud computing task allocation and performance management. Shortcomings in each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability. Innovative methodologies can steer future research to fill gaps in the literature.
More devices in the Intelligent Internet of Things (AIoT) result in an increased number of tasks that require low latency and real-time responsiveness, leading to an increased demand for computational resources. The low-latency performance issues of cloud computing in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, the effective allocation of resources for task execution within fog environments, characterized by limited and heterogeneous computational resources, remains a formidable challenge. To tackle this challenge, in this study we integrate fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and the energy consumption of the fog-cloud system while accounting for task deadlines. The proposed EHEO method undergoes a thorough evaluation against multiple benchmark algorithms, encompassing metrics like makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results demonstrate the superior performance of EHEO across all assessed metrics. Notably, under the most favorable conditions, EHEO diminishes the makespan and energy consumption by approximately 50% and 35.5%, respectively, compared to the second-best performing approach, which affirms its efficacy in advancing the efficiency of AIoT task scheduling within fog-cloud networks.
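The scheduling objective described above can be made concrete with the hedged sketch below, which evaluates a candidate task-to-node assignment by a weighted sum of makespan and energy plus a deadline-violation penalty; the field names, weights, and penalty form are assumptions, and the EHEO search itself is not reproduced.

```python
def schedule_cost(tasks, assignment, w_time=0.5, w_energy=0.5, penalty=1e3):
    """Sketch of a fitness an EHEO-style scheduler could minimize.

    tasks      : list of dicts with "id", "cycles", "bits", "deadline"
    assignment : maps task id -> node dict with "id", "cps", "bw", "power"
    """
    finish, energy, missed = {}, 0.0, 0
    for t in tasks:
        node = assignment[t["id"]]                       # fog or cloud node chosen for task t
        start = finish.get(node["id"], 0.0)              # tasks on a node run back to back
        runtime = t["cycles"] / node["cps"] + t["bits"] / node["bw"]
        finish[node["id"]] = start + runtime
        energy += node["power"] * runtime
        missed += (start + runtime) > t["deadline"]      # count deadline violations
    makespan = max(finish.values())
    return w_time * makespan + w_energy * energy + penalty * missed
```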
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not feasible because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s, and the success rate is more than 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
The Internet of Medical Things (IoMT) is regarded as a critical technology for intelligent healthcare in the foreseeable 6G era. Nevertheless, due to the limited computing power of edge devices and task-related coupling relationships, IoMT faces unprecedented challenges. Considering the associative connections among tasks, this paper proposes a computing offloading policy for multiple user devices (UDs) that considers device-to-device (D2D) communication and a multi-access edge computing (MEC) technique under the IoMT scenario. Specifically, to minimize the total delay and energy consumption with respect to the requirements of IoMT, we first analyze and model the detailed local execution, MEC execution, D2D execution, and associated task offloading exchange models. Consequently, the associated task offloading scheme for multiple UDs is formulated as a mixed-integer non-convex optimization problem. Considering the advantages of deep reinforcement learning (DRL) in processing tasks with coupling relationships, a Double-DQN-based associative tasks computing offloading (DDATO) algorithm is then proposed to obtain the optimal solution, which can make the best offloading decision under the condition that the tasks of UDs are associative. Furthermore, to reduce the complexity of the DDATO algorithm, a cache-aided procedure is intentionally introduced before the data training process. This avoids redundant offloading and computing procedures for tasks that have already been cached by other UDs. In addition, we use a dynamic ε-greedy strategy in the action selection section of the algorithm, thus preventing the algorithm from falling into a locally optimal solution. Simulation results demonstrate that, compared with other existing methods for associative task models with different structures in the IoMT network, the proposed algorithm can lower the total cost more effectively and efficiently while also providing a tradeoff between delay and energy consumption tolerance.
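A hedged sketch of the dynamic ε-greedy selection with the cache-aided shortcut is given below; the decay schedule and cache interface are assumptions rather than the paper's implementation.

```python
import math
import random

def choose_action(q_values, step, cache, task_id,
                  eps_start=1.0, eps_end=0.05, decay=0.001):
    """Sketch: if another UD has already cached this task's result, skip
    offloading entirely; otherwise explore with an epsilon that decays over
    training steps and exploit the Q-values afterwards."""
    if task_id in cache:                          # reuse a previously cached result
        return "use_cache"
    eps = eps_end + (eps_start - eps_end) * math.exp(-decay * step)
    if random.random() < eps:
        return random.randrange(len(q_values))    # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit
```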