In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion. However, in dynamic vehicular environments, both the tasks generated by vehicles and the vehicles' surroundings are constantly changing, making real-time modeling of actual dynamic vehicular network scenarios difficult. Taking these dynamics into account, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle scenarios, aiming to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, formulated as a multi-constrained non-linear programming problem. Because of vehicle mobility, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to solve the optimization problem. Simulation results show that the proposed algorithm achieves lower-latency task computation offloading; its average task completion time improves by 7.6% over the MADDPG scheme and by 51.1% over deep deterministic policy gradient (DDPG).
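To make the communication-range constraint above concrete, here is a minimal sketch (all parameter names and the constant-acceleration motion model are illustrative assumptions, not the paper's exact formulation) that checks whether a task can be fully uploaded before a non-uniformly moving vehicle leaves server coverage:

```python
import math

def offload_feasible(d0_m, v_mps, a_mps2, radius_m, task_bits, rate_bps):
    """Hedged sketch: decide whether a task can be fully transmitted
    before the vehicle exits the server's coverage.

    d0_m      -- distance already travelled inside coverage
    v_mps     -- current speed; a_mps2 -- constant acceleration
    radius_m  -- total road length covered by the server
    task_bits -- task size; rate_bps -- achievable uplink rate
    """
    # Time needed to upload the task at the given rate.
    t_tx = task_bits / rate_bps

    # Time until coverage exit: solve d0 + v*t + a*t^2/2 = radius for t.
    if abs(a_mps2) < 1e-9:
        t_exit = (radius_m - d0_m) / v_mps
    else:
        disc = v_mps**2 + 2 * a_mps2 * (radius_m - d0_m)
        t_exit = (-v_mps + math.sqrt(max(disc, 0.0))) / a_mps2

    return t_tx <= t_exit

# Example: 2 Mbit task, 10 Mbit/s uplink, 150 m of coverage remaining.
print(offload_feasible(0.0, 20.0, 1.5, 150.0, 2e6, 10e6))  # True
```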
Introduction: The uncontrolled management of waste electrical and electronic equipment (W3E) causes respiratory problems in the handlers of this waste. The objective was to study the tasks associated with respiratory symptoms in W3E handlers. Methods: The study was cross-sectional with an analytical focus on W3E handlers in the informal sector in Ouagadougou. A peer-validated questionnaire collected data on a sample of 161 handlers. Results: The most common W3E processing tasks were the purchase or sale of W3E (67.70%), its repair (39.75%) and its collection (31.06%). The prevalence of cough was 21.74%, wheezing 14.91%, phlegm 12.50% and dyspnea at rest 10.56%. In bivariate analysis, there were significant associations at the 5% level between W3E repair and phlegm (p-value = 0.044), between W3E burning and wheezing (p-value = 0.011) and between W3E and cough (p-value = 0.01). The final logistic regression models suggested that the burning of W3E and the melting of lead batteries were risk factors for the occurrence of cough, with respective prevalence ratios of 4.57 and 4.63. Conclusion: Raising awareness among W3E handlers about wearing personal protective equipment, in particular suitable masks, with priority given to those who burn electronic waste or melt lead, could reduce the risk of occurrence of respiratory symptoms.
The Internet of Medical Things (IoMT) is regarded as a critical technology for intelligent healthcare in the foreseeable 6G era. Nevertheless, due to the limited computing capability of edge devices and task-related coupling relationships, IoMT faces unprecedented challenges. Considering the associative connections among tasks, this paper proposes a computing offloading policy for multiple user devices (UDs) that exploits device-to-device (D2D) communication and multi-access edge computing (MEC) under the IoMT scenario. Specifically, to minimize the total delay and energy consumption with respect to IoMT requirements, we first analyze and model in detail the local execution, MEC execution, D2D execution, and associated-task offloading exchange models. The associated-task offloading scheme for multiple UDs is then formulated as a mixed-integer non-convex optimization problem. Given the advantages of deep reinforcement learning (DRL) in processing tasks with coupling relationships, a Double-DQN-based associative tasks computing offloading (DDATO) algorithm is proposed to obtain the optimal solution, making the best offloading decision when the UDs' tasks are associative. Furthermore, to reduce the complexity of the DDATO algorithm, a cache-aided procedure is introduced before the data training process, avoiding redundant offloading and computation for tasks that have already been cached by other UDs. In addition, a dynamic ε-greedy strategy is used in the action-selection stage of the algorithm, preventing it from falling into a locally optimal solution. Simulation results demonstrate that, compared with other existing methods for associative task models of different structures in the IoMT network, the proposed algorithm lowers the total cost more effectively and efficiently while also providing a tradeoff between delay and energy-consumption tolerance.
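A minimal sketch of the dynamic ε-greedy action selection mentioned above (the exponential decay schedule and its constants are assumptions, not the paper's exact settings):

```python
import math
import random

def dynamic_epsilon_greedy(q_values, step, eps_start=1.0,
                           eps_end=0.05, decay=1e-4):
    """Pick an offloading action: explore with probability eps, which
    decays as training progresses, otherwise take the greedy action.
    Decaying exploration helps avoid locally optimal offloading policies."""
    eps = eps_end + (eps_start - eps_end) * math.exp(-decay * step)
    if random.random() < eps:
        return random.randrange(len(q_values))                  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

# Toy usage: three candidate offloading actions, late in training.
print(dynamic_epsilon_greedy([0.2, 0.9, 0.4], step=50_000))
```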
Constrained multi-objective optimization problems (CMOPs) involve both the optimization of objective functions and the satisfaction of constraint conditions, which challenges solvers. Constrained multi-objective evolutionary algorithms (CMOEAs) have been developed for CMOPs, but most tend to converge into local areas due to loss of diversity. Evolutionary multitasking (EMT) is a new model for solving complex optimization problems through knowledge transfer between a source task and other related tasks. Inspired by EMT, this paper develops a new EMT-based CMOEA to solve CMOPs, in which a main task, a global auxiliary task, and a local auxiliary task are created and each optimized by its own population. The main task focuses on finding the feasible Pareto front (PF), while the global and local auxiliary tasks enhance global and local diversity, respectively. Moreover, the global auxiliary task performs a global search that ignores constraints, helping the main-task population pass through infeasible obstacles, and the local auxiliary task provides local diversity around the main-task population to exploit promising regions. Through knowledge transfer among the three tasks, the search ability of the main-task population is significantly improved. Compared with other state-of-the-art CMOEAs, experimental results on three benchmark test suites demonstrate the superior or competitive performance of the proposed CMOEA.
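For reference, a generic CMOP statement and the aggregate constraint-violation measure that CMOEAs commonly use (standard textbook form, not reproduced from the paper):

```latex
\min_{x \in \Omega} \; F(x) = \big(f_1(x), \ldots, f_m(x)\big)
\quad \text{s.t.} \quad g_j(x) \le 0,\ j = 1,\ldots,p, \qquad
h_k(x) = 0,\ k = 1,\ldots,q,
\qquad
\mathrm{CV}(x) = \sum_{j=1}^{p} \max\{0,\, g_j(x)\} + \sum_{k=1}^{q} \lvert h_k(x) \rvert .
```

In these terms, the global auxiliary task described above effectively searches with $\mathrm{CV}(x)$ ignored, while the main task treats solutions with $\mathrm{CV}(x)=0$ as feasible.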
This paper studies the coordinated planning of transmission tasks in heterogeneous space networks to enable efficient sharing of ground stations across satellite systems. Specifically, we first formulate the coordinated planning problem as a mixed integer linear programming (MILP) problem based on a time-expanded graph. The problem is then transferred and reformulated into a consensus optimization framework that the satellite systems can solve in parallel. Using the alternating direction method of multipliers (ADMM), a semi-distributed coordinated transmission task planning algorithm is proposed, in which each satellite system plans its own tasks based on local information and limited communication with the coordination center. Simulation results demonstrate that, compared with centralized and fully distributed methods, the proposed semi-distributed coordinated method strikes a better balance among task completion rate, complexity, and the amount of information that must be exchanged.
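For context, the generic consensus-ADMM iteration that such a semi-distributed scheme typically builds on (standard form, not the paper's exact update rules), where each satellite system $i$ solves its local subproblem and only the consensus variable is exchanged with the coordination center:

```latex
x_i^{k+1} = \arg\min_{x_i} \; f_i(x_i) + \frac{\rho}{2}\,\lVert x_i - z^k + u_i^k \rVert_2^2,
\qquad
z^{k+1} = \frac{1}{N}\sum_{i=1}^{N}\big(x_i^{k+1} + u_i^k\big),
\qquad
u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1}.
```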
Well-organized datacentres with interconnected servers constitute the cloud computing infrastructure. User requests are submitted through an interface to these servers, which provide service on an on-demand basis. The scientific applications executed in the cloud, using heterogeneous resources allocated to them dynamically, fall into the NP-hard problem category. Task scheduling in the cloud poses numerous challenges impacting cloud performance; if not handled properly, user satisfaction suffers. More recently, researchers have proposed meta-heuristic solutions for enriching task scheduling in the cloud environment. The prime aim of task scheduling is to utilize the available resources optimally and to reduce the time span of task execution. An improvised seagull optimization algorithm that combines features of Cuckoo Search (CS) and the seagull optimization algorithm (SOA) is proposed in this work to enhance scheduling performance in the cloud computing environment. The proposed algorithm aims to minimize the cost and time spent during task scheduling in the heterogeneous cloud environment. Performance evaluation was performed using the CloudSim 3.0 toolkit, comparing against Multi-objective Ant Colony Optimization (MO-ACO), ACO and Min-Min algorithms. With 300 VMs, the proposed SOA-CS technique improved makespan by 1.06%, 4.2%, and 2.4%, and reduced overall cost by 1.74%, 3.93% and 2.77%, when compared with the PSO, ACO, and IDEA algorithms respectively. The comparative simulation results show that the proposed improvised seagull optimization algorithm fares better than its contemporaries.
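A minimal sketch of the Cuckoo-search-style Lévy-flight update that hybrid schemes of this kind commonly borrow (the step-size constant and the Mantegna approximation below are generic assumptions, not the paper's exact operator):

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna approximation of a Levy-stable step (generic form)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_update(x, best, alpha=0.01):
    """Move a candidate schedule toward the best-known one via a Levy flight."""
    return [xi + alpha * levy_step() * (xi - bi) for xi, bi in zip(x, best)]

print(cuckoo_update([0.5, 1.0], [0.4, 0.9]))  # perturbed candidate
```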
Cloud computing plays a significant role in the Information Technology (IT) industry by delivering scalable resources as a service. One of the most important factors in increasing cloud server performance is maximizing resource utilization in task scheduling, whose main advantage is to maximize performance and minimize time loss. Various researchers have examined numerous scheduling methods to achieve Quality of Service (QoS) and to reduce execution time, but these suffered from low throughput and high response time. Hence, this study aimed to schedule tasks efficiently and to eliminate faults in scheduling tasks to Virtual Machines (VMs). For this purpose, the research proposes novel Particle Swarm Optimization-Bandwidth Aware divisible Task (PSO-BATS) scheduling with Multi-Layered Regression Host Employment (MLRHE) to sort out task scheduling issues and ease the scheduling operation through load balancing. The proposed efficient scheduling benefits both cloud users and servers. Performance evaluation is undertaken with respect to cost, Performance Improvement Rate (PIR) and makespan, which reveals the efficiency of the proposed method. Additionally, comparative analysis confirms that the introduced system schedules tasks with higher flexibility than conventional systems.
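For reference, the canonical PSO update that a PSO-based scheduler builds on (textbook form; the inertia and learning coefficients here are conventional defaults, not the paper's tuned values):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One particle update: the velocity blends inertia, pull toward the
    particle's own best (pbest), and pull toward the swarm's best (gbest);
    the position then moves along the new velocity."""
    new_v = [w * vi + c1 * random.random() * (pb - xi)
                    + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

# Toy usage: a 2-dimensional particle position/velocity pair.
print(pso_step([0.0, 1.0], [0.1, -0.1], [0.5, 0.5], [1.0, 0.0]))
```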
AI (Artificial Intelligence) workloads are proliferating in modern real-time systems. As the tasks of AI workloads fluctuate over time, resource planning policies used for traditional fixed real-time tasks should be reexamined. In particular, it is difficult to immediately handle changes in real-time tasks without violating deadline constraints. To cope with this situation, this paper analyzes the task situations of AI workloads and makes two observations. First, resource planning for AI workloads is a complicated search problem that requires much time for optimization. Second, although the task set of an AI workload may change over time, the possible combinations of the task sets are known in advance. Based on these observations, this paper proposes a new resource planning scheme for AI workloads that supports the re-planning of resources. Instead of generating resource plans on the fly, the proposed scheme pre-determines resource plans for various combinations of tasks. Thus, in any case, the workload is immediately executed according to a maintained resource plan. Specifically, the proposed scheme maintains an optimized CPU (Central Processing Unit) and memory resource plan using genetic algorithms and applies it as soon as the workload changes. The proposed scheme is implemented in the open-source simulator SimRTS to validate its effectiveness. Simulation experiments show that the proposed scheme reduces the energy consumption of CPU and memory by 45.5% on average without deadline misses.
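A minimal sketch of the pre-computed plan lookup described above (the task names and plan fields are illustrative assumptions; the offline optimizer itself is abstracted away):

```python
# Hedged sketch: plans are optimized offline (e.g., by a genetic algorithm)
# for every known task-set combination, then looked up instantly when the
# workload changes -- no on-the-fly search at run time.
PRECOMPUTED_PLANS = {
    frozenset({"detect", "track"}):         {"cpu_freq_mhz": 800,  "mem_mb": 512},
    frozenset({"detect", "track", "plan"}): {"cpu_freq_mhz": 1200, "mem_mb": 768},
}

def plan_for(active_tasks):
    """Return the pre-optimized CPU/memory plan for the new task set."""
    return PRECOMPUTED_PLANS[frozenset(active_tasks)]

print(plan_for({"detect", "track"}))  # applied as soon as the workload changes
```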
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, so the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium-blockchain-enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of each ECS's computing capability. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, together with a completion-time-based primary node selection that avoids monopolization by particular edge nodes and enhances the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm; the total delay can be reduced by up to 40% compared with the non-cooperative case.
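A minimal sketch of reputation-based consensus-node selection as described above (the scoring fields and tie-breaking rule are illustrative assumptions):

```python
def select_consensus_nodes(nodes, k):
    """Pick the k highest-reputation ECSs as consensus nodes; break ties
    by shorter recent completion time to avoid monopolization."""
    ranked = sorted(nodes, key=lambda n: (-n["reputation"], n["completion_time"]))
    return ranked[:k]

nodes = [
    {"id": "ecs-a", "reputation": 0.92, "completion_time": 1.4},
    {"id": "ecs-b", "reputation": 0.92, "completion_time": 0.9},
    {"id": "ecs-c", "reputation": 0.75, "completion_time": 0.7},
]
print([n["id"] for n in select_consensus_nodes(nodes, 2)])  # ['ecs-b', 'ecs-a']
```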
Vehicular edge computing (VEC) is emerging as a promising paradigm to meet the requirements of compute-intensive applications in the internet of vehicles (IoV). Non-orthogonal multiple access (NOMA) has advantages in improving spectrum efficiency and dealing with bandwidth scarcity and cost, so combining VEC and NOMA is a promising direction. In this paper, we jointly optimize the task offloading decision and resource allocation to maximize the service utility of the NOMA-VEC system. To solve the optimization problem, we propose a multi-agent deep graph reinforcement learning algorithm. The algorithm extracts topological features and relationship information between agents from the system state as observations, and outputs the task offloading decision and resource allocation simultaneously with a local policy network, which is updated by a local learner. Simulation results demonstrate that the proposed method achieves a 1.52% to 5.80% improvement in system service utility compared with the benchmark algorithms.
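For context, the standard uplink NOMA achievable rate under successive interference cancellation (SIC), which underlies the spectrum-efficiency advantage mentioned above (generic form; the paper's exact channel model is not reproduced):

```latex
R_k = B \log_2\!\left(1 + \frac{p_k \lvert h_k \rvert^2}
{\sum_{j \in \mathcal{I}_k} p_j \lvert h_j \rvert^2 + \sigma^2}\right),
```

where $B$ is the bandwidth, $p_k$ and $h_k$ are user $k$'s transmit power and channel gain, $\sigma^2$ is the noise power, and $\mathcal{I}_k$ is the set of users whose signals have not yet been cancelled when user $k$ is decoded.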
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they lack the available memory and processing capacity. In this scenario, it is worth transferring these tasks to resource-rich platforms, such as Edge Data Centers or remote cloud servers. For various reasons, it is more appropriate to offload different tasks to specific destinations depending on the properties and state of the environment and the nature of the tasks. At the same time, establishing an optimal offloading policy, one that ensures all tasks are executed within the required latency while avoiding excessive workload on specific computing centers, is not easy. This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution, with similar results in terms of energy efficiency. Finally, the success rates of different computing centers are tested, and the lack of capacity of remote cloud servers to respond to applications in real time is demonstrated. These ways of finding an offloading strategy in a local networking environment are novel in that they emulate the state and structure of the environment, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques is demonstrated by the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional task allocation methods can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data-flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-stage task allocation framework with invalid-data detection. In the first stage, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data; compared to the baseline method, the model achieves an approximately 4% increase in F1 value on the public dataset. In the second stage, task allocation is modeled using bipartite graph matching. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy, improving allocation efficiency by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
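For reference, the classical Kuhn-Munkres (KM) assignment step that the P-queue variant builds on can be sketched with SciPy (a baseline illustration with toy costs, not the paper's optimized P-queue implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: cost of assigning worker i to task j (illustrative values).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(cost)   # minimum-cost perfect matching
print(list(zip(rows, cols)), cost[rows, cols].sum())  # total cost 5.0
```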
With the development of vehicles towards intelligence and connectivity, vehicular data is diversifying and growing dramatically. A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle (ICV) applications are proposed for a dispersed computing network composed of heterogeneous task vehicles and Network Computing Points (NCPs). Considering the amount of task data and the idle resources of NCPs, a computing resource scheduling model for NCPs is established. Taking the heterogeneous task execution delay threshold as a constraint, the optimization problem is described as maximizing the utilization of NCP computing resources. The problem is proven to be NP-hard by reduction to a 0-1 knapsack problem. A many-to-many matching algorithm based on resource preferences is proposed. The algorithm first establishes mutual preference lists based on the fit between task requirements and the resources provided by NCPs, which filters out unschedulable NCPs in the initial matching stage and reduces the dimension of the solution space. To solve the matching problem between ICVs and NCPs, a new many-to-many matching algorithm is proposed to obtain a unique and stable optimal matching result. Simulation results demonstrate that the proposed scheme improves the resource utilization of NCPs by an average of 9.6% compared to the reference scheme, and total performance can be improved by up to 15.9%.
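Since the scheduling problem is related to the 0-1 knapsack problem, a standard dynamic-programming solution to the latter is sketched below for reference (textbook form with toy numbers, not part of the paper's matching algorithm):

```python
def knapsack_01(values, weights, capacity):
    """Classic 0-1 knapsack DP: dp[c] is the best value using capacity c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # backwards: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Example: tasks' utilities vs. the computing resources they consume.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```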
Aiming at the problems of low solution accuracy and high decision pressure when a single agent faces large-scale dynamic task allocation (DTA) and a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise, to improve the efficiency of DTA. The algorithm builds on the traditional MADDPG algorithm: a double-noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a double experience pool improves the data utilization rate. At the same time, to accelerate the training speed and efficiency of the agents and to solve the cold-start problem, a priori knowledge is applied during training. Finally, the MADDPG-D2 algorithm is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty traditional single-agent algorithms face in high-dimensional decision spaces. The MADDPG-D2 algorithm based on a multi-agent architecture thus shows clear advantages in DTA.
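A minimal sketch of the dual-experience-pool idea (the split criterion and sampling ratio are illustrative assumptions; the dual-noise mechanism is not shown): high-reward transitions go to a priority pool, and mini-batches mix both pools to raise data utilization:

```python
import random
from collections import deque

class DualReplay:
    """Hedged sketch of a dual experience pool: ordinary and high-reward
    transitions are stored separately and sampled in a fixed mix."""
    def __init__(self, capacity=10_000, reward_threshold=0.0, mix=0.5):
        self.ordinary = deque(maxlen=capacity)
        self.priority = deque(maxlen=capacity)
        self.threshold, self.mix = reward_threshold, mix

    def push(self, transition, reward):
        pool = self.priority if reward > self.threshold else self.ordinary
        pool.append(transition)

    def sample(self, batch_size):
        k = min(int(batch_size * self.mix), len(self.priority))
        batch = random.sample(self.priority, k)
        batch += random.sample(self.ordinary,
                               min(batch_size - k, len(self.ordinary)))
        return batch
```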
Pooling, unpooling/specialization, and discretionary task completion are typical operational strategies in queueing systems that arise in healthcare, call centers, and online sales. These strategies may have advantages and disadvantages in different operational environments. This paper uses the M/M/1 and M/M/2 queues to study the impact of pooling, specialization, and discretionary task completion on the average queue length. Closed-form solutions for the average M/M/2 queue length are derived. Computational examples illustrate how the average queue length changes with the strength of pooling, specialization, and discretionary task completion. Finally, several conjectures are made in the paper.
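For reference, the standard mean number-in-system formulas for these two queues (textbook results; the paper's closed forms for its pooling/specialization variants are not reproduced here), with arrival rate $\lambda$ and per-server service rate $\mu$:

```latex
L_{M/M/1} = \frac{\rho}{1-\rho}, \quad \rho = \frac{\lambda}{\mu} < 1;
\qquad
L_{M/M/2} = \frac{2\rho}{1-\rho^{2}}, \quad \rho = \frac{\lambda}{2\mu} < 1.
```

As a sanity check, when $\rho \to 0$ the M/M/2 expression reduces to $\lambda/\mu$, the mean number of jobs in service, and both expressions diverge as $\rho \to 1$.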
Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to offer paragraph-level semantic information for enhancing factual correctness. The challenge lies in effectively generating factual text using sentence-level variational autoencoder-based models. In this paper, a novel model called fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network; by training this network, the model learns to generate text based on input facts. Building upon this foundation, the input text is passed to the discriminator along with the generated text; through adversarial training, the model is encouraged to generate text that is indistinguishable to the discriminator, thereby enhancing the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, an entailment recognition task is trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is added to the model's reconstruction loss, forcing the generator to generate text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and the factual correctness of the text while sacrificing only a small amount of diversity. Furthermore, under a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
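For reference, the conditional-VAE objective that such a model maximizes (standard ELBO form with condition $c$ standing for the input facts; the paper's adversarial and entailment penalty terms are added on top and are not shown):

```latex
\mathcal{L}_{\mathrm{CVAE}}(\theta, \phi;\, x, c)
= \mathbb{E}_{q_{\phi}(z \mid x, c)}\big[\log p_{\theta}(x \mid z, c)\big]
- D_{\mathrm{KL}}\!\big(q_{\phi}(z \mid x, c) \,\big\|\, p_{\theta}(z \mid c)\big).
```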
Based on the wave attack task planning method for static complex environments and the rolling optimization framework, an online task planning method for dynamic complex environments based on rolling optimization is proposed. Online task planning is triggered by events, including target information updates, new target additions, target failures, and weapon failures; the corresponding methods include defense area reanalysis, parameter space updates, and mission re-planning. Simulations are conducted for the different events. The results show that the index value of the attack scenario after re-planning is better than before re-planning; according to the probability distribution obtained by the statistical simulation method, the post-re-planning index values clearly lie in the high-index-value region, and the index-value gap before and after re-planning is related to the degree of posture change.
In response to the uncertainty of information about the injured in post-disaster situations, and considering constraints such as random chance and the quantity of rescue resources, the split delivery vehicle routing problem with stochastic demands (SDVRPSD) model and the multi-depot split delivery heterogeneous vehicle routing problem with stochastic demands (MDSDHVRPSD) model are established. A two-stage hybrid variable neighborhood tabu search algorithm is designed for unmanned vehicle task planning to minimize the path cost of rescue plans. Simulation experiments show that the solutions obtained by the algorithm can effectively reduce the rescue vehicle path cost and the rescue task completion time, with high optimization quality and good portability.
Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Offloading multi-resource task requests to the edge cloud while maximizing benefits is challenging due to the heterogeneity of the resources provided by devices. To address this issue, we mathematically model task requests with multiple subtasks and prove that the offloading problem for multi-resource task requests is NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefits generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm effectively improves the benefit of task offloading with higher resource utilization compared with baseline algorithms.
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on optimizing task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the decision result of the current time slot, with no support for multiple retries in subsequent time slots; as a result, TDs miss potential future offloading opportunities. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in a subsequent time slot. Based on deep learning technology, the framework continuously learns optimized decision-making experience to increase the success rate of task offloading. Simulation results show that TSODF reduces the total energy consumption and delay of TDs' task execution and improves the task offloading rate and system resource utilization compared to the benchmark method.
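A minimal sketch of the request-holding logic described above (the threshold, request fields, and predictor interface are illustrative assumptions; the LSTM predictor is abstracted as a callable):

```python
def offload_decision(request, mec_free_capacity, predict_accept_prob,
                     hold_queue, accept_threshold=0.5):
    """Hedged sketch of a two-stage decision with request holding:
    accept now if the MEC has room; otherwise hold the request for a
    retry in a later slot when the predicted acceptance probability
    (e.g., an LSTM output) justifies waiting; else run locally."""
    if request["demand"] <= mec_free_capacity:
        return "offload_now"
    if predict_accept_prob(request) >= accept_threshold:
        hold_queue.append(request)        # retried in a subsequent time slot
        return "hold"
    return "execute_locally"

# Toy usage with a stub predictor standing in for the LSTM.
queue = []
print(offload_decision({"demand": 8}, 4, lambda r: 0.7, queue))  # 'hold'
```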
文摘In current research on task offloading and resource scheduling in vehicular networks,vehicles are commonly assumed to maintain constant speed or relatively stationary states,and the impact of speed variations on task offloading is often overlooked.It is frequently assumed that vehicles can be accurately modeled during actual motion processes.However,in vehicular dynamic environments,both the tasks generated by the vehicles and the vehicles’surroundings are constantly changing,making it difficult to achieve real-time modeling for actual dynamic vehicular network scenarios.Taking into account the actual dynamic vehicular scenarios,this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle vehicular network scenarios,attempting to solve the dynamic decision-making problem in task offloading process.The optimization objective is to minimize the average task completion time,which is formulated as a multi-constrained non-linear programming problem.Due to the mobility of vehicles,a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission.Finally,the proposed vehicular task dynamic offloading and scheduling algorithm based on muti-agent deep deterministic policy gradient(MADDPG)is applied to solve the optimal solution of the optimization problem.Simulation results show that the algorithm proposed in this paper is able to achieve lower latency task computation offloading.Meanwhile,the average task completion time of the proposed algorithm in this paper can be improved by 7.6%compared to the performance of the MADDPG scheme and 51.1%compared to the performance of deep deterministic policy gradient(DDPG).
文摘Introduction: The uncontrolled management of waste electrical and electronic equipment (W3E) causes respiratory problems in the handlers of this waste. The objective was to study the stains associated with respiratory symptoms in W3E handlers. Methods: The study was cross-sectional with an analytical focus on W3E handlers in the informal sector in Ouagadougou. A peer-validated questionnaire collected data on a sample of 161 manipulators. Results: the most common W3E processing tasks were the purchase or sale of W3E (67.70%), its repair (39.75%) and its collection (31.06%). The prevalence of cough was 21.74%, that of wheezing 14.91%, phlegm 12.50% and dyspnea at rest 10.56%. In bivariate analysis, there were significant associations at the 5% level between W3E repair and phlegm (p-value = 0.044), between W3E burning and wheezing (p-value = 0.011) and between W3E and cough (p-value = 0.01). The final logistic regression models suggested that the burning of W3E and the melting of lead batteries represented risk factors for the occurrence of cough with respective prevalence ratios of 4.57 and 4.63. Conclusion: raising awareness on the wearing of personal protective equipment, in particular masks adapted by W3E handlers, favoring those who are dedicated to the burning of electronic waste and the melting of lead could make it possible to reduce the risk of occurrence of respiratory symptoms.
基金supported by National Natural Science Foundation of China(Grant No.62071377,62101442,62201456)Natural Science Foundation of Shaanxi Province(Grant No.2023-YBGY-036,2022JQ-687)The Graduate Student Innovation Foundation Project of Xi’an University of Posts and Telecommunications under Grant CXJJDL2022003.
文摘The Internet of Medical Things(Io MT) is regarded as a critical technology for intelligent healthcare in the foreseeable 6G era. Nevertheless, due to the limited computing power capability of edge devices and task-related coupling relationships, Io MT faces unprecedented challenges. Considering the associative connections among tasks, this paper proposes a computing offloading policy for multiple-user devices(UDs) considering device-to-device(D2D) communication and a multi-access edge computing(MEC)technique under the scenario of Io MT. Specifically,to minimize the total delay and energy consumption concerning the requirement of Io MT, we first analyze and model the detailed local execution, MEC execution, D2D execution, and associated tasks offloading exchange model. Consequently, the associated tasks’ offloading scheme of multi-UDs is formulated as a mixed-integer nonconvex optimization problem. Considering the advantages of deep reinforcement learning(DRL) in processing tasks related to coupling relationships, a Double DQN based associative tasks computing offloading(DDATO) algorithm is then proposed to obtain the optimal solution, which can make the best offloading decision under the condition that tasks of UDs are associative. Furthermore, to reduce the complexity of the DDATO algorithm, the cacheaided procedure is intentionally introduced before the data training process. This avoids redundant offloading and computing procedures concerning tasks that previously have already been cached by other UDs. In addition, we use a dynamic ε-greedy strategy in the action selection section of the algorithm, thus preventing the algorithm from falling into a locally optimal solution. Simulation results demonstrate that compared with other existing methods for associative task models concerning different structures in the Io MT network, the proposed algorithm can lower the total cost more effectively and efficiently while also providing a tradeoff between delay and energy consumption tolerance.
基金supported in part by the National Natural Science Fund for Outstanding Young Scholars of China (61922072)the National Natural Science Foundation of China (62176238, 61806179, 61876169, 61976237)+2 种基金China Postdoctoral Science Foundation (2020M682347)the Training Program of Young Backbone Teachers in Colleges and Universities in Henan Province (2020GGJS006)Henan Provincial Young Talents Lifting Project (2021HYTP007)。
文摘Constrained multi-objective optimization problems(CMOPs) include the optimization of objective functions and the satisfaction of constraint conditions, which challenge the solvers.To solve CMOPs, constrained multi-objective evolutionary algorithms(CMOEAs) have been developed. However, most of them tend to converge into local areas due to the loss of diversity. Evolutionary multitasking(EMT) is new model of solving complex optimization problems, through the knowledge transfer between the source task and other related tasks. Inspired by EMT, this paper develops a new EMT-based CMOEA to solve CMOPs, in which the main task, a global auxiliary task, and a local auxiliary task are created and optimized by one specific population respectively. The main task focuses on finding the feasible Pareto front(PF), and global and local auxiliary tasks are used to respectively enhance global and local diversity. Moreover, the global auxiliary task is used to implement the global search by ignoring constraints, so as to help the population of the main task pass through infeasible obstacles. The local auxiliary task is used to provide local diversity around the population of the main task, so as to exploit promising regions. Through the knowledge transfer among the three tasks, the search ability of the population of the main task will be significantly improved. Compared with other state-of-the-art CMOEAs, the experimental results on three benchmark test suites demonstrate the superior or competitive performance of the proposed CMOEA.
基金supported in part by the NSF China under Grant(61701365,61801365,62001347)in part by Natural Science Foundation of Shaanxi Province(2020JQ-686)+4 种基金in part by the China Postdoctoral Science Foundation under Grant(2018M643581,2019TQ0210,2019TQ0241,2020M673344)in part by Young Talent fund of University Association for Science and Technology in Shaanxi,China(20200112)in part by Key Research and Development Program in Shaanxi Province of China(2021GY066)in part by Postdoctoral Foundation in Shaanxi Province of China(2018BSHEDZZ47)the Fundamental Research Funds for the Central Universities。
文摘This paper studies the coordinated planning of transmission tasks in the heterogeneous space networks to enable efficient sharing of ground stations cross satellite systems.Specifically,we first formulate the coordinated planning problem into a mixed integer liner programming(MILP)problem based on time expanded graph.Then,the problem is transferred and reformulated into a consensus optimization framework which can be solved by satellite systems parallelly.With alternating direction method of multipliers(ADMM),a semi-distributed coordinated transmission task planning algorithm is proposed,in which each satellite system plans its own tasks based on local information and limited communication with the coordination center.Simulation results demonstrate that compared with the centralized and fully-distributed methods,the proposed semi-distributed coordinated method can strike a better balance among task complete rate,complexity,and the amount of information required to be exchanged.
文摘Well organized datacentres with interconnected servers constitute the cloud computing infrastructure.User requests are submitted through an interface to these servers that provide service to them in an on-demand basis.The scientific applications that get executed at cloud by making use of the heterogeneous resources being allocated to them in a dynamic manner are grouped under NP hard problem category.Task scheduling in cloud poses numerous challenges impacting the cloud performance.If not handled properly,user satisfaction becomes questionable.More recently researchers had come up with meta-heuristic type of solutions for enriching the task scheduling activity in the cloud environment.The prime aim of task scheduling is to utilize the resources available in an optimal manner and reduce the time span of task execution.An improvised seagull optimization algorithm which combines the features of the Cuckoo search(CS)and seagull optimization algorithm(SOA)had been proposed in this work to enhance the performance of the scheduling activity inside the cloud computing environment.The proposed algorithm aims to minimize the cost and time parameters that are spent during task scheduling in the heterogeneous cloud environment.Performance evaluation of the proposed algorithm had been performed using the Cloudsim 3.0 toolkit by comparing it with Multi objective-Ant Colony Optimization(MO-ACO),ACO and Min-Min algorithms.The proposed SOA-CS technique had produced an improvement of 1.06%,4.2%,and 2.4%for makespan and had reduced the overall cost to the extent of 1.74%,3.93%and 2.77%when compared with PSO,ACO,IDEA algorithms respectively when 300 vms are considered.The comparative simulation results obtained had shown that the proposed improvised seagull optimization algorithm fares better than other contemporaries.
文摘Cloud computing plays a significant role in Information Technology(IT)industry to deliver scalable resources as a service.One of the most important factor to increase the performance of the cloud server is maximizing the resource utilization in task scheduling.The main advantage of this scheduling is to max-imize the performance and minimize the time loss.Various researchers examined numerous scheduling methods to achieve Quality of Service(QoS)and to reduce execution time.However,it had disadvantages in terms of low throughput and high response time.Hence,this study aimed to schedule the task efficiently and to eliminate the faults in scheduling the tasks to the Virtual Machines(VMs).For this purpose,the research proposed novel Particle Swarm Optimization-Bandwidth Aware divisible Task(PSO-BATS)scheduling with Multi-Layered Regression Host Employment(MLRHE)to sort out the issues of task scheduling and ease the scheduling operation by load balancing.The proposed efficient sche-duling provides benefits to both cloud users and servers.The performance evalua-tion is undertaken with respect to cost,Performance Improvement Rate(PIR)and makespan which revealed the efficiency of the proposed method.Additionally,comparative analysis is undertaken which confirmed the performance of the intro-duced system than conventional system for scheduling tasks with highflexibility.
基金This work was partly supported by the Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by theKorean government(MSIT)(No.2021-0-02068,Artificial Intelligence Innovation Hub)(No.RS-2022-00155966,Artificial Intelligence Convergence Innovation Human Resources Development(Ewha University)).
文摘AI(Artificial Intelligence)workloads are proliferating in modernreal-time systems.As the tasks of AI workloads fluctuate over time,resourceplanning policies used for traditional fixed real-time tasks should be reexamined.In particular,it is difficult to immediately handle changes inreal-time tasks without violating the deadline constraints.To cope with thissituation,this paper analyzes the task situations of AI workloads and findsthe following two observations.First,resource planning for AI workloadsis a complicated search problem that requires much time for optimization.Second,although the task set of an AI workload may change over time,thepossible combinations of the task sets are known in advance.Based on theseobservations,this paper proposes a new resource planning scheme for AIworkloads that supports the re-planning of resources.Instead of generatingresource plans on the fly,the proposed scheme pre-determines resourceplans for various combinations of tasks.Thus,in any case,the workload isimmediately executed according to the resource plan maintained.Specifically,the proposed scheme maintains an optimized CPU(Central Processing Unit)and memory resource plan using genetic algorithms and applies it as soonas the workload changes.The proposed scheme is implemented in the opensourcesimulator SimRTS for the validation of its effectiveness.Simulationexperiments show that the proposed scheme reduces the energy consumptionof CPU and memory by 45.5%on average without deadline misses.
基金supported in part by the National Key R&D Program of China under Grant 2020YFB1005900the National Natural Science Foundation of China under Grant 62001220+3 种基金the Jiangsu Provincial Key Research and Development Program under Grants BE2022068the Natural Science Foundation of Jiangsu Province under Grants BK20200440the Future Network Scientific Research Fund Project FNSRFP-2021-YB-03the Young Elite Scientist Sponsorship Program,China Association for Science and Technology.
文摘Collaborative edge computing is a promising direction to handle the computation intensive tasks in B5G wireless networks.However,edge computing servers(ECSs)from different operators may not trust each other,and thus the incentives for collaboration cannot be guaranteed.In this paper,we propose a consortium blockchain enabled collaborative edge computing framework,where users can offload computing tasks to ECSs from different operators.To minimize the total delay of users,we formulate a joint task offloading and resource optimization problem,under the constraint of the computing capability of each ECS.We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution.Finally,we propose a reputation based node selection approach to facilitate the consensus process,and also consider a completion time based primary node selection to avoid monopolization of certain edge node and enhance the security of the blockchain.Simulation results validate the effectiveness of the proposed algorithm,and the total delay can be reduced by up to 40%compared with the non-cooperative case.
基金supported by the Talent Fund of Beijing Jiaotong University(No.2023XKRC028)CCFLenovo Blue Ocean Research Fund and Beijing Natural Science Foundation under Grant(No.L221003).
文摘Vehicular edge computing(VEC)is emerging as a promising solution paradigm to meet the requirements of compute-intensive applications in internet of vehicle(IoV).Non-orthogonal multiple access(NOMA)has advantages in improving spectrum efficiency and dealing with bandwidth scarcity and cost.It is an encouraging progress combining VEC and NOMA.In this paper,we jointly optimize task offloading decision and resource allocation to maximize the service utility of the NOMA-VEC system.To solve the optimization problem,we propose a multiagent deep graph reinforcement learning algorithm.The algorithm extracts the topological features and relationship information between agents from the system state as observations,outputs task offloading decision and resource allocation simultaneously with local policy network,which is updated by a local learner.Simulation results demonstrate that the proposed method achieves a 1.52%∼5.80%improvement compared with the benchmark algorithms in system service utility.
基金funding from TECNALIA,Basque Research and Technology Alliance(BRTA)supported by the project aOptimization of Deep Learning algorithms for Edge IoT devices for sensorization and control in Buildings and Infrastructures(EMBED)funded by the Gipuzkoa Provincial Council and approved under the 2023 call of the Guipuzcoan Network of Science,Technology and Innovation Program with File Number 2023-CIEN-000051-01.
文摘In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer task offloading.For many resource-constrained devices,the computation of many types of tasks is not feasible because they cannot support such computations as they do not have enough available memory and processing capacity.In this scenario,it is worth considering transferring these tasks to resource-rich platforms,such as Edge Data Centers or remote cloud servers.For different reasons,it is more exciting and appropriate to download various tasks to specific download destinations depending on the properties and state of the environment and the nature of the functions.At the same time,establishing an optimal offloading policy,which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers is not easy.This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms,Graph Neural Networks(GNN)and Deep Q-Network(DQN).It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSimand compares them with the two defaultmethods,Trade-Off and Round Robin.Experiments showed that variants offer a slight improvement in task success rate and workload distribution.In terms of energy efficiency,they provided similar results.Finally,the success rates of different computing centers are tested,and the lack of capacity of remote cloud servers to respond to applications in real-time is demonstrated.These novel ways of finding a download strategy in a local networking environment are unique as they emulate the state and structure of the environment innovatively,considering the quality of its connections and constant updates.The download score defined in this research is a crucial feature for determining the quality of a download path in the GNN training process and has not previously been proposed.Simultaneously,the suitability of Reinforcement Learning(RL)techniques is demonstrated due to the dynamism of the network environment,considering all the key factors that affect the decision to offload a given task,including the actual state of all devices.
基金National Natural Science Foundation of China(62072392).
文摘Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation.While traditional methods for task allocation can help reduce costs and improve efficiency,they may encounter challenges when dealing with abnormal data flow nodes,leading to decreased allocation accuracy and efficiency.To address these issues,this study proposes a novel two-part invalid detection task allocation framework.In the first step,an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data.Compared to the baseline method,the model achieves an approximately 4%increase in the F1 value on the public dataset.In the second step of the framework,task allocation modeling is performed using a twopart graph matching method.This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy.The allocation efficiency is improved by approximately 23.83%compared to the baseline method.Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes,enhancing allocation precision,and achieving efficient allocation.
基金supported by the National Natural Science Foundation of China(Grant No.62072031)the Applied Basic Research Foundation of Yunnan Province(Grant No.2019FD071)the Yunnan Scientific Research Foundation Project(Grant 2019J0187).
文摘With the development of vehicles towards intelligence and connectivity,vehicular data is diversifying and growing dramatically.A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle(ICV)applications are proposed for the dispersed computing network composed of heterogeneous task vehicles and Network Computing Points(NCPs).Considering the amount of task data and the idle resources of NCPs,a computing resource scheduling model for NCPs is established.Taking the heterogeneous task execution delay threshold as a constraint,the optimization problem is described as the problem of maximizing the utilization of computing resources by NCPs.The proposed problem is proven to be NP-hard by using the method of reduction to a 0-1 knapsack problem.A many-to-many matching algorithm based on resource preferences is proposed.The algorithm first establishes the mutual preference lists based on the adaptability of the task requirements and the resources provided by NCPs.This enables the filtering out of un-schedulable NCPs in the initial stage of matching,reducing the solution space dimension.To solve the matching problem between ICVs and NCPs,a new manyto-many matching algorithm is proposed to obtain a unique and stable optimal matching result.The simulation results demonstrate that the proposed scheme can improve the resource utilization of NCPs by an average of 9.6%compared to the reference scheme,and the total performance can be improved by up to 15.9%.
基金This research was funded by the Project of the National Natural Science Foundation of China,Grant Number 62106283.
文摘Aiming at the problems of low solution accuracy and high decision pressure when facing large-scale dynamic task allocation(DTA)and high-dimensional decision space with single agent,this paper combines the deep reinforce-ment learning(DRL)theory and an improved Multi-Agent Deep Deterministic Policy Gradient(MADDPG-D2)algorithm with a dual experience replay pool and a dual noise based on multi-agent architecture is proposed to improve the efficiency of DTA.The algorithm is based on the traditional Multi-Agent Deep Deterministic Policy Gradient(MADDPG)algorithm,and considers the introduction of a double noise mechanism to increase the action exploration space in the early stage of the algorithm,and the introduction of a double experience pool to improve the data utilization rate;at the same time,in order to accelerate the training speed and efficiency of the agents,and to solve the cold-start problem of the training,the a priori knowledge technology is applied to the training of the algorithm.Finally,the MADDPG-D2 algorithm is compared and analyzed based on the digital battlefield of ground and air confrontation.The experimental results show that the agents trained by the MADDPG-D2 algorithm have higher win rates and average rewards,can utilize the resources more reasonably,and better solve the problem of the traditional single agent algorithms facing the difficulty of solving the problem in the high-dimensional decision space.The MADDPG-D2 algorithm based on multi-agent architecture proposed in this paper has certain superiority and rationality in DTA.
Abstract: Pooling, unpooling/specialization, and discretionary task completion are typical operational strategies in queueing systems that arise in healthcare, call centers, and online sales. These strategies may have advantages and disadvantages in different operational environments. This paper uses the M/M/1 and M/M/2 queues to study the impact of pooling, specialization, and discretionary task completion on the average queue length. Closed-form solutions for the average M/M/2 queue length are derived. Computational examples illustrate how the average queue length changes with the strength of pooling, specialization, and discretionary task completion. Finally, several conjectures are made in the paper.
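For orientation, the pooling comparison can be illustrated with the standard textbook closed forms for the average number in system, $L_{M/M/1} = \rho/(1-\rho)$ with $\rho = \lambda/\mu$, and $L_{M/M/2} = 2\rho/(1-\rho^2)$ with per-server utilization $\rho = \lambda/(2\mu)$. These are classical results consistent with the models named, not the paper's own derivations.

```python
# Textbook closed forms for average number in system, used only to
# illustrate the pooling comparison.
def L_mm1(lam, mu):
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    return rho / (1 - rho)

def L_mm2(lam, mu):
    rho = lam / (2 * mu)          # per-server utilization
    assert rho < 1, "queue must be stable"
    return 2 * rho / (1 - rho ** 2)

lam, mu = 1.6, 1.0
print("two separate M/M/1 queues:", 2 * L_mm1(lam / 2, mu))  # unpooled: 8.00
print("one pooled M/M/2 queue:  ", L_mm2(lam, mu))           # pooled: ~4.44
```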
Funding: supported by the Science and Technology Department of Sichuan Province (No. 2021YFG0156).
Abstract: Generating diverse yet factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to offer paragraph-level semantic information for enhancing factual correctness; the challenge lies in generating factual text with sentence-level variational autoencoder-based models. In this paper, a novel fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, the model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network, which is trained to generate text conditioned on the input facts. Building on this foundation, the input text is passed to a discriminator along with the generated text; through adversarial training, the model is encouraged to generate text the discriminator cannot distinguish from the input, thereby improving the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, an entailment recognition task is trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is added to the model's reconstruction loss, forcing the generator to produce text consistent with the facts. Experimental results demonstrate that, compared with competitive models, the proposed model substantially improves both the quality and the factual correctness of the generated text while sacrificing only a small amount of diversity, and achieves the best performance under a comprehensive evaluation of diversity and quality metrics.
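A minimal sketch of the loss structure described (reconstruction + KL + an entailment-based penalty) is given below in PyTorch. The weights `beta` and `gamma` and the shape of the entailment signal are hypothetical illustrations, not the paper's implementation.

```python
# Sketch of a CVAE-style loss with an entailment penalty term.
import torch
import torch.nn.functional as F

def cvae_loss(recon_logits, target_ids, mu, logvar, entail_prob,
              beta=1.0, gamma=0.5):
    # Token-level reconstruction of the generated sentence.
    rec = F.cross_entropy(recon_logits.transpose(1, 2), target_ids)
    # KL divergence between the approximate posterior and the prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Penalty grows as the entailment recognizer judges the generated text
    # inconsistent with the input facts (entail_prob near 0).
    penalty = -torch.log(entail_prob + 1e-8).mean()
    return rec + beta * kl + gamma * penalty

B, T, V, H = 2, 5, 100, 16    # batch, length, vocab, latent size (toy values)
loss = cvae_loss(torch.randn(B, T, V), torch.randint(0, V, (B, T)),
                 torch.zeros(B, H), torch.zeros(B, H),
                 torch.full((B,), 0.9))
print(loss.item())
```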
Abstract: Based on the wave attack task planning method for static complex environments and a rolling optimization framework, an online task planning method for dynamic complex environments is proposed. Online task planning is triggered by events, including target information updates, the addition of new targets, target failures, and weapon failures, and responds through defense area reanalysis, parameter space updates, and mission re-planning. Simulations are conducted for the different events. The results show that the index value of the attack scenario after re-planning is better than before re-planning; according to the probability distribution obtained by statistical simulation, the index values after re-planning concentrate in the high-value region, and the gap between the index values before and after re-planning depends on the degree of posture change.
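The event-triggered structure can be pictured as a dispatcher mapping each event type to the corrective steps the abstract names. The handler names and the event-to-step mapping below are hypothetical illustrations of that structure only.

```python
# Sketch of event-triggered re-planning; handler bodies are placeholders.
def reanalyze_defense_area(plan, event): ...
def update_parameter_space(plan, event): ...
def replan_mission(plan, event): ...

HANDLERS = {
    "target_info_update": [update_parameter_space, replan_mission],
    "new_target":         [reanalyze_defense_area, update_parameter_space,
                           replan_mission],
    "target_failure":     [update_parameter_space, replan_mission],
    "weapon_failure":     [reanalyze_defense_area, replan_mission],
}

def on_event(plan, event):
    for step in HANDLERS.get(event["type"], []):
        step(plan, event)
    return plan

plan = on_event({}, {"type": "new_target"})
```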
Funding: supported by the National Natural Science Foundation of China (No. 61903036).
Abstract: To cope with uncertain information about the injured in post-disaster situations, and considering chance constraints and the limited quantity of rescue resources, the split delivery vehicle routing problem with stochastic demands (SDVRPSD) model and the multi-depot split delivery heterogeneous vehicle routing problem with stochastic demands (MDSDHVRPSD) model are established. A two-stage hybrid variable neighborhood tabu search algorithm is designed for unmanned-vehicle task planning to minimize the path cost of rescue plans. Simulation experiments show that the solutions obtained by the algorithm effectively reduce rescue-vehicle path cost and rescue task completion time, with high optimization quality and good portability.
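The paper's two-stage hybrid algorithm is not reproduced here; the skeleton below only sketches the core tabu-search loop over route plans, with a toy swap neighborhood and cost function standing in for the problem-specific ones.

```python
# Compact tabu-search skeleton; neighborhood and cost are placeholders.
def tabu_search(init_plan, neighbors, cost, iters=200, tenure=10):
    best = current = init_plan
    tabu = []                                    # recently visited plans
    for _ in range(iters):
        candidates = [p for p in neighbors(current) if p not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)      # best admissible neighbor
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                          # expire oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

def swap_neighbors(route):
    out = []
    for i in range(len(route) - 1):
        r = list(route)
        r[i], r[i + 1] = r[i + 1], r[i]
        out.append(tuple(r))
    return out

cost = lambda r: sum(abs(a - b) for a, b in zip(r, r[1:]))
print(tabu_search((3, 1, 4, 2), swap_neighbors, cost))
```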
Funding: supported in part by the National Natural Science Foundation of China under Grants 62201105, 62331017, and 62075024; in part by the Natural Science Foundation of Chongqing under Grant cstc2021jcyj-msxmX0404; in part by the Chongqing Municipal Education Commission under Grant KJQN202100643; and in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2022A1515110056.
Abstract: Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage closer to end users and connected devices, MEC networks can support a wide range of applications and can leverage various resource types, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Offloading multi-resource task requests to the edge cloud so as to maximize benefits is challenging due to the heterogeneity of the resources provided by devices. To address this issue, we mathematically model task requests with multiple subtasks and prove that the resulting multi-resource task offloading problem is NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefits of offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm effectively improves the benefit of task offloading with higher resource utilization compared with baseline algorithms.
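A toy data model for the multi-resource requests described is sketched below: a task is a list of subtasks, each demanding one resource type, and feasibility on a node is checked per type. Structure and names are illustrative assumptions only.

```python
# Toy model of a multi-resource task request split into typed subtasks.
from dataclasses import dataclass

@dataclass
class Subtask:
    resource: str      # e.g. "cpu", "radio", "storage"
    demand: float

@dataclass
class EdgeNode:
    free: dict         # resource type -> remaining capacity

def can_host(node, subtasks):
    need = {}
    for s in subtasks:                 # aggregate demand per resource type
        need[s.resource] = need.get(s.resource, 0.0) + s.demand
    return all(node.free.get(r, 0.0) >= d for r, d in need.items())

task = [Subtask("cpu", 2.0), Subtask("radio", 1.0)]
node = EdgeNode({"cpu": 4.0, "radio": 0.5})
print(can_host(node, task))   # False: radio capacity insufficient
```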
Abstract: The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). With the support of task offloading in Multi-access Edge Computing (MEC), TDs can reduce response latency and energy consumption. However, existing task-offloading optimization methods typically assume that MEC computing resources are unlimited, and there is little research on optimizing task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, with no support for retries in subsequent time slots, so TDs miss potential future offloading opportunities. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource-release estimation are integrated to infer the probability that a request will be accepted in a subsequent time slot, and the framework continuously learns from optimized decision-making experience to increase the success rate of task offloading. Simulation results show that, compared with the benchmark method, TSODF reduces the total energy consumption and delay of TD task execution and improves the task offloading rate and system resource utilization.
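The request-holding and dynamic-eviction mechanism can be sketched as a per-slot loop: held requests are retried while their deadlines allow, and evicted otherwise. The LSTM acceptance predictor is abstracted as `accept_prob()`; all names and the threshold are hypothetical.

```python
# Sketch of request holding with dynamic eviction across time slots.
import random

def accept_prob(request, slot):
    return random.random()           # stand-in for the LSTM-based estimate

def step(held, slot, threshold=0.5):
    still_held = []
    for req in held:
        if slot > req["deadline"]:
            continue                 # dynamic eviction: deadline expired
        if accept_prob(req, slot) >= threshold:
            print(f"slot {slot}: offloaded task {req['id']}")
        else:
            still_held.append(req)   # hold and retry in the next slot
    return still_held

held = [{"id": 1, "deadline": 3}, {"id": 2, "deadline": 5}]
for slot in range(1, 6):
    held = step(held, slot)
```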