Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but collecting suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, offering more convenience and better generalization. Existing self-supervised methods focus primarily on illumination adjustment and design pixel-based adjustment schemes, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It can handle multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. Noise removal and color shift correction are realized solely with noisy images and images exhibiting color shifts. This comprehensive and fully self-supervised approach achieves better adaptability and generalization: it is applicable to various low-light conditions and can reproduce the original colors of scenes under natural light. Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
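For intuition, here is a minimal sketch of Retinex-style illumination estimation from a local neighborhood, in the spirit of (but not identical to) SLIE's physically grounded estimator; the channel-max prior, window size, and gamma are illustrative assumptions:

```python
# A minimal, hypothetical illumination-estimation sketch; SLIE's actual
# estimator follows its own physical model, and all parameters here are
# illustrative only.
import numpy as np
from scipy.ndimage import maximum_filter

def estimate_illumination(img, window=15, eps=1e-3):
    """img: HxWx3 float array in [0, 1]. Returns an HxW illumination map."""
    # Per-pixel upper bound on illumination: the brightest channel.
    channel_max = img.max(axis=2)
    # Smooth over a local neighborhood so illumination varies slowly.
    illum = maximum_filter(channel_max, size=window)
    return np.clip(illum, eps, 1.0)

def enhance(img, window=15, gamma=0.6):
    illum = estimate_illumination(img, window)
    # Retinex decomposition I = R * L: recover reflectance, then relight
    # with a gamma-adjusted illumination map.
    relit = img / illum[..., None] * (illum ** gamma)[..., None]
    return np.clip(relit, 0.0, 1.0)
```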
The federated self-supervised framework is a distributed machine learning method that combines federated learning and self-supervised learning, which can effectively solve the problem that traditional federated learning struggles to process large-scale unlabeled data. Existing federated self-supervised frameworks suffer from low communication efficiency and high communication delay between clients and the central server. We therefore add edge servers to the federated self-supervised framework to relieve the pressure that frequent client-server communication places on the central server. A communication compression scheme using gradient quantization and sparsification is proposed to optimize the communication of the entire framework, and the algorithm of the sparse communication compression module is improved. Experiments show that the learning rate of the improved sparse communication compression module changes more smoothly and stably, and that the compression scheme effectively reduces the overall communication overhead.
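As a rough illustration of the two compression steps such a scheme combines, the sketch below applies top-k sparsification followed by 8-bit linear quantization to a gradient tensor; the sparsity ratio and bit width are assumptions, not the paper's settings:

```python
# Hedged sketch of gradient compression: keep the top-k entries by
# magnitude, then quantize the survivors to int8 before transmission.
import numpy as np

def sparsify_topk(grad, k_ratio=0.01):
    """Keep only the k largest-magnitude entries; return indices and values."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def quantize_8bit(values):
    """Linear 8-bit quantization of the surviving gradient values."""
    scale = np.abs(values).max() / 127.0 + 1e-12
    return np.round(values / scale).astype(np.int8), scale

def dequantize(q, scale, idx, shape):
    """Server-side reconstruction of the sparse, quantized gradient."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = q.astype(np.float32) * scale
    return flat.reshape(shape)
```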
Few-shot intent detection is a challenging practical task, because new intents emerge frequently and collecting large-scale data for them can be costly. Meta-learning, a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks, has been a popular way to tackle this problem. However, existing meta-learning models have been shown to overfit when the meta-training tasks are insufficient. To overcome this challenge, we present a novel self-supervised task augmentation with meta-learning framework, namely STAM. First, we introduce task augmentation, which explores two different strategies and combines them to extend the meta-training tasks. Second, we devise two auxiliary losses for integrating self-supervised learning into meta-learning to learn more generalizable and transferable features. Experimental results show that STAM achieves consistent and considerable performance improvements over existing state-of-the-art methods on four datasets.
State of health (SoH) estimation plays a key role in smart battery health prognostics and management. However, poor generalization, a lack of labeled data, and unused measurements during aging remain major challenges to accurate SoH estimation. Toward this end, this paper proposes a self-supervised learning framework to boost the performance of battery SoH estimation. Unlike traditional data-driven methods, which rely on a considerable training dataset obtained from numerous battery cells, the proposed method achieves accurate and robust estimation using limited labeled data. A filter-based data preprocessing technique, which enables the extraction of partial capacity-voltage curves under dynamic charging profiles, is applied first. Unsupervised learning is then used to learn the aging characteristics from the unlabeled data through an auto-encoder-decoder. The learned network parameters are transferred to the downstream SoH estimation task and fine-tuned with very few sparsely labeled data, which boosts the performance of the estimation framework. The proposed method has been validated under different battery chemistries, formats, operating conditions, and ambient temperatures. The estimation accuracy is guaranteed using only three labeled data points from the initial 20% of life cycles, with overall errors below 1.14%, error distributions across all testing scenarios remaining below 4%, and robustness increasing with aging. Comparisons with purely supervised machine learning methods demonstrate the superiority of the proposed method. This simple and data-efficient estimation framework is promising for real-world applications under a variety of scenarios.
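A condensed sketch of the pretrain-then-transfer idea described above, assuming 64-point partial charging curves and illustrative layer sizes (the paper's network and hyperparameters differ):

```python
# Hedged sketch: learn aging features from unlabeled curves with an
# auto-encoder, then reuse the encoder for SoH regression on few labels.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))

def pretrain(unlabeled, epochs=100):
    """unlabeled: (N, 64) tensor of partial capacity-voltage curves."""
    opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
    for _ in range(epochs):
        recon = decoder(encoder(unlabeled))
        loss = nn.functional.mse_loss(recon, unlabeled)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune(curves, soh_labels, epochs=200):
    """Fine-tune the transferred encoder with very few labeled samples."""
    head = nn.Linear(8, 1)  # downstream SoH regressor
    opt = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-4)
    for _ in range(epochs):
        pred = head(encoder(curves)).squeeze(-1)
        loss = nn.functional.mse_loss(pred, soh_labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return head
```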
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is also frequently assumed that vehicles can be accurately modeled during actual motion. However, in dynamic vehicular environments, both the tasks generated by vehicles and the vehicles' surroundings change constantly, making real-time modeling of actual dynamic vehicular network scenarios difficult. Taking actual dynamic vehicular scenarios into account, this paper considers the real-time non-uniform movement of vehicles and proposes a dynamic task offloading and scheduling algorithm for single-task multi-vehicle vehicular network scenarios, aiming to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, formulated as a multi-constrained non-linear programming problem. Owing to vehicle mobility, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed dynamic task offloading and scheduling algorithm, based on the multi-agent deep deterministic policy gradient (MADDPG), is applied to find the optimal solution of the optimization problem. Simulation results show that the proposed algorithm achieves lower-latency task computation offloading; its average task completion time improves by 7.6% over the MADDPG scheme and by 51.1% over the deep deterministic policy gradient (DDPG).
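A toy version of the communication-range constraint check such a decision process needs: offloading is admitted only if the upload can finish before the vehicle leaves coverage. The one-dimensional road and constant-rate link are simplifying assumptions:

```python
# Hedged feasibility check for range-constrained offloading; the paper's
# constraint model is richer, this only illustrates the core test.
def offload_feasible(task_bits, rate_bps, vehicle_pos, speed, rsu_pos, radius):
    """1-D road model: positions in meters, speed in m/s (moving in +x)."""
    transmit_time = task_bits / rate_bps
    # Distance left before the vehicle exits the coverage interval.
    exit_point = rsu_pos + radius
    time_in_range = max(0.0, (exit_point - vehicle_pos) / speed)
    return transmit_time <= time_in_range

# Example: 4 Mbit task over a 10 Mbit/s link, 20 m/s vehicle, 200 m radius.
print(offload_feasible(4e6, 10e6, vehicle_pos=50.0, speed=20.0,
                       rsu_pos=0.0, radius=200.0))  # True: 0.4 s <= 7.5 s
```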
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, so the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, together with a completion-time-based primary node selection that avoids monopolization by certain edge nodes and enhances the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm; the total delay can be reduced by up to 40% compared with the non-cooperative case.
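A hedged sketch of what reputation-based validator selection with completion-time-based primary rotation might look like; the node fields and scoring are illustrative, not the paper's protocol:

```python
# Illustrative consensus-node selection: rank by reputation, then pick
# the primary by shortest recent completion time, re-evaluated per round
# so no single node monopolizes the role.
import random

def select_consensus_nodes(nodes, n_validators=4):
    """nodes: list of dicts with 'id', 'reputation', 'completion_time'."""
    ranked = sorted(nodes, key=lambda n: n["reputation"], reverse=True)
    validators = ranked[:n_validators]
    primary = min(validators, key=lambda n: n["completion_time"])
    return primary, validators

nodes = [{"id": i, "reputation": random.random(),
          "completion_time": random.uniform(0.1, 1.0)} for i in range(8)]
primary, validators = select_consensus_nodes(nodes)
print("primary:", primary["id"], "validators:", [v["id"] for v in validators])
```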
In a network environment composed of different types of computing centers arranged in layers (cloud, edge, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they lack the available memory and processing capacity to support it. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as Edge Data Centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, it is often more appropriate to offload different tasks to different destinations. At the same time, establishing an optimal offloading policy, one that ensures all tasks execute within the required latency while avoiding excessive workload on specific computing centers, is not easy. This study presents two alternatives for the offloading decision problem based on two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies them in a well-known Edge Computing simulator, PureEdgeSim, and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution, with similar results in terms of energy efficiency. Finally, the success rates of different computing centers are tested, demonstrating the inability of remote cloud servers to respond to applications in real time. These approaches to finding an offloading strategy in a local network environment are novel in that they emulate the state and structure of the environment, taking into account the quality of its connections and constant updates. The offloading score defined in this research, a key feature for determining the quality of an offloading path in the GNN training process, has not previously been proposed. The study also demonstrates the suitability of Reinforcement Learning (RL) techniques for the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Vehicular edge computing (VEC) is emerging as a promising paradigm for meeting the requirements of compute-intensive applications in the internet of vehicles (IoV). Non-orthogonal multiple access (NOMA) has advantages in improving spectrum efficiency and dealing with bandwidth scarcity and cost, so combining VEC and NOMA is an encouraging direction. In this paper, we jointly optimize the task offloading decision and resource allocation to maximize the service utility of the NOMA-VEC system. To solve the optimization problem, we propose a multi-agent deep graph reinforcement learning algorithm. The algorithm extracts topological features and relationship information between agents from the system state as observations, and outputs the task offloading decision and resource allocation simultaneously with a local policy network, which is updated by a local learner. Simulation results demonstrate that the proposed method achieves a 1.52%-5.80% improvement in system service utility over the benchmark algorithms.
Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional task allocation methods can help reduce costs and improve efficiency, they may encounter difficulties when dealing with abnormal data flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-stage task allocation framework with invalid-data detection. In the first stage, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data; compared to the baseline method, the model achieves an approximately 4% increase in F1 value on the public dataset. In the second stage, task allocation is modeled using a bipartite graph matching method. This stage introduces a P-queue KM algorithm that implements a more efficient optimization strategy, improving allocation efficiency by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
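The P-queue KM algorithm itself is not spelled out in the abstract, but the underlying bipartite assignment it accelerates can be reproduced with the classic Hungarian/KM solver in SciPy; the cost matrix below is a random placeholder:

```python
# Baseline bipartite task-worker assignment via the Hungarian/KM method;
# the paper's P-queue variant optimizes the same kind of matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 10.0, size=(5, 7))   # 5 tasks x 7 workers

task_idx, worker_idx = linear_sum_assignment(cost)  # minimizes total cost
for t, w in zip(task_idx, worker_idx):
    print(f"task {t} -> worker {w} (cost {cost[t, w]:.2f})")
print("total cost:", cost[task_idx, worker_idx].sum())
```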
As vehicles develop toward intelligence and connectivity, vehicular data is diversifying and growing dramatically. A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle (ICV) applications are proposed for a dispersed computing network composed of heterogeneous task vehicles and Network Computing Points (NCPs). Considering the amount of task data and the idle resources of NCPs, a computing resource scheduling model for NCPs is established. Taking the heterogeneous task execution delay threshold as a constraint, the optimization problem is described as maximizing the utilization of the computing resources of NCPs. The problem is proven NP-hard by reduction from the 0-1 knapsack problem. A many-to-many matching algorithm based on resource preferences is proposed. The algorithm first establishes mutual preference lists based on the fit between the task requirements and the resources provided by NCPs, which filters out unschedulable NCPs in the initial stage of matching and reduces the dimension of the solution space. To solve the matching problem between ICVs and NCPs, a new many-to-many matching algorithm is proposed to obtain a unique and stable optimal matching result. Simulation results demonstrate that the proposed scheme improves the resource utilization of NCPs by an average of 9.6% compared to the reference scheme, and the total performance can be improved by up to 15.9%.
Introduction: The uncontrolled management of waste electrical and electronic equipment (W3E) causes respiratory problems in the handlers of this waste. The objective was to study the tasks associated with respiratory symptoms in W3E handlers. Methods: The study was cross-sectional with an analytical focus on W3E handlers in the informal sector in Ouagadougou. A peer-validated questionnaire collected data on a sample of 161 handlers. Results: The most common W3E processing tasks were the purchase or sale of W3E (67.70%), its repair (39.75%), and its collection (31.06%). The prevalence of cough was 21.74%, wheezing 14.91%, phlegm 12.50%, and dyspnea at rest 10.56%. In bivariate analysis, there were significant associations at the 5% level between W3E repair and phlegm (p-value = 0.044), between W3E burning and wheezing (p-value = 0.011), and between W3E and cough (p-value = 0.01). The final logistic regression models suggested that the burning of W3E and the melting of lead batteries are risk factors for the occurrence of cough, with prevalence ratios of 4.57 and 4.63, respectively. Conclusion: Raising awareness about wearing personal protective equipment, in particular suitable masks for W3E handlers, especially those dedicated to burning electronic waste and melting lead, could reduce the risk of respiratory symptoms.
To address the low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise to improve the efficiency of DTA. The algorithm builds on the traditional MADDPG algorithm: a dual noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a dual experience pool is introduced to improve data utilization. At the same time, to accelerate the training of the agents and solve the cold-start problem, prior knowledge is applied during training. Finally, MADDPG-D2 is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained with MADDPG-D2 achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The MADDPG-D2 algorithm based on a multi-agent architecture thus shows clear advantages for DTA.
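A schematic of the dual experience pool idea: transitions judged "good" (here, by a reward threshold, which is an assumption) feed a second pool, and mini-batches mix both pools to raise data utilization; MADDPG-D2's actual criteria and mixing ratio may differ:

```python
# Illustrative dual replay buffer: a priority pool for high-reward
# transitions plus a regular pool, sampled half-and-half.
import random
from collections import deque

class DualReplayBuffer:
    def __init__(self, capacity=100_000, reward_threshold=0.0):
        self.good = deque(maxlen=capacity)
        self.regular = deque(maxlen=capacity)
        self.threshold = reward_threshold

    def push(self, transition, reward):
        pool = self.good if reward > self.threshold else self.regular
        pool.append(transition)

    def sample(self, batch_size):
        n_good = min(batch_size // 2, len(self.good))
        batch = random.sample(list(self.good), n_good)
        batch += random.sample(list(self.regular),
                               min(batch_size - n_good, len(self.regular)))
        return batch
```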
Pooling, unpooling/specialization, and discretionary task completion are typical operational strategies in queueing systems that arise in healthcare, call centers, and online sales. These strategies may have advantages and disadvantages in different operational environments. This paper uses the M/M/1 and M/M/2 queues to study the impact of pooling, specialization, and discretionary task completion on the average queue length. Closed-form solutions for the average M/M/2 queue length are derived. Computational examples illustrate how the average queue length changes with the strength of pooling, specialization, and discretionary task completion. Finally, several conjectures are made in the paper.
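The standard closed forms behind such comparisons are L = ρ/(1-ρ) for the mean number in an M/M/1 system (ρ = λ/μ) and L = 2ρ/(1-ρ²) for M/M/2 (ρ = λ/(2μ)). The snippet below uses them to show the pooling effect the paper studies, with an illustrative arrival rate:

```python
# Mean number in system for M/M/1 and M/M/2, and a pooled-vs-separate
# comparison; the 1.6/1.0 rates are illustrative.
def mm1_length(lam, mu):
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    return rho / (1 - rho)

def mm2_length(lam, mu):
    rho = lam / (2 * mu)
    assert rho < 1, "queue must be stable"
    return 2 * rho / (1 - rho ** 2)

lam, mu = 1.6, 1.0
separate = 2 * mm1_length(lam / 2, mu)  # two dedicated M/M/1 servers: 8.0
pooled = mm2_length(lam, mu)            # one pooled M/M/2 queue: ~4.44
print(separate, pooled)                 # pooling shortens the queue
```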
Generating diverse yet factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to provide paragraph-level semantic information for enhancing factual correctness; the challenge lies in generating factual text with sentence-level variational autoencoder-based models. In this paper, a novel model called the fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network; by training this network, the model learns to generate text conditioned on input facts. Building on this foundation, the input text is passed to a discriminator along with the generated text. Through adversarial training, the model is encouraged to generate text that the discriminator cannot distinguish, thereby enhancing the quality of the generated text. To further improve factual correctness, and inspired by natural language inference systems, an entailment recognition task is trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is added to the loss of our model, forcing the generator to produce text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and factual correctness of the text while sacrificing only a small amount of diversity. Furthermore, under a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
Building on a wave attack task planning method for static complex environments and a rolling optimization framework, an online task planning method for dynamic complex environments based on rolling optimization is proposed. Online task planning is triggered by events, including target information updates, new target additions, target failures, and weapon failures, and responds through defense area reanalysis, parameter space updates, and mission re-planning. Simulations are conducted for different events. The results show that the index value of the attack scenario after re-planning is better than before re-planning; according to the probability distribution obtained by the statistical simulation method, the index value distribution after re-planning clearly lies in the high-index-value region, and the index value gap before and after re-planning is related to the degree of posture change.
Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources, and offloading multi-resource task requests to the edge cloud so as to maximize benefits is challenging due to the heterogeneity of the resources provided by devices. To address this issue, we mathematically model task requests with multiple subtasks and prove that the problem of offloading multi-resource task requests is NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefits generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm effectively improves the benefit of task offloading with higher resource utilization than baseline algorithms.
The rapid development of Internet of Things (IoT) technology has significantly increased the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task offloading optimization methods typically assume that MEC computing resources are unlimited, and there is a lack of research on optimizing task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, with no support for retries in subsequent time slots, so TDs miss potential future offloading opportunities. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task offloading request prediction and MEC resource release estimation are integrated to infer the probability that a request will be accepted in the subsequent time slot. Based on deep learning, the framework continuously learns optimized decision-making experience to increase the success rate of task offloading. Simulation results show that, compared to the benchmark method, TSODF reduces the total energy consumption and delay of task execution for TDs and improves the task offloading rate and system resource utilization.
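A skeletal version of the prediction step, in which an LSTM reads a short history of load and resource-release features and emits the probability that a held request is accepted in the next slot; the feature count, window length, and layer sizes are assumptions:

```python
# Illustrative acceptance-probability predictor; TSODF's actual inputs
# and architecture are not specified in the abstract.
import torch
import torch.nn as nn

class AcceptancePredictor(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history):                      # (batch, slots, features)
        out, _ = self.lstm(history)
        return torch.sigmoid(self.head(out[:, -1]))  # accept probability

model = AcceptancePredictor()
p = model(torch.randn(8, 10, 4))   # 10-slot feature history for 8 TDs
hold = p > 0.5                     # hold the request and retry next slot
```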
This paper reviews task scheduling frameworks, methods, and evaluation metrics for central processing unit-graphics processing unit (CPU-GPU) heterogeneous clusters. Task scheduling in CPU-GPU heterogeneous clusters can be carried out at the system, node, and device levels. Most task scheduling technologies are heuristics based on expert experience, while some are based on statistical methods using machine learning, deep learning, or reinforcement learning. Many metrics have been adopted to evaluate and compare task scheduling technologies that optimize different goals. Although statistical task scheduling has produced fewer research achievements than heuristic task scheduling, it still holds significant research potential.
As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of the task allocation and performance management techniques employed in cloud data centers, systematically categorizing and organizing previous research by identifying cloud computing methodologies, categories, and gaps. A literature review was conducted covering 463 task allocation papers and 480 performance management papers. The review revealed three task allocation research topics and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance task allocation and performance management in cloud computing, and the shortcomings of each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability, and the identified gaps can steer future research.
More devices in the Intelligent Internet of Things (AIoT) mean more tasks that require low latency and real-time responsiveness, leading to increased demand for computational resources. Cloud computing's low-latency performance issues in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, effectively allocating resources for task execution within fog environments, which are characterized by limited and heterogeneous computational resources, remains a formidable challenge. To tackle this challenge, this study integrates fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and the energy consumption of the fog-cloud system while respecting task deadlines. The proposed EHEO method is thoroughly evaluated against multiple benchmark algorithms on metrics including makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results demonstrate the superior performance of EHEO across all assessed metrics. Notably, in the most favorable conditions, EHEO reduces the makespan and energy consumption by approximately 50% and 35.5%, respectively, compared to the second-best approach, affirming its efficacy in advancing the efficiency of AIoT task scheduling in fog-cloud networks.
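For concreteness, here is a toy fitness function over the kind of schedule EHEO searches: given a task-to-node assignment, it computes makespan and busy-time energy and blends them. The node speeds, powers, and equal weighting are illustrative, not the paper's model:

```python
# Illustrative fog-cloud scheduling objective: tasks run sequentially on
# their assigned node; fitness blends makespan and energy.
import numpy as np

task_len = np.array([4e9, 2e9, 6e9, 3e9])   # CPU cycles per task
node_speed = np.array([2e9, 1e9, 8e9])      # cycles/s: two fog nodes, one cloud
node_power = np.array([10.0, 8.0, 60.0])    # watts while busy

def fitness(assign, w=0.5):
    """assign[i] = index of the node executing task i."""
    busy = np.zeros(len(node_speed))
    for i, n in enumerate(assign):
        busy[n] += task_len[i] / node_speed[n]
    makespan = busy.max()                    # longest node finish time
    energy = (busy * node_power).sum()       # busy-time energy
    return w * makespan + (1 - w) * energy

print(fitness(np.array([0, 1, 2, 0])))      # score for one candidate schedule
```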