Funding: supported by the National Natural Science Foundation of China (No. 62172134).
Abstract: To meet the delay requirements of computation-intensive tasks in the industrial Internet of Things, edge computing is moving from theoretical research to practical applications. Edge servers (ESs) have been deployed in factories, and on-site automated guided vehicles (AGVs), besides performing their regular transportation tasks, can partly act as mobile collectors and distributors of computing data and tasks. Since AGVs may offload tasks to the same ES if they have overlapping path segments, resource allocation conflicts are inevitable. In this paper, we study the problem of efficient task offloading from AGVs to ESs along their fixed trajectories. We propose a multi-AGV task offloading optimization algorithm (MATO), which first uses a weighted polling algorithm to preliminarily allocate tasks for individual AGVs based on load balancing, and then uses a Deep Q-Network (DQN) model to obtain the updated offloading strategy for the AGV group. The simulation results show that, compared with existing methods, the proposed MATO algorithm can significantly reduce the maximum task completion time and remains stable under various parameter settings.
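The abstract does not detail the weighted polling step. A minimal sketch, assuming it behaves like smooth weighted round-robin over ES capacity weights (the server ids and weights below are hypothetical, not from the paper), could look like:

```python
def smooth_weighted_rr(weights, n_tasks):
    """Distribute n_tasks over edge servers by smooth weighted round-robin.

    weights: dict mapping a server id to its (hypothetical) capacity weight.
    Returns the list of server ids chosen, one per task.
    """
    current = {s: 0 for s in weights}           # running credit per server
    total = sum(weights.values())
    assignment = []
    for _ in range(n_tasks):
        for s in current:                       # each round a server earns its weight
            current[s] += weights[s]
        chosen = max(current, key=current.get)  # most-credited server takes the task
        current[chosen] -= total                # pay back the total so others catch up
        assignment.append(chosen)
    return assignment
```

With weights {'ES1': 3, 'ES2': 1}, four tasks are split 3:1 while avoiding bursts to the heavier server; the DQN stage described in the abstract would then refine such a preliminary allocation.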
Funding: supported by the National Natural Science Foundation of China under Grant No. 62072146, the Key Research and Development Program of Zhejiang Province under Grant No. 2021C03187, the National Key Research and Development Program of China under Grant No. 2019YFB2102100, and the State Key Laboratory of Computer Architecture (ICT, CAS) under Grant No. CARCHB202120.
Abstract: Federated learning is a distributed machine learning method that can address the increasingly serious problems of data islands and user data privacy, as it allows training data to be kept locally and not shared with other users. It trains a global model by aggregating the locally computed models of clients rather than their raw data. However, the divergence of local models caused by the data heterogeneity of different clients may lead to slow convergence of the global model. To address this problem, we focus on client selection in federated learning, since the selected local models affect the convergence performance of the global model. We propose FedChoice, a client selection method based on loss function optimization, which selects appropriate local models to improve the convergence of the global model. It first assigns each client a selection probability based on its loss value: clients with higher loss receive higher selection probability, making them more likely to participate in training. Then, it introduces a local control vector and a global control vector to predict the local and global gradient directions, respectively, and calculates a gradient correction vector to correct the gradient direction, reducing the cumulative deviation of local gradients caused by non-IID data. We conduct experiments to verify the validity of FedChoice on the CIFAR-10, CINIC-10, MNIST, EMNIST, and FEMNIST datasets, and the results show that the convergence of FedChoice is significantly improved compared with FedAvg, FedProx, and FedNova.
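The loss-weighted selection step can be sketched as follows. This is only an illustration of "higher loss implies higher selection probability" using standard weighted sampling without replacement; FedChoice's exact probability rule is not given in the abstract, and the client ids and loss values here are hypothetical:

```python
import random

def select_clients(losses, k, seed=None):
    """Pick k distinct clients, favouring those with higher local loss.

    losses: dict mapping a client id to its latest local loss value.
    Uses Efraimidis-Spirakis weighted sampling without replacement:
    each client draws a key u**(1/w) for u ~ U(0,1); a larger loss w
    pushes the key toward 1, so high-loss clients rank first more often.
    """
    rng = random.Random(seed)
    keys = {c: rng.random() ** (1.0 / max(l, 1e-12)) for c, l in losses.items()}
    return sorted(keys, key=keys.get, reverse=True)[:k]
```

Over many rounds, a client with loss 0.9 is selected far more often than one with loss 0.1, while every client keeps a nonzero chance of participating.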