Journal Articles (2 results found)
1. Task Offloading Optimization for AGVs with Fixed Routes in Industrial IoT Environment
Authors: Peng Liu, Zifu Wu, Hangguan Shan, Fei Lin, Qi Wang, Qingshan Wang. China Communications (SCIE, CSCD), 2023, No. 5, pp. 302-314 (13 pages)
To meet the delay requirements of computation-intensive tasks in the industrial Internet of Things, edge computing is moving from theoretical research to practical applications. Edge servers (ESs) have been deployed in factories, and on-site auto guided vehicles (AGVs), besides performing their regular transportation tasks, can partly act as mobile collectors and distributors of computing data and tasks. Since AGVs may offload tasks to the same ES when their path segments overlap, resource allocation conflicts are inevitable. In this paper, we study the problem of efficient task offloading from AGVs to ESs along their fixed trajectories. We propose a multi-AGV task offloading optimization algorithm (MATO), which first uses a weighted polling algorithm to preliminarily allocate tasks for individual AGVs based on load balancing, and then uses a Deep Q-Network (DQN) model to obtain the updated offloading strategy for the AGV group. Simulation results show that, compared with existing methods, the proposed MATO algorithm significantly reduces the maximum task completion time and remains stable under various parameter settings.
Keywords: industrial Internet of Things; task offloading optimization; auto guided vehicles; reinforcement learning
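The abstract above describes a two-stage scheme: a weighted-polling pass that balances task load across edge servers, followed by a DQN refinement for the AGV group. The paper's exact weighting rule, state space, and reward are not reproduced here, so the snippet below is only a minimal Python sketch of a load-balancing allocation step; the server capacities, task sizes, reachability sets, and function name are illustrative assumptions, not taken from the paper.

```python
def weighted_polling_allocation(tasks, servers):
    """Assign each task to the reachable edge server with the lowest
    load-to-capacity ratio (a simple load-balancing heuristic)."""
    load = {s["id"]: 0.0 for s in servers}           # current load per ES
    assignment = {}
    for task in tasks:
        candidates = [s for s in servers if s["id"] in task["reachable"]]
        best = min(candidates, key=lambda s: load[s["id"]] / s["capacity"])
        assignment[task["id"]] = best["id"]
        load[best["id"]] += task["size"]             # update the chosen ES's load
    return assignment

# Illustrative example: two ESs along one AGV's fixed route, three tasks.
servers = [{"id": "ES1", "capacity": 10.0}, {"id": "ES2", "capacity": 5.0}]
tasks = [
    {"id": "t1", "size": 4.0, "reachable": {"ES1", "ES2"}},
    {"id": "t2", "size": 3.0, "reachable": {"ES1", "ES2"}},
    {"id": "t3", "size": 2.0, "reachable": {"ES2"}},  # only ES2 is on this path segment
]
print(weighted_polling_allocation(tasks, servers))   # {'t1': 'ES1', 't2': 'ES2', 't3': 'ES2'}
```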
2. A Client Selection Method Based on Loss Function Optimization for Federated Learning
Authors: Yan Zeng, Siyuan Teng, Tian Xiang, Jilin Zhang, Yuankai Mu, Yongjian Ren, Jian Wan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 10, pp. 1047-1064 (18 pages)
Federated learning is a distributed machine learning method that can address the increasingly serious problems of data islands and user data privacy, as it allows training data to be kept locally and not shared with other users. It trains a global model by aggregating locally computed models of clients rather than their raw data. However, the divergence of local models caused by the data heterogeneity of different clients may lead to slow convergence of the global model. For this problem, we focus on client selection in federated learning, since the selected local models affect the convergence performance of the global model. We propose FedChoice, a client selection method based on loss function optimization, to select appropriate local models and improve the convergence of the global model. It first assigns each client a selection probability based on the value of its loss function: clients with high loss receive a higher selection probability, making them more likely to participate in training. It then introduces a local control vector and a global control vector to predict the local and global gradient directions, respectively, and calculates a gradient correction vector to correct the gradient direction and reduce the cumulative deviation of the local gradients caused by Non-IID data. We conduct experiments to verify the validity of FedChoice on the CIFAR-10, CINIC-10, MNIST, EMNIST, and FEMNIST datasets, and the results show that the convergence of FedChoice is significantly improved compared with FedAvg, FedProx, and FedNova.
Keywords: Federated learning; model aggregation; Non-IID
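The abstract above combines loss-proportional client sampling with a gradient correction built from local and global control vectors. The sketch below covers only the sampling step, assuming selection probabilities proportional to each client's reported local loss; the client losses, the round size k, and the function name are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def select_clients(local_losses, k, rng=None):
    """Sample k clients without replacement, with probability proportional
    to each client's local loss (higher loss -> more likely to be picked)."""
    rng = rng or np.random.default_rng(0)
    losses = np.asarray(local_losses, dtype=float)
    probs = losses / losses.sum()                     # loss-proportional probabilities
    return rng.choice(len(losses), size=k, replace=False, p=probs)

# Illustrative example: 6 clients with heterogeneous local losses, 3 chosen per round.
local_losses = [0.9, 0.2, 1.5, 0.4, 1.1, 0.3]
print(select_clients(local_losses, k=3))              # indices of the selected clients
```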