Funding: Supported in part by the National Natural Science Foundation of China under Grant U22B2005 and Grant 62372462.
Abstract: The rapid expansion of artificial intelligence (AI) applications has raised significant concerns about user privacy, prompting the development of privacy-preserving machine learning (ML) paradigms such as federated learning (FL). FL enables the distributed training of ML models while keeping data on local devices, thus addressing users' privacy concerns. However, challenges arise from the heterogeneity of mobile client devices, partial participation in training, and non-independent and identically distributed (non-IID) data, leading to performance degradation and bias in the optimization objective during FL training. With the development of 5G/6G networks and the integration of cloud and edge computing resources, globally distributed cloud computing resources can be exploited to optimize the FL process. By choosing the parameter server for each round through a dedicated selection mechanism, network latency can be reduced without increasing monetary cost, while also balancing communication optimization and low-engagement mitigation, two objectives that cannot be achieved simultaneously in the single-server frameworks of existing works. In this paper, we propose FedAdaSS, an adaptive parameter server selection algorithm designed to improve training efficiency by selecting the most appropriate server as the parameter server in each round of FL training. Our approach leverages the flexibility of cloud computing resources and allows organizers to strategically select servers for data broadcasting and aggregation, improving training performance while maintaining cost efficiency. FedAdaSS estimates the utility of client systems and servers and incorporates an adaptive random reshuffling strategy that selects the optimal server in each training round. Theoretical analysis confirms the convergence of FedAdaSS under strong convexity and L-smoothness assumptions, and comparative experiments within the FLSim framework demonstrate a 12%–20% reduction in rounds-to-accuracy compared with Federated Averaging (FedAvg) with random reshuffling under a single fixed server. Furthermore, FedAdaSS effectively mitigates the performance loss caused by low client engagement, reducing the loss indicator by 50%.
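The abstract does not give FedAdaSS's utility model or selection rule in detail, so the following is only a minimal sketch of the idea of adaptive, per-round parameter server selection. The utility function, the latency and capacity estimates, and the epoch-wise reshuffling are illustrative assumptions, not the authors' algorithm.

```python
import random

def select_parameter_server(servers, round_idx, latency_est, capacity, order):
    """Pick a parameter server for this training round.

    servers     -- list of candidate server ids
    latency_est -- dict: server id -> estimated client-to-server latency (s)
    capacity    -- dict: server id -> relative compute capacity
    order       -- mutable list holding the reshuffled visiting order
    """
    n = len(servers)
    # Reshuffle the visiting order once per "epoch" of n rounds,
    # loosely mirroring the random reshuffling idea.
    if round_idx % n == 0:
        order[:] = random.sample(servers, n)

    # Hypothetical utility: prefer high capacity and low latency.
    def utility(s):
        return capacity[s] / (1.0 + latency_est[s])

    pos = round_idx % n
    best = max(order[pos:], key=utility)
    # Swap the chosen server to the current position so each server is
    # used at most once per epoch.
    i = order.index(best)
    order[pos], order[i] = order[i], order[pos]
    return best

# Placeholder estimates for three cloud regions.
servers = ["us-east", "eu-west", "ap-south"]
latency = {"us-east": 0.08, "eu-west": 0.05, "ap-south": 0.12}
capacity = {"us-east": 1.0, "eu-west": 0.8, "ap-south": 1.2}
order = []
for r in range(6):
    print(r, select_parameter_server(servers, r, latency, capacity, order))
```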
Abstract: Today, in the field of computer networks, many new services have been developed on the Internet and on intranets, including mail servers, database management, audio and video services, and the Apache web server itself. The number of solutions for this server therefore keeps growing; these services are becoming increasingly complex and expensive, yet they still fail to fulfill users' needs. The absence of benchmarks for websites with dynamic content is a major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we characterize performance metrics such as throughput, resource utilization, and response time of a system through measurement and through modeling by simulation. Finally, we present a simple queue model of an Apache web server that reasonably represents the behavior of a saturated web server, implemented as a Simulink model in Matlab (Matrix Laboratory) and incorporating sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulation. Compared to other models, our model is conceptually straightforward. It has been validated through the measurements and simulations conducted during our tests.
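The paper's queue model is built in Simulink; purely as an illustration of the same two output metrics (average response time and throughput), here is a minimal single-server FIFO queue simulation in Python. The arrival and service rates are made-up placeholders, and exponential interarrival and service times are an assumption, not the paper's traffic model.

```python
import random

def simulate_single_server_queue(arrival_rate, service_rate, n_requests=10000, seed=0):
    """Single-server FIFO queue with exponential interarrival/service times."""
    rng = random.Random(seed)
    clock = 0.0           # arrival clock
    server_free_at = 0.0  # time the server finishes the previous request
    total_response = 0.0
    last_departure = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)           # next arrival
        start = max(clock, server_free_at)               # wait if the server is busy
        finish = start + rng.expovariate(service_rate)   # service completes
        server_free_at = finish
        total_response += finish - clock
        last_departure = finish
    return total_response / n_requests, n_requests / last_departure

# Placeholder rates: 120 req/s offered to a server that handles 100 req/s,
# i.e. a saturated server whose throughput approaches the service rate.
avg_rt, throughput = simulate_single_server_queue(120.0, 100.0)
print(f"avg response time {avg_rt * 1000:.1f} ms, throughput {throughput:.1f} req/s")
```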
Abstract: This study developed a mail server program using the socket API and Python. The program uses the Hypertext Transfer Protocol (HTTP) to receive emails from browser clients and forwards them to actual email service providers via the Simple Mail Transfer Protocol (SMTP). As a web server, it handles Transmission Control Protocol (TCP) connection requests from browsers, receives HTTP commands and email data, and temporarily stores the emails in a file. Simultaneously, as an SMTP client, the program establishes a TCP connection with the actual mail server, sends SMTP commands, and transmits the previously saved emails. In addition, we analyze the security issues, efficiency, and availability of this server, providing insights into the design of SMTP mail servers.
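As a rough sketch of the described HTTP-to-SMTP flow (not the study's code), the snippet below accepts one browser connection over TCP, treats the HTTP request body as the mail text, stores it in a file, and then relays it as an SMTP client. For brevity it uses Python's smtplib rather than hand-written SMTP commands, and the listen address, relay host, and mail addresses are placeholders.

```python
import socket
import smtplib
from email.message import EmailMessage

LISTEN_ADDR = ("0.0.0.0", 8080)                   # placeholder web-facing address
RELAY_HOST, RELAY_PORT = "smtp.example.com", 25   # placeholder mail provider

def relay_one_message():
    """Accept one browser connection, store the mail, then relay it via SMTP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(65536).decode("utf-8", errors="replace")
            body = request.split("\r\n\r\n", 1)[-1]   # HTTP body holds the mail text
            with open("outbox.txt", "w") as f:        # temporary file storage
                f.write(body)
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

    # Act as an SMTP client toward the real provider (placeholder addresses).
    msg = EmailMessage()
    msg["From"], msg["To"] = "sender@example.com", "recipient@example.com"
    msg["Subject"] = "relayed message"
    msg.set_content(body)
    with smtplib.SMTP(RELAY_HOST, RELAY_PORT) as smtp:
        smtp.send_message(msg)
```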
Funding: Supported in part by the National Key Research and Development Project (2020YFB1807204), in part by the National Natural Science Foundation of China (U2001213 and 61971191), in part by the Beijing Natural Science Foundation under Grant L201011, and in part by the Jiangxi Key Laboratory of Artificial Intelligence Transportation Information Transmission and Processing (20202BCD42010).
Abstract: Mobile edge computing (MEC) provides services to devices and reduces latency in cellular Internet of Things (IoT) networks. A challenging problem, however, is how to deploy MEC servers economically and efficiently. This paper investigates the MEC server deployment problem on a real-world road network using an improved genetic algorithm (GA) scheme. We first use a threshold-based K-means algorithm to form vehicle clusters according to vehicle locations. We then select base stations (BSs), based on the cluster center coordinates, as the set of candidate deployment locations for MEC servers. We further select BSs using a combination of simulated annealing (SA) and the GA to minimize the deployment cost. Simulation results show that the improved GA deploys MEC servers effectively. In addition, the proposed algorithm outperforms the plain GA and SA algorithms in terms of convergence speed and solution quality.
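The abstract does not spell out the threshold-based K-means step, so the sketch below shows one plausible reading: increase the number of clusters until every vehicle lies within a chosen distance of its cluster center, then treat the centers as candidate sites near which BSs can host MEC servers. The vehicle coordinates and the 2 km threshold are placeholders.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[nearest].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

def threshold_kmeans(points, threshold):
    """Grow k until every point is within `threshold` of its cluster center."""
    for k in range(1, len(points) + 1):
        centers, groups = kmeans(points, k)
        if all(math.dist(p, centers[i]) <= threshold
               for i, g in enumerate(groups) for p in g):
            return centers
    return points

# Placeholder vehicle positions on a 10 km x 10 km area, 2 km coverage threshold.
vehicles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
candidate_sites = threshold_kmeans(vehicles, threshold=2.0)
print(len(candidate_sites), "candidate MEC deployment sites")
```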
Funding: Partially supported by the Computing Power Networks and New Communication Primitives project under Grant No. HC-CN-2020120001, the National Natural Science Foundation of China under Grant No. 62102066, and the Open Research Projects of Zhejiang Lab under Grant No. 2022QA0AB02.
Abstract: In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load across PSs leads to a significant slowdown of model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network state. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and the results show that our scheme achieves up to a 2.68x speed-up of model training in a dynamic and heterogeneous network environment.
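The abstract does not describe the scheme's control loop, so here is only a minimal sketch of the underlying idea: give each parameter server a share of the model partitions proportional to its currently measured available bandwidth, so that congested links carry less synchronization traffic. The partition count and bandwidth figures are placeholders.

```python
def rebalance_partitions(num_partitions, bandwidth_mbps):
    """Assign model partitions to parameter servers proportionally to each
    server's measured available bandwidth (bigger pipe, more load)."""
    total_bw = sum(bandwidth_mbps.values())
    shares = {ps: bw / total_bw for ps, bw in bandwidth_mbps.items()}
    # Integer partition counts that sum exactly to num_partitions.
    counts = {ps: int(num_partitions * s) for ps, s in shares.items()}
    leftover = num_partitions - sum(counts.values())
    for ps in sorted(shares, key=shares.get, reverse=True)[:leftover]:
        counts[ps] += 1
    return counts

# Placeholder measurements: ps2's link is congested, so it receives fewer partitions.
print(rebalance_partitions(64, {"ps0": 10_000, "ps1": 10_000, "ps2": 2_500}))
```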
Abstract: Currently, e-learning is one of the most prevalent educational methods because of its necessity in today's world. Virtual classrooms and web-based learning are becoming the new way of teaching remotely. Students often lack access to resources, most commonly the educational material itself: in remote locations, educational institutions face significant challenges in accessing web-based materials due to bandwidth and network infrastructure limitations. The objective of this study is to demonstrate an optimization and queueing technique for allocating optimal servers and slots to users accessing cloud-based e-learning applications. The proposed method provides an optimization and queueing algorithm under multi-server and multi-city constraints and considers where to locate the best servers. For optimal server selection, the Rider Optimization Algorithm (ROA) is utilized. A performance analysis based on time, memory, and delay was carried out for the proposed methodology in comparison with existing techniques. When the ROA-based method is compared to Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Firefly Algorithm (FFA), it proves more suitable and effective, because the other three algorithms fall into local optima and are only suitable for small numbers of user requests. The proposed method thus outperforms the conventional techniques.
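The Rider Optimization Algorithm itself involves several rider groups and update rules that do not fit a short sketch, so the snippet below deliberately substitutes a simple greedy baseline for the same allocation problem: each user request is sent to the candidate server with the lowest estimated waiting time, subject to its free slots. Server names, slot counts, and service rates are placeholders.

```python
def assign_users(num_users, servers):
    """Greedy baseline (not the ROA itself): send each user to the server with
    the lowest expected wait, subject to its slot capacity.

    servers: dict name -> {"slots": free slots, "rate": requests served per second,
                           "queued": requests already waiting}
    """
    assignment = {}
    for user in range(num_users):
        # Expected wait approximated as queued work divided by service rate.
        candidates = {s: v["queued"] / v["rate"]
                      for s, v in servers.items() if v["slots"] > 0}
        if not candidates:
            break                      # every server is full
        best = min(candidates, key=candidates.get)
        assignment[user] = best
        servers[best]["slots"] -= 1
        servers[best]["queued"] += 1
    return assignment

# Placeholder city servers with made-up slot counts and service rates.
servers = {"city_a": {"slots": 3, "rate": 5.0, "queued": 2},
           "city_b": {"slots": 5, "rate": 4.0, "queued": 1}}
print(assign_users(6, servers))
```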
Abstract: In cloud data centers, workload consolidation is one of the phases during which tasks are allocated to the available hosts. It ensures that the smallest possible number of hosts is used without compromising the Service Level Agreement (SLA). To consolidate the workloads, hosts are segregated into three categories based on their utilization: normal hosts, underloaded hosts, and overloaded hosts. Identifying an extensively used host or an underloaded host is challenging, and threshold values have been proposed in the literature to detect these cases. The current study aims to improve the existing methods that choose the underloaded hosts, remove the Virtual Machines (VMs) from them, and finally place those VMs on other hosts. We propose a Host Resource Utilization Aware (HRUAA) algorithm to detect underloaded hosts and place their virtual machines on different hosts in a dynamic cloud environment. The mechanism presented in this study is contrasted empirically with existing mechanisms. The results establish that numerous hosts can be shut down while the users' workload requirements are still met. The proposed method is energy-efficient in workload consolidation, saves cost and time, and makes good use of the active hosts.
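The abstract does not list the utilization thresholds HRUAA uses, so the sketch below uses illustrative 20%/80% cut-offs and a simple first-fit placement to show the general shape of the idea: classify hosts by utilization, migrate the VMs off underloaded hosts onto normal hosts, and power off the hosts that end up empty.

```python
def consolidate(hosts, low=0.2, high=0.8):
    """hosts: dict name -> {"capacity": cpu units, "vms": {vm: cpu demand}}
    Returns (migrations, hosts that can be powered off).
    Thresholds are illustrative, not the paper's values."""
    def util(h):
        return sum(h["vms"].values()) / h["capacity"]

    under = [n for n, h in hosts.items() if util(h) < low]
    normal = [n for n, h in hosts.items() if low <= util(h) <= high]

    migrations, poweroff = [], []
    for src in under:
        for vm, demand in list(hosts[src]["vms"].items()):
            # First-fit: place the VM on a normal host that stays below `high`.
            target = next((t for t in normal
                           if util(hosts[t]) + demand / hosts[t]["capacity"] <= high),
                          None)
            if target is None:
                continue               # no room; the VM stays where it is
            hosts[target]["vms"][vm] = hosts[src]["vms"].pop(vm)
            migrations.append((vm, src, target))
        if not hosts[src]["vms"]:
            poweroff.append(src)       # emptied host can be switched off
    return migrations, poweroff

# Placeholder hosts: h1 is underloaded, h2 is normal and can absorb h1's VM.
hosts = {"h1": {"capacity": 10, "vms": {"vm1": 1}},
         "h2": {"capacity": 10, "vms": {"vm2": 5, "vm3": 2}}}
print(consolidate(hosts))
```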