Abstract: Today, in the field of computer networks, new services have been developed on the Internet and intranets, including mail servers, database management, audio and video delivery, and the web server itself, such as Apache. The number of solutions for this server is growing continuously, and these services are becoming more complex and expensive without fully meeting users' needs. The absence of benchmarks for websites with dynamic content is a major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that determines the actual performance of different servers in terms of user satisfaction. Furthermore, we identify performance characteristics of a system, such as throughput, resource utilization, and response time, through measurement and modeling by simulation. Finally, we present a simple queueing model of an Apache web server that reasonably represents the behavior of a saturated web server, built with the Simulink model in Matlab (Matrix Laboratory), and that also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared with other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations we conducted.
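To make the queueing model concrete, here is a minimal sketch in plain Python (not the paper's Simulink/Matlab model) of a single-server FIFO queue with Poisson arrivals and exponential service; the rates `lam` and `mu` and the distributional assumptions are ours, chosen so the simulation can be checked against the analytic M/M/1 mean response time 1/(mu - lam).

```python
import random

def simulate_queue(lam=80.0, mu=100.0, n_requests=200_000, seed=1):
    """Single-server FIFO queue: Poisson arrivals at rate lam (req/s),
    exponential service at rate mu (req/s). Returns (mean response
    time, throughput), the two metrics reported in the abstract."""
    random.seed(seed)
    t_arrive = 0.0        # arrival time of the current request
    t_free = 0.0          # time at which the server next becomes free
    total_response = 0.0
    for _ in range(n_requests):
        t_arrive += random.expovariate(lam)
        start = max(t_arrive, t_free)          # queue if the server is busy
        t_free = start + random.expovariate(mu)
        total_response += t_free - t_arrive    # waiting + service time
    return total_response / n_requests, n_requests / t_free

avg_rt, throughput = simulate_queue()
print(f"mean response time: {avg_rt:.4f} s (M/M/1 predicts {1 / (100.0 - 80.0):.4f} s)")
print(f"throughput: {throughput:.1f} req/s")
```

As `lam` approaches `mu`, the simulated response time grows sharply, which is the saturated-server regime the paper's model targets.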
Funding: Supported in part by the National Natural Science Foundation of China under Grant U22B2005 and Grant 62372462.
Abstract: The rapid expansion of artificial intelligence (AI) applications has raised significant concerns about user privacy, prompting the development of privacy-preserving machine learning (ML) paradigms such as federated learning (FL). FL enables the distributed training of ML models, keeping data on local devices and thus addressing users' privacy concerns. However, challenges arise from the heterogeneous nature of mobile client devices, partial engagement in training, and non-independent and identically distributed (non-IID) data, leading to performance degradation and optimization objective bias in FL training. With the development of 5G/6G networks and the integration of cloud and edge computing resources, globally distributed cloud computing resources can be effectively utilized to optimize the FL process. By selecting the parameter server through a dedicated selection mechanism, the approach reduces network latency overhead without increasing monetary cost, and balances the objectives of communication optimization and low-engagement mitigation, which cannot be achieved simultaneously in the single-server frameworks of existing works. In this paper, we propose the FedAdaSS algorithm, an adaptive parameter server selection mechanism designed to optimize training efficiency in each round of FL training by selecting the most appropriate server as the parameter server. Our approach leverages the flexibility of cloud computing resources and allows organizers to strategically select servers for data broadcasting and aggregation, improving training performance while maintaining cost efficiency. FedAdaSS estimates the utility of client systems and servers and incorporates an adaptive random reshuffling strategy that selects the optimal server in each round of the training process. Theoretical analysis confirms the convergence of FedAdaSS under strong convexity and L-smoothness assumptions, and comparative experiments within the FLSim framework demonstrate a 12%–20% reduction in rounds-to-accuracy compared with Federated Averaging (FedAvg) with random reshuffling under a single server. Furthermore, FedAdaSS effectively mitigates the performance loss caused by low client engagement, reducing the loss indicator by 50%.
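The abstract does not give FedAdaSS's utility estimator, so the sketch below only illustrates the shape of the mechanism: each round the client order is randomly reshuffled, a subset participates, and the round's parameter server is the candidate with the best utility score (here, lowest mean client latency). The `latency` table, scoring rule, and region names are illustrative assumptions, not the paper's model.

```python
import random

def select_server(servers, clients, latency):
    """Pick the candidate with the lowest estimated cost for this round's
    participants (a stand-in for the paper's utility estimation)."""
    return min(servers, key=lambda s: sum(latency[c][s] for c in clients) / len(clients))

def train(servers, all_clients, latency, rounds=3, frac=0.5, seed=0):
    rng = random.Random(seed)
    order = list(all_clients)
    per_round = max(1, int(frac * len(order)))
    for r in range(rounds):
        rng.shuffle(order)                    # adaptive random reshuffling
        participants = order[:per_round]
        ps = select_server(servers, participants, latency)
        # ps would broadcast the model, collect updates, and aggregate here
        print(f"round {r}: parameter server={ps}, clients={participants}")

servers = ["us-east", "eu-west"]              # hypothetical cloud regions
clients = ["c0", "c1", "c2", "c3"]
latency = {c: {"us-east": 30 + 10 * i, "eu-west": 80 - 10 * i}   # ms, made up
           for i, c in enumerate(clients)}
train(servers, clients, latency)
```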
Abstract: This study developed a mail server program using the Socket API and Python. The program uses the Hypertext Transfer Protocol (HTTP) to receive emails from browser clients and forwards them to actual email service providers via the Simple Mail Transfer Protocol (SMTP). As a web server, it handles Transmission Control Protocol (TCP) connection requests from browsers, receives HTTP commands and email data, and temporarily stores the emails in a file. Simultaneously, as an SMTP client, the program establishes a TCP connection with the actual mail server, sends SMTP commands, and transmits the previously saved emails. In addition, we analyze the security issues, efficiency, and availability of this server, providing insights into the design of SMTP mail servers.
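Since the abstract spells out both halves of the design (an HTTP front end and a raw-socket SMTP client), a compact sketch is possible. The hostnames, addresses, and single-recv request parsing below are simplifications, the SMTP dialogue omits TLS, authentication, and error handling, and running it requires a reachable SMTP server.

```python
import socket

def smtp_forward(mail_host, sender, rcpt, body, port=25):
    """SMTP client over a raw TCP socket: one command, one reply (simplified)."""
    s = socket.create_connection((mail_host, port))
    def chat(line=None):
        if line is not None:
            s.sendall(line.encode() + b"\r\n")
        return s.recv(1024)                    # read the server's reply
    chat()                                     # 220 greeting
    chat("HELO relay.example")
    chat(f"MAIL FROM:<{sender}>")
    chat(f"RCPT TO:<{rcpt}>")
    chat("DATA")                               # expect 354
    s.sendall(body.encode() + b"\r\n.\r\n")    # message ends with a lone dot
    s.recv(1024)                               # expect 250 (accepted)
    chat("QUIT")
    s.close()

def http_front_end(port=8080):
    """Accept one browser POST, save the body to a file, then forward it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(65536).decode(errors="replace")
    body = request.split("\r\n\r\n", 1)[1]     # naive: body fits in one recv
    with open("outbox.txt", "w") as f:         # temporary storage, as described
        f.write(body)
    smtp_forward("smtp.example.com", "a@example.com", "b@example.com", body)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 4\r\n\r\nsent")
    conn.close()
    srv.close()
```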
Funding: Supported in part by the National Key Research and Development Project (2020YFB1807204), in part by the National Natural Science Foundation of China (U2001213 and 61971191), in part by the Beijing Natural Science Foundation under Grant L201011, and in part by the Jiangxi Key Laboratory of Artificial Intelligence Transportation Information Transmission and Processing (20202BCD42010).
Abstract: Mobile edge computing (MEC) provides services to devices and reduces latency in cellular Internet of Things (IoT) networks. However, a challenging problem is how to deploy MEC servers economically and efficiently. This paper investigates the deployment of MEC servers on a real-world road network using an improved genetic algorithm (GA) scheme. We first use a threshold-based K-means algorithm to form vehicle clusters according to vehicle locations. We then select base stations (BSs) based on the cluster center coordinates as the set of candidate deployment locations for MEC servers. We further select BSs using a combined simulated annealing (SA) algorithm and GA to minimize the deployment cost. Simulation results show that the improved GA deploys MEC servers effectively. In addition, the proposed algorithm outperforms the GA and SA algorithms in terms of convergence speed and solution quality.
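The clustering stage is straightforward to sketch: plain k-means over vehicle coordinates, plus the mapping of cluster centers to their nearest base stations, which yields the candidate deployment set. The threshold rule for choosing k and the SA/GA cost search are not reproduced here, and all coordinates would come from the road-network data.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over 2-D vehicle coordinates; returns cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                       # assign each vehicle to the
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[i].append(p)                # nearest current center
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   if g else centers[i]        # keep empty clusters in place
                   for i, g in enumerate(groups)]
    return centers

def candidate_sites(centers, base_stations):
    """Map each cluster center to its nearest BS; these BSs form the
    candidate location set handed to the SA/GA deployment search."""
    return [min(base_stations, key=lambda b: math.dist(c, b)) for c in centers]
```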
Funding: Partially supported by the Computing Power Networks and New Communication Primitives project under Grant No. HC-CN-2020120001, the National Natural Science Foundation of China under Grant No. 62102066, and Open Research Projects of Zhejiang Lab under Grant No. 2022QA0AB02.
Abstract: In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs leads to a significant slowdown of model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, we propose a network-aware adaptive PS load distribution scheme that accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and the results show that our scheme achieves up to a 2.68-times speed-up of model training in dynamic and heterogeneous network environments.
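The abstract does not detail the adjustment rule, so the sketch below shows one plausible reading: repartition the model's parameter blocks so that each PS's transfer time (assigned bytes divided by its measured bandwidth) is roughly equal, via a greedy largest-block-first heuristic. Block sizes, bandwidths, and names are illustrative, not the paper's scheme.

```python
def distribute_load(param_sizes, bandwidth):
    """Greedily assign parameter blocks (largest first) to the PS whose
    normalized finish time (bytes / bandwidth) stays smallest, so all
    PSs complete synchronization at roughly the same time."""
    load = {ps: 0.0 for ps in bandwidth}       # bytes assigned per PS
    plan = {ps: [] for ps in bandwidth}
    for name, size in sorted(param_sizes.items(), key=lambda kv: -kv[1]):
        ps = min(bandwidth, key=lambda p: (load[p] + size) / bandwidth[p])
        load[ps] += size
        plan[ps].append(name)
    return plan

sizes = {"fc1": 400, "fc2": 250, "conv": 150, "bias": 10}   # MB, made up
bw = {"ps0": 10.0, "ps1": 5.0}                              # Gbit/s, made up
print(distribute_load(sizes, bw))   # {'ps0': ['fc1', 'conv'], 'ps1': ['fc2', 'bias']}
```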
Funding: This work is partially supported by the National Key Research and Development Project under Grant 2018YFB1802402.
Abstract: The error-correction performance of Belief Propagation (BP) decoding for polar codes is satisfactory compared with Successive Cancellation (SC) decoding. Nevertheless, BP decoding has to complete a fixed number of iterations, which results in high computational complexity. This necessitates intelligent identification of successful BP decoding for early termination of the decoding process, to avoid unnecessary iterations and minimize the computational complexity of BP decoding. This paper proposes a hybrid technique that combines the parity check with the G-matrix to reduce the computational complexity of the BP decoder for polar codes. The proposed hybrid technique takes advantage of the parity check to intelligently identify a valid codeword at an early stage and terminate the BP decoding process, which minimizes the overhead of the G-matrix and reduces the computational complexity of BP decoding. We describe a detailed mechanism that incorporates the parity bits as an outer code and prove that the proposed hybrid technique minimizes computational complexity while preserving the BP error-correction performance. Moreover, a mathematical formulation of the proposed hybrid technique that minimizes the computation cost of the G-matrix is elaborated. The performance of the proposed hybrid technique is validated by comparison with state-of-the-art early stopping criteria for BP decoding. Simulation results show that the proposed hybrid technique reduces the iterations of BP decoding by about 90% at high Signal-to-Noise Ratio (SNR) (3.5–4 dB) and approaches the error-correction performance of the G-matrix criterion and the conventional BP decoder for polar codes.
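The G-matrix check itself is cheap to state: re-encode the hard-decision estimate û and compare it with the hard decisions on the coded bits, since for polar codes x = u·G can be computed with an O(N log N) butterfly. The sketch below wires that behind a cheap parity pre-check, following the hybrid idea; the `parity_check` callable is a placeholder for the paper's outer-code parity test.

```python
import numpy as np

def polar_transform(u):
    """x = u @ F^(⊗n) over GF(2) via the O(N log N) butterfly
    (u: uint8 array whose length N is a power of two)."""
    x = u.copy()
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def early_stop(u_hat, x_hat, parity_positions, parity_check):
    """Hybrid stop test: run the cheap outer-code parity check first and
    pay for the full G-matrix re-encoding only if it passes."""
    if not parity_check(u_hat, parity_positions):
        return False
    return np.array_equal(polar_transform(u_hat), x_hat)

u_hat = np.array([1, 0, 1, 1], dtype=np.uint8)
x_hat = polar_transform(u_hat)                 # consistent pair, so decoding stops
print(early_stop(u_hat, x_hat, [], lambda u, pos: True))   # True
```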
Abstract: Currently, e-learning is one of the most prevalent educational methods because of the needs of today's world. Virtual classrooms and web-based learning are becoming the new method of teaching remotely. Students often experience a lack of access to resources, most commonly the educational material itself. In remote locations, educational institutions face significant challenges in accessing various web-based materials due to bandwidth and network infrastructure limitations. The objective of this study is to demonstrate an optimization and queueing technique for allocating optimal servers and slots for users accessing cloud-based e-learning applications. The proposed method provides an optimization and queueing algorithm for multi-server and multi-city constraints and considers where to locate the best servers. For optimal server selection, the Rider Optimization Algorithm (ROA) is utilized. A performance analysis based on time, memory, and delay was carried out for the proposed methodology in comparison with existing techniques. Compared with Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Firefly Algorithm (FFA), the proposed method is more suitable and effective because the other three algorithms fall into local optima and are only suitable for small numbers of user requests. Thus, the proposed method outperforms the conventional techniques.
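The Rider Optimization Algorithm itself is too involved to reproduce from the abstract alone, but the allocation decision it feeds can be shown with a simple queueing heuristic: route each user request to the server whose expected wait, estimated as (queue length + 1) / service rate, is smallest. Server names, rates, and queue lengths below are illustrative assumptions.

```python
def assign_request(servers):
    """Send the next user request to the server with the smallest
    expected wait, estimated as (queue_len + 1) / service_rate."""
    return min(servers, key=lambda s: (s["queue"] + 1) / s["rate"])

servers = [
    {"name": "city-A", "queue": 4, "rate": 10.0},   # 10 requests/s capacity
    {"name": "city-B", "queue": 1, "rate": 6.0},
    {"name": "city-C", "queue": 0, "rate": 4.0},
]
best = assign_request(servers)
best["queue"] += 1                # the routed request now occupies a slot
print("route to", best["name"])   # city-C: 1/4 s beats 2/6 s and 5/10 s
```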