Funding: This work has been funded by King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Project Number (RSPD2024R857).
Abstract: Scalability and data privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on private data by aggregating weights from multiple devices and taking advantage of the device-agnostic environment of web browsers. Nevertheless, relying on a single central server in browser-based federated systems can limit scalability and disrupt the training process as the number of clients grows. Additionally, information about the training dataset can potentially be extracted from the distributed weights, reducing the privacy of the local data used for training. In this paper, we investigate the challenges of scalability and data privacy to increase the efficiency of distributed model training. We propose the web-federated learning exchange (WebFLex) framework, which aims to improve the decentralization of the federated learning process. WebFLex is also designed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex uses peer-to-peer interactions and secure weight exchanges via browser-to-browser Web Real-Time Communication (WebRTC), effectively eliminating the need for a central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex maintains a robust federated learning procedure even in the face of device disconnections and network variability. Finally, it improves data privacy by adding artificial noise, achieving an appropriate balance between accuracy and privacy preservation.
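The abstract does not include source code, but the privacy mechanism it describes, perturbing local weights with artificial noise before a browser-to-browser exchange, can be illustrated with a minimal Python sketch. The sketch assumes the weights are NumPy arrays; the `send_to_peer` and `receive_from_peer` callbacks are hypothetical stand-ins for the WebRTC data channel used in WebFLex, not part of the paper.

```python
import numpy as np

def add_artificial_noise(weights, noise_scale=0.01, rng=None):
    """Add zero-mean Gaussian noise to each weight tensor before sharing.

    A larger noise_scale strengthens privacy protection but lowers the
    accuracy of the aggregated model; this is the trade-off the abstract
    refers to.
    """
    rng = rng or np.random.default_rng()
    return [w + rng.normal(0.0, noise_scale, size=w.shape) for w in weights]

def exchange_with_peer(local_weights, send_to_peer, receive_from_peer):
    """Peer-to-peer weight exchange (hypothetical transport callbacks).

    In WebFLex the transport would be a browser-to-browser WebRTC data
    channel; here the two callbacks stand in for that channel.
    """
    noisy = add_artificial_noise(local_weights)
    send_to_peer(noisy)
    peer_weights = receive_from_peer()
    # Simple pairwise averaging of the exchanged (noisy) weights.
    return [(a + b) / 2.0 for a, b in zip(noisy, peer_weights)]
```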
Abstract: Today, in the field of computer networks, new services have been developed on the Internet and on intranets, including mail servers, database management, audio, video, and the Apache web server itself. The number of solutions for this server is growing continuously; these services are becoming more and more complex and expensive without fulfilling the needs of users. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identify performance characteristics of a system, such as throughput, resource utilization, and response time, through measurement and through modeling by simulation. Finally, we present a simple queue model of an Apache web server that reasonably represents the behavior of a saturated web server; it is built as a Simulink model in Matlab (Matrix Laboratory) and also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulation. Compared to other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations conducted in our tests.
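The paper's queue model is built in Simulink; as a rough illustration of the same idea, the Python sketch below simulates a single-server FIFO queue with Poisson arrivals and exponential service times and reports the two metrics named in the abstract, average response time and throughput. The arrival and service rates are illustrative values, not figures from the paper.

```python
import random

def simulate_mm1(arrival_rate=80.0, service_rate=100.0, n_requests=100_000, seed=1):
    """Single-server FIFO queue: Poisson arrivals, exponential service times."""
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current request
    server_free_at = 0.0   # time the server finishes its current request
    total_response = 0.0
    last_departure = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)      # next arrival
        start = max(clock, server_free_at)          # wait if the server is busy
        service = rng.expovariate(service_rate)
        server_free_at = start + service
        total_response += server_free_at - clock    # waiting time + service time
        last_departure = server_free_at
    avg_response = total_response / n_requests
    throughput = n_requests / last_departure
    return avg_response, throughput

avg_rt, tput = simulate_mm1()
print(f"average response time ~ {avg_rt:.4f} s, throughput ~ {tput:.1f} req/s")
```

With these example rates the simulated average response time should approach the analytical M/M/1 value 1/(mu - lambda) = 0.05 s, which is a quick sanity check on the model.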
Funding: Supported in part by the National Natural Science Foundation of China under Grant U22B2005 and Grant 62372462.
Abstract: The rapid expansion of artificial intelligence (AI) applications has raised significant concerns about user privacy, prompting the development of privacy-preserving machine learning (ML) paradigms such as federated learning (FL). FL enables the distributed training of ML models, keeping data on local devices and thus addressing users' privacy concerns. However, challenges arise from the heterogeneous nature of mobile client devices, partial engagement in training, and non-independent and identically distributed (non-IID) data, leading to performance degradation and optimization objective bias in FL training. With the development of 5G/6G networks and the integration of cloud and edge computing resources, globally distributed cloud computing resources can be effectively utilized to optimize the FL process. Selecting the parameter server through a dedicated selection mechanism not only avoids additional monetary cost and reduces network latency overhead, but also balances the objectives of communication optimization and low-engagement mitigation, which cannot be achieved simultaneously in the single-server frameworks of existing works. In this paper, we propose the FedAdaSS algorithm, an adaptive parameter server selection mechanism designed to optimize training efficiency in each round of FL training by selecting the most appropriate server as the parameter server. Our approach leverages the flexibility of cloud computing resources and allows organizers to strategically select servers for data broadcasting and aggregation, thus improving training performance while maintaining cost efficiency. FedAdaSS estimates the utility of client systems and servers and incorporates an adaptive random reshuffling strategy that selects the optimal server in each round of the training process. Theoretical analysis confirms the convergence of FedAdaSS under strong convexity and L-smoothness assumptions, and comparative experiments within the FLSim framework demonstrate a 12%–20% reduction in training rounds-to-accuracy compared with Federated Averaging (FedAvg) with random reshuffling under a single server. Furthermore, FedAdaSS effectively mitigates the performance loss caused by low client engagement, reducing the loss indicator by 50%.
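The abstract's core idea, choosing a different parameter server each round by combining utility estimates with random reshuffling, can be summarized in a simplified Python sketch. This is not the FedAdaSS algorithm itself: the `estimate_utility` callback, the scheduling rule, and the toy latency values are illustrative assumptions standing in for the paper's utility model.

```python
import random

def adaptive_server_schedule(servers, estimate_utility, n_rounds, seed=0):
    """Yield one parameter server per FL round.

    Servers are visited in randomly reshuffled passes (so every server is
    eventually revisited), and within each pass the candidate with the
    highest estimated utility among those not yet used is chosen.
    estimate_utility(server, round_idx) is an illustrative callback that
    could encode, e.g., latency and client-engagement estimates.
    """
    rng = random.Random(seed)
    remaining = []
    for t in range(n_rounds):
        if not remaining:                   # start a new reshuffled pass
            remaining = list(servers)
            rng.shuffle(remaining)
        best = max(remaining, key=lambda s: estimate_utility(s, t))
        remaining.remove(best)
        yield best

# Toy usage: prefer servers with lower simulated latency (hypothetical values).
latency = {"eu-server": 40, "us-server": 25, "asia-server": 60}
schedule = adaptive_server_schedule(
    list(latency), lambda s, t: -latency[s], n_rounds=6)
print(list(schedule))
```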
Abstract: This study developed a mail server program using the Socket API and Python. The program uses the Hypertext Transfer Protocol (HTTP) to receive emails from browser clients and forwards them to actual email service providers via the Simple Mail Transfer Protocol (SMTP). As a web server, it handles Transmission Control Protocol (TCP) connection requests from browsers, receives HTTP commands and email data, and temporarily stores the emails in a file. Simultaneously, as an SMTP client, the program establishes a TCP connection with the actual mail server, sends SMTP commands, and transmits the previously saved emails. In addition, we analyze the security issues, efficiency, and availability of this server, providing insights into the design of SMTP mail servers.
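The paper implements the relay with the raw Socket API; the sketch below shows the same overall design using Python's standard library (`http.server` for the browser-facing side, `smtplib` for the SMTP-client side). The SMTP host, port, JSON payload format, and file name are placeholders for illustration, not details from the paper.

```python
import json
import smtplib
from email.message import EmailMessage
from http.server import BaseHTTPRequestHandler, HTTPServer

SMTP_HOST = "smtp.example.com"   # placeholder for the real provider's mail server
SMTP_PORT = 25

class MailRelayHandler(BaseHTTPRequestHandler):
    """Accepts email data from a browser over HTTP and relays it via SMTP."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))   # {"from", "to", "subject", "body"}

        # Temporarily store the received email in a file, as in the paper's design.
        with open("received_mail.json", "w", encoding="utf-8") as f:
            json.dump(payload, f)

        # Act as an SMTP client toward the actual mail server.
        msg = EmailMessage()
        msg["From"] = payload["from"]
        msg["To"] = payload["to"]
        msg["Subject"] = payload.get("subject", "")
        msg.set_content(payload.get("body", ""))
        with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
            smtp.send_message(msg)

        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"sent")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MailRelayHandler).serve_forever()
```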