Mobile edge computing (MEC) provides services to devices and reduces latency in cellular Internet of Things (IoT) networks. A key challenge, however, is how to deploy MEC servers economically and efficiently. This paper investigates the deployment of MEC servers over a real-world road network by employing an improved genetic algorithm (GA) scheme. We first use a threshold-based K-means algorithm to form vehicle clusters according to their locations. We then select base stations (BSs) near the cluster-center coordinates as the set of candidate deployment locations for MEC servers. We further select BSs using a combined simulated annealing (SA) algorithm and GA to minimize the deployment cost. Simulation results show that the improved GA deploys MEC servers effectively and outperforms plain GA and SA in both convergence speed and solution quality.
Funding: supported in part by the National Key Research and Development Project (2020YFB1807204), the National Natural Science Foundation of China (U2001213 and 61971191), the Beijing Natural Science Foundation under Grant L201011, and the Jiangxi Key Laboratory of Artificial Intelligence Transportation Information Transmission and Processing (20202BCD42010).
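The abstract does not spell out the hybrid's operators, but as a rough sketch of how SA-style acceptance can be folded into a GA for BS subset selection, the Python toy below (with made-up per-BS costs and cluster-coverage sets, not the paper's data) minimizes deployment cost under a coverage penalty:

```python
import math
import random

# Hypothetical inputs: per-BS deployment costs and the vehicle clusters each
# BS can cover. The paper derives candidates from threshold-based K-means on
# vehicle positions; these numbers are purely illustrative.
COSTS = [4.0, 3.5, 5.0, 2.5, 4.5, 3.0]
COVERS = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}, {0, 2}]
N_CLUSTERS, PENALTY = 4, 100.0  # uncovered clusters are heavily penalized

def fitness(bits):
    """Total deployment cost plus a penalty per uncovered vehicle cluster."""
    covered = set().union(*(COVERS[i] for i, b in enumerate(bits) if b))
    cost = sum(c for c, b in zip(COSTS, bits) if b)
    return cost + PENALTY * (N_CLUSTERS - len(covered))

def ga_sa(pop_size=20, gens=200, t0=10.0, alpha=0.97):
    pop = [[random.randint(0, 1) for _ in COSTS] for _ in range(pop_size)]
    temp = t0
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                              # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)      # parents from the better half
            cut = random.randrange(1, len(COSTS))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(child))       # flip one gene
            mutant = child[:i] + [1 - child[i]] + child[i + 1:]
            # SA-style acceptance: worse mutants survive with prob e^(-delta/T)
            delta = fitness(mutant) - fitness(child)
            keep = delta < 0 or random.random() < math.exp(-delta / temp)
            nxt.append(mutant if keep else child)
        pop, temp = nxt, temp * alpha              # cool the temperature
    return min(pop, key=fitness)

best = ga_sa()
print(best, fitness(best))
```

The SA acceptance step lets occasionally worse offspring survive while the temperature is high, one plausible way to combine GA exploitation with SA exploration.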
In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load across PSs significantly slows model synchronization in heterogeneous networks because bandwidth is poorly utilized. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on each PS according to network state. We evaluate the proposed scheme on MXNet, a real-world distributed training platform; results show that our scheme achieves up to 2.68 times speed-up of model training in dynamic, heterogeneous network environments.
Funding: partially supported by the computing power networks and new communication primitives project under Grant No. HC-CN-2020120001, the National Natural Science Foundation of China under Grant No. 62102066, and Open Research Projects of Zhejiang Lab under Grant No. 2022QA0AB02.
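The abstract leaves the load-adjustment rule unspecified; as one plausible minimal sketch of bandwidth-aware load placement, the greedy Python below assigns parameter shards so that each PS's projected transfer time (shard size over measured bandwidth) stays balanced (all names and numbers are illustrative):

```python
def rebalance(shard_sizes_mb, ps_bandwidth_mbps):
    """Assign parameter shards to PSs, greedily placing the largest remaining
    shard on the PS that would finish transferring it soonest."""
    loads = [0.0] * len(ps_bandwidth_mbps)        # projected transfer time per PS
    assignment = [[] for _ in ps_bandwidth_mbps]  # shard ids per PS
    for sid, size in sorted(enumerate(shard_sizes_mb), key=lambda x: -x[1]):
        ps = min(range(len(loads)),
                 key=lambda p: loads[p] + size / ps_bandwidth_mbps[p])
        loads[ps] += size / ps_bandwidth_mbps[ps]
        assignment[ps].append(sid)
    return assignment

# Example: four shards, one fast PS and one slow PS (numbers made up).
# The fast PS absorbs more shard volume, so both finish at similar times.
print(rebalance([100, 80, 60, 40], [1000, 250]))
```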
In cloud data centers, workload consolidation is the phase during which tasks are allocated to the available hosts. It ensures that the fewest possible hosts are used without compromising the Service Level Agreement (SLA). To consolidate workloads, hosts are segregated into three categories based on their utilization: normal, under-loaded, and over-loaded. Identifying an extensively used or under-loaded host is challenging to accomplish; threshold values have been proposed in the literature to detect these scenarios. The current study aims to improve existing methods that choose under-loaded hosts, remove Virtual Machines (VMs) from them, and finally place those VMs on other hosts. We propose a Host Resource Utilization Aware (HRUAA) algorithm to detect under-loaded hosts and place their virtual machines on different hosts in a dynamic cloud environment. The mechanism presented in this study is contrasted empirically with existing mechanisms. The results establish that numerous hosts can be shut down while the users' workload requirements are still met. The proposed method is energy-efficient in workload consolidation, saves cost and time, and makes better use of the active hosts.
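The HRUAA criteria themselves are not given in the abstract; the sketch below (hypothetical thresholds and data) only illustrates the threshold-based classification and the draining of under-loaded hosts that the study builds on:

```python
# Hypothetical utilization thresholds; the paper's actual criteria differ.
UNDER, OVER = 0.2, 0.8

def classify(hosts):
    """hosts: {name: cpu utilization in [0, 1]} -> category per host."""
    return {h: ("under" if u < UNDER else "over" if u > OVER else "normal")
            for h, u in hosts.items()}

def drain_underloaded(hosts, vms):
    """Move every VM off under-loaded hosts onto the least-loaded normal
    host, then mark the emptied hosts as candidates for shutdown."""
    cats = classify(hosts)
    normal = [h for h, c in cats.items() if c == "normal"]
    migrations, shutdown = [], []
    for h, c in cats.items():
        if c != "under":
            continue
        for vm, load in vms.get(h, []):
            target = min(normal, key=lambda n: hosts[n])  # least-loaded target
            hosts[target] += load                          # account for the move
            migrations.append((vm, h, target))
        shutdown.append(h)                                 # host is now empty
    return migrations, shutdown

hosts = {"h1": 0.1, "h2": 0.5, "h3": 0.85}
vms = {"h1": [("vm1", 0.05), ("vm2", 0.05)]}
print(drain_underloaded(hosts, vms))  # h1 drained onto h2, h1 shut down
```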
Stored procedures and triggers are important programming objects in a database and can improve database performance in system design. A trigger is a special kind of stored procedure, mainly used to implement functionality more complex than constraints and to maintain data integrity. Using the SQL Server 2019 database management tools and taking a supermarket management system as an example, this article studies the application of stored procedures and triggers in that system and, drawing on practical cases, presents implementation methods for using stored procedures and triggers to solve specific business problems.
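As an illustration of the kind of object the article discusses, here is a minimal sketch (hypothetical Orders/Products schema and connection string, not the article's actual case) that creates an AFTER INSERT trigger on SQL Server from Python via pyodbc:

```python
import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

# Hypothetical supermarket schema: an AFTER INSERT trigger that keeps stock
# counts consistent with newly inserted order lines, the "more complex than a
# constraint, maintains integrity" pattern the article describes.
TRIGGER_SQL = """
CREATE TRIGGER trg_OrderInsert
ON Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Decrement stock for every order line just inserted.
    UPDATE p
    SET p.Stock = p.Stock - i.Quantity
    FROM Products AS p
    JOIN inserted AS i ON i.ProductId = p.ProductId;
END
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=SupermarketDB;Trusted_Connection=yes;"  # illustrative connection
)
conn.execute(TRIGGER_SQL)
conn.commit()
```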
Traditional email systems achieve only one-way communication: only the receiver is allowed to search for emails on the email server. In this paper, we propose a blockchain-based certificateless bidirectional authenticated searchable encryption model for a cloud email system, named certificateless authenticated bidirectional searchable encryption (CL-BSE), which combines the storage function of a cloud server with the communication function of an email server. In the new model, not only can the data receiver search for relevant content by generating its own trapdoor, but the data owner can also retrieve content in the same way. The model additionally provides dual authentication. First, during encryption the data owner uses the private key to authenticate their identity, ensuring that only the legitimate owner can generate the keyword ciphertext. Second, the blockchain verifies the data owner's identity from the received ciphertext, allowing only authorized members to store their data on the server and avoiding unnecessary storage consumption. We give a formal definition of CL-BSE and construct a concrete scheme from the new system model. The security of the scheme is then analyzed under the formalized security model; the results demonstrate that the scheme simultaneously achieves multi-keyword ciphertext indistinguishability and multi-keyword trapdoor privacy against any adversary. In addition, performance evaluation shows that the new scheme has higher computational and communication efficiency than some existing ones.
Funding: supported by the National Natural Science Foundation of China (Nos. 62172337 and 62241207) and the Key Project of the Gansu Natural Science Foundation (No. 23JRRA685).
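CL-BSE itself is a certificateless public-key construction; as a drastically simplified symmetric-key stand-in, the sketch below only illustrates the bidirectional workflow in which either party derives the same trapdoor and the server matches opaque tags without learning the keyword:

```python
import hashlib
import hmac
import os

# Shared key stands in for the paper's certificateless key material.
KEY = os.urandom(32)

def keyword_tag(keyword: str) -> bytes:
    """Deterministic keyword tag; doubles as the search trapdoor here."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).digest()

class EmailServer:
    def __init__(self):
        self.index = []  # list of (tag set, opaque email id)

    def store(self, tags, email_id):
        self.index.append((set(tags), email_id))

    def search(self, trapdoor):
        # The server compares opaque byte strings only.
        return [eid for tags, eid in self.index if trapdoor in tags]

server = EmailServer()
server.store({keyword_tag("invoice"), keyword_tag("urgent")}, "mail-001")

# Either the owner or the receiver can derive the same trapdoor and search,
# which is the bidirectional property the CL-BSE model targets.
print(server.search(keyword_tag("invoice")))  # -> ['mail-001']
```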
The rapid expansion of artificial intelligence (AI) applications has raised significant concerns about user privacy, prompting the development of privacy-preserving machine learning (ML) paradigms such as federated learning (FL). FL enables distributed training of ML models while keeping data on local devices, thus addressing users' privacy concerns. However, the heterogeneous nature of mobile client devices, partial engagement in training, and non-independent identically distributed (non-IID) data lead to performance degradation and optimization objective bias in FL training. With the development of 5G/6G networks and the integration of cloud and edge computing resources, globally distributed cloud computing resources can be effectively utilized to optimize the FL process. By choosing the parameter server through a selection mechanism, the approach reduces network latency overhead without increasing monetary cost, and it balances the objectives of communication optimization and low-engagement mitigation, which cannot be achieved simultaneously in the single-server frameworks of existing works. In this paper, we propose the FedAdaSS algorithm, an adaptive parameter server selection mechanism designed to optimize training efficiency by selecting the most appropriate server as the parameter server in each round of FL training. Our approach leverages the flexibility of cloud computing resources and allows organizers to strategically select servers for data broadcasting and aggregation, improving training performance while maintaining cost efficiency. FedAdaSS estimates the utility of client systems and servers and incorporates an adaptive random reshuffling strategy that selects the optimal server in each round of the training process. Theoretical analysis confirms the convergence of FedAdaSS under strong-convexity and L-smoothness assumptions, and comparative experiments in the FLSim framework demonstrate a 12%-20% reduction in rounds-to-accuracy compared with Federated Averaging (FedAvg) with random reshuffling under a single server. Furthermore, FedAdaSS effectively mitigates the performance loss caused by low client engagement, reducing the loss indicator by 50%.
Funding: supported in part by the National Natural Science Foundation of China under Grants U22B2005 and 62372462.
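The paper's utility estimator is not given in the abstract; the sketch below (made-up bandwidth figures and a bottleneck-bandwidth utility) only illustrates the per-round pattern of reshuffling clients and then picking the highest-utility server as that round's PS:

```python
import random

def server_utility(server, clients, bw):
    """Illustrative utility: the bottleneck bandwidth from a candidate server
    to this round's client cohort (the paper's actual estimator differs)."""
    return min(bw[(server, c)] for c in clients)

def run_round(servers, all_clients, bw, cohort_size, rng):
    cohort = rng.sample(all_clients, cohort_size)  # random reshuffling of clients
    ps = max(servers, key=lambda s: server_utility(s, cohort, bw))
    return ps, cohort                              # ps broadcasts and aggregates

rng = random.Random(0)
servers, clients = ["us-east", "eu-west"], ["c1", "c2", "c3", "c4"]
# Made-up per-link bandwidths in Mbps.
bw = {(s, c): rng.uniform(5, 100) for s in servers for c in clients}

for rnd in range(3):
    ps, cohort = run_round(servers, clients, bw, cohort_size=2, rng=rng)
    print(f"round {rnd}: server={ps} clients={cohort}")
```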
Software-Defined Networking (SDN), with segregated data and control planes, provides faster data routing, stability, and enhanced quality metrics, such as throughput (Th), maximum available bandwidth (Bd(max)), data transfer (DTransfer), and reduced end-to-end delay (D(E-E)). This paper explores the critical task of deploying SDN in large-scale Data Center Networks (DCNs) to enhance its Quality of Service (QoS) parameters, using logically distributed control configurations. Adopting SDN with a unified (single) control structure in large DCNs to handle Hypertext Transfer Protocol (HTTP) requests noticeably increases D(E-E) and degrades the network quality parameters (Bd(max), Th, DTransfer, D(E-E), etc.). This article examines network performance in terms of these quality metrics (bandwidth, throughput, data transfer, etc.) by establishing a large-scale SDN-based virtual network in the Mininet environment. The SDN network is simulated in three stages: (1) an SDN network with a single POX controller managing the network's data traffic flow without a server load management algorithm; (2) an SDN network with a single controller managing the data traffic flow with a server load management algorithm; (3) deployment of SDN in the proposed control arrangement (a logically distributed control framework) with multiple controllers managing data traffic flow under the proposed Intelligent Sensing Server Load Management (ISSLM) algorithm. As a result of this approach, the network quality parameters in large-scale networks are enhanced.
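ISSLM's internals are not described in the abstract; as a generic stand-in, the sketch below steers each new HTTP flow to the backend with the lowest currently sensed load, the kind of policy a controller application might implement:

```python
import heapq
import itertools
import random

class LoadBalancer:
    """Keep servers in a min-heap keyed by sensed load; route each new flow
    to the least-loaded server and update its projected load."""

    def __init__(self, servers):
        self.counter = itertools.count()  # tie-breaker for equal loads
        self.heap = [(0.0, next(self.counter), s) for s in servers]
        heapq.heapify(self.heap)

    def route(self, flow_cost):
        load, _, server = heapq.heappop(self.heap)       # least-loaded server
        heapq.heappush(self.heap, (load + flow_cost, next(self.counter), server))
        return server

lb = LoadBalancer(["srv-a", "srv-b", "srv-c"])
random.seed(1)
for i in range(6):                                        # six incoming HTTP flows
    print(f"flow {i} -> {lb.route(random.uniform(0.5, 2.0))}")
```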
Today, in the field of computer networks, new services have been developed on the Internet and on intranets, including mail servers, database management, audio and video services, and the Apache web server itself. The number of solutions for this server is therefore growing continuously; these services are becoming more and more complex and expensive, without fully meeting users' needs. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that determines the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics of a system, such as throughput, resource utilization, and response time, through measurement and through modeling by simulation. Finally, we present a simple queueing model of an Apache web server that reasonably represents the behavior of a saturated web server; it is built as a Simulink model in Matlab (Matrix Laboratory) and also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulation. Compared with other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations conducted in our tests.
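The paper's model is built in Simulink; as an analogous discrete-event sketch in Python (made-up arrival and service rates), the single-queue server below reproduces the same outputs, average response time and throughput:

```python
import random

def simulate(arrival_rate, service_rate, n_requests=100_000, seed=0):
    """Tiny M/M/1-style simulation: Poisson arrivals, exponential service,
    one FIFO server. Returns (avg response time, throughput)."""
    rng = random.Random(seed)
    clock, server_free_at = 0.0, 0.0
    total_response = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)           # next request arrives
        start = max(clock, server_free_at)               # wait if server is busy
        finish = start + rng.expovariate(service_rate)   # service completes
        server_free_at = finish
        total_response += finish - clock                 # queueing + service time
    return total_response / n_requests, n_requests / server_free_at

avg_rt, throughput = simulate(arrival_rate=90, service_rate=100)  # rho = 0.9
print(f"avg response time ~ {avg_rt:.3f}s, throughput ~ {throughput:.1f} req/s")
# Sanity check: the analytic M/M/1 mean response time 1/(mu - lambda) = 0.1 s.
```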