Federated learning for edge computing is a promising solution in the data-booming era: it leverages the computation ability of each edge device to train local models and shares only the model gradients with the central server. However, the frequently transmitted local gradients can also leak the participants' private data. To protect the privacy of local training data, many cryptography-based Privacy-Preserving Federated Learning (PPFL) schemes have been proposed. However, owing to the resource-constrained nature of mobile devices and the complexity of cryptographic operations, traditional PPFL schemes fail to provide efficient data confidentiality and lightweight integrity verification simultaneously. To tackle this problem, we propose a Verifiable Privacy-preserving Federated Learning scheme (VPFL) for edge computing systems that prevents local gradients from leaking during the transmission stage. First, we combine the Distributed Selective Stochastic Gradient Descent (DSSGD) method with the Paillier homomorphic cryptosystem to achieve distributed encryption and reduce the computation cost of the complex cryptosystem. Second, we present an online/offline signature method for lightweight gradient integrity verification, where the offline part can be securely outsourced to the edge server. Comprehensive security analysis demonstrates that the proposed VPFL achieves data confidentiality, authentication, and integrity. Finally, we evaluate both the communication overhead and the computation cost of VPFL; the experimental results show that VPFL has low computation cost and communication overhead while maintaining high training accuracy.
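To make the distributed-encryption step concrete, the sketch below shows additively homomorphic aggregation of selected gradients with the open-source phe Paillier library; the three-client setup, the gradient values, and the averaging step are illustrative assumptions, not the authors' implementation.

```python
# Sketch: additively homomorphic aggregation of selected gradients (Paillier).
# Uses the open-source `phe` library; values and client count are illustrative.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client uploads only its selected (e.g., largest-magnitude) gradients,
# encrypted under the shared public key -- the DSSGD-style selective upload.
client_gradients = [
    [0.12, -0.05, 0.33],   # client 1
    [-0.07, 0.21, 0.02],   # client 2
    [0.04, 0.11, -0.19],   # client 3
]
encrypted_uploads = [[public_key.encrypt(g) for g in grads]
                     for grads in client_gradients]

# The server sums ciphertexts coordinate-wise; Paillier addition of
# ciphertexts corresponds to addition of the underlying plaintexts.
aggregated = [sum(coord) for coord in zip(*encrypted_uploads)]

# Only the key holder recovers the aggregated (never the individual) values.
global_update = [private_key.decrypt(c) / len(client_gradients)
                 for c in aggregated]
print(global_update)  # ~ coordinate-wise mean of the three uploads
```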
Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. Because user service demand changes constantly, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of task scheduling is to distribute tasks to the available processors so as to construct the shortest possible schedule without violating precedence constraints. Task assignments and schedules substantially influence system operation in a heterogeneous multiprocessor system, and different heuristic-based task scheduling methods yield different makespans on a heterogeneous computing system. An intelligent scheduling algorithm should therefore efficiently determine the priority of every subtask, based on the resources it requires, so as to lower the makespan. This research introduces a novel, efficient task scheduling method for cloud computing systems based on the cooperation search algorithm, tackling the essential problem of scheduling tasks on a heterogeneous cloud. The basic idea of this method is to exploit the strengths of meta-heuristic algorithms to obtain the optimal solution. We assess the algorithm's performance by running it through three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) methods by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4% respectively in terms of makespan.
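The fitness that any such metaheuristic minimizes is the makespan of a candidate task-to-processor assignment. A minimal sketch, assuming independent tasks and per-VM speeds (both simplifications of the precedence-constrained setting described above):

```python
# Sketch: makespan evaluation for a candidate task-to-VM assignment -- the
# fitness a metaheuristic scheduler (cooperation search included) minimizes.
# Independent tasks and per-VM speeds are simplifying assumptions here.

def makespan(assignment, task_lengths, vm_speeds):
    """assignment[i] = index of the VM that runs task i."""
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)  # schedule length = busiest machine's finish time

tasks = [40, 25, 60, 10, 35]      # task lengths (e.g., MI)
speeds = [1.0, 2.0, 1.5]          # heterogeneous VM speeds (e.g., MIPS)
candidate = [0, 1, 2, 1, 0]       # one individual in the search population
print(makespan(candidate, tasks, speeds))  # -> 75.0
```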
In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies into distributed homogeneous networks called ubiquitous computing systems (UCS). A UCS must handle heterogeneity, management, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS, and energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to design effective intrusion detection systems (IDS) that secure UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To this end, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is conducted and the outcomes are assessed under different aspects. The comparison study emphasizes the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, an energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
With networks developing and virtualization rising, more and more indoor environments (points of interest, POIs) such as cafes, libraries, offices, and even buses and subways can provide plenty of bandwidth and computing resources. Meanwhile, many people who spend much of their day in these places still suffer from mobile devices with limited resources. This situation suggests a novel local cloud computing paradigm in which a mobile device can leverage nearby resources to facilitate task execution. In this paper, we implement a mobile local computing system based on an indoor virtual cloud. The system contains three key components: 1) on the application side, a parser that generates the "method call and cost tree" and analyzes it to identify resource-intensive methods; 2) on the mobile device, a self-learning execution controller that makes offloading decisions at runtime; 3) on the cloud side, a social-scheduling-based, application-isolated virtual cloud model. The evaluation results demonstrate that our system is effective and efficient when tested on a CPU-intensive calculation application, a memory-intensive image translation application, and an I/O-intensive image downloading application.
For cloud computing systems, a formal modeling and analysis method for system survivability is proposed by analyzing the survival situation of critical cloud services, combined with the memory function and incomplete matching of the biological immune system. First, on the basis of the SAIR (susceptible, active, infected, recovered) model, the SEIRS (susceptible, exposed, infected, recovered, susceptible) model, and the vulnerability diffusion model of distributed virtual systems, the evolution state of a virus is divided into six types; the diffusion rules of the virus within a service domain of the cloud computing system and the propagation rules between service domains are then analyzed. Finally, on the basis of Bio-PEPA (biological performance evaluation process algebra), the survivability evolution of critical cloud services is formally modeled, yielding the SLIRAS (susceptible, latent, infected, recovered, antidotal, susceptible) model. Based on stochastic simulation and ODE (ordinary differential equation) simulation of the Bio-PEPA model, the sensitivity parameters of the model are analyzed from three aspects: the inter-domain virus propagation speed, the recovery ability, and the memory ability. The results show that the proposed model fits the actual cloud computing system closely and reflects the survivability changes of the system well.
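For intuition about the ODE side of the analysis, here is a generic compartmental sketch of an S-L-I-R-A flow in the spirit of SLIRAS; the flow structure and rate constants are placeholders, since the paper derives its dynamics from the Bio-PEPA specification rather than hand-written ODEs.

```python
# Sketch: generic ODE view of an S-L-I-R-A compartmental model like SLIRAS.
# The flows and rate constants are hypothetical placeholders for intuition.
from scipy.integrate import solve_ivp

def sliras(t, y, beta, eps, gamma, alpha, delta):
    S, L, I, R, A = y
    dS = -beta * S * I + delta * R            # recovered nodes turn susceptible
    dL = beta * S * I - eps * L               # latent nodes become infected
    dI = eps * L - gamma * I - alpha * I      # recovery and antidotal repair
    dR = gamma * I - delta * R
    dA = alpha * I                            # antidotal (immunized) services
    return [dS, dL, dI, dR, dA]

y0 = [0.95, 0.0, 0.05, 0.0, 0.0]              # mostly susceptible services
sol = solve_ivp(sliras, (0, 100), y0, args=(0.5, 0.3, 0.1, 0.05, 0.02))
print(sol.y[:, -1])                            # compartment shares at t = 100
```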
Traditional collaborative filtering recommendation technology has shortcomings in big-data environments. To solve this problem, a personalized recommendation method based on cloud computing technology is proposed: the large data set and the recommendation computation are decomposed for parallel processing on multiple computers. A parallel recommendation engine based on the open-source Hadoop framework is established, and the effectiveness of the system is validated by learning recommendation on an English training platform. The experimental results show that the scalability of the recommender system can be greatly improved by using cloud computing technology to handle massive data in a cluster. Building on a comparison with traditional recommendation algorithms and the advantages of cloud computing, a personalized recommendation system based on cloud computing is thus proposed.
A subdynamics theory framework for describing multi-coupled quantum computing systems is presented first. A general kinetic equation for the reduced system is then given, enabling a sufficient condition to be formulated for constructing a pure coherent quantum computing system. This reveals that using multi-coupled systems to perform quantum computing in rigged Liouville space opens the door to controlling or eliminating the intrinsic decoherence of quantum computing systems.
Mobile computing originated from distributed computing, but its operations are not idempotent, so deadlock detection algorithms designed for mobile computing systems face challenges in both correctness and efficiency. This paper undertakes a fundamental study of deadlock detection for the AND model of mobile computing systems. First, the existing deadlock detection algorithms for distributed systems are classified into resource-node-dependent (RD) and resource-node-independent (RI) categories, and their respective weaknesses are discussed. A new RI algorithm for the AND model of mobile computing systems is then presented. The novelties of our algorithm are that: 1) blocked nodes inform their predecessors and successors simultaneously; 2) the detection messages (agents) carry the predecessor information of their originator; 3) no agent is stored midway. Additionally, a quit-inform scheme is introduced to treat the excessive victim-quitting problem raised by overlapped cycles. With these methods, the proposed algorithm can detect a cycle of size n within n-2 steps and with (n^2-n-2)/2 agents. The performance of our algorithm is compared with the most competitive RD and RI algorithms for distributed systems on a mobile agent simulation platform. The experimental results show that our algorithm outperforms both under the vast majority of resource configurations and concurrent workloads. The correctness of the proposed algorithm is formally proven by the invariant verification technique.
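For contrast with the distributed agent-based scheme, the underlying problem is cycle detection in the AND-model wait-for graph; a plain centralized DFS sketch (not the paper's algorithm) makes the target concrete:

```python
# Sketch: the baseline problem -- finding a cycle in the AND-model wait-for
# graph. This is a centralized DFS for intuition only; the paper's
# contribution is a distributed agent-based detection that needs n-2 steps
# and (n^2 - n - 2)/2 agents for a cycle of size n.

def find_wait_cycle(wait_for):
    """wait_for: dict node -> set of nodes it is blocked on (AND model)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in wait_for}
    stack = []

    def dfs(v):
        color[v] = GRAY
        stack.append(v)
        for w in wait_for.get(v, ()):
            if color[w] == GRAY:                  # back edge => deadlock cycle
                return stack[stack.index(w):]
            if color[w] == WHITE and (cyc := dfs(w)):
                return cyc
        color[v] = BLACK
        stack.pop()
        return None

    for v in list(wait_for):
        if color[v] == WHITE and (cyc := dfs(v)):
            return cyc
    return None

print(find_wait_cycle({'p1': {'p2'}, 'p2': {'p3'}, 'p3': {'p1'}, 'p4': set()}))
# -> ['p1', 'p2', 'p3']
```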
Mobile edge computing (MEC) is a technique that reduces mobiles' computational burden through task offloading, and it has emerged as a promising paradigm for providing computing capabilities in close proximity to mobile users. In this paper, we study the scenario where multiple mobiles upload tasks to an MEC server in a single cell, so that allocating the limited server resources and wireless channels among the mobiles becomes a challenge. We formulate the optimization problem for the energy saved on mobiles with divisible tasks, and use a greedy choice to solve it. A Select Maximum Saved Energy First (SMSEF) algorithm is proposed to realize the solving process. We examined the saved energy for different numbers of nodes and channels, and the results show that the proposed scheme can effectively help mobiles save energy in an MEC system.
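The greedy choice itself is simple: repeatedly grant a channel to the mobile whose offloading saves the most energy. A sketch with made-up saved-energy values (the paper computes them from the task and channel model):

```python
# Sketch: the greedy choice behind a Select-Maximum-Saved-Energy-First rule.
# Saved-energy values are illustrative; the scheme derives them from each
# mobile's task sizes and channel conditions.
import heapq

def smsef(saved_energy, num_channels):
    """saved_energy: mobile -> energy saved if granted an offloading channel."""
    heap = [(-e, m) for m, e in saved_energy.items()]   # max-heap via negation
    heapq.heapify(heap)
    granted = []
    while heap and len(granted) < num_channels:
        _, m = heapq.heappop(heap)
        granted.append(m)       # serve the mobile that saves the most energy
    return granted

mobiles = {'u1': 3.2, 'u2': 7.5, 'u3': 1.1, 'u4': 5.0}
print(smsef(mobiles, 2))        # -> ['u2', 'u4']
```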
Fog computing is an emerging paradigm of cloud computing that aims to meet the growing computation demand of mobile applications. It helps mobile devices overcome resource constraints by offloading computationally intensive tasks to cloud servers. The challenge for the cloud is to minimize the time of data transfer and task execution for a user whose location changes owing to mobility, as well as the energy consumption of the mobile device. Providing satisfactory computation performance is particularly challenging in the fog computing environment. In this paper, we propose a novel fog computing model and an offloading policy that can effectively bring fog computing power closer to the mobile user. The fog computing model consists of remote cloud nodes and local cloud nodes attached to the wireless access infrastructure, and the task offloading policy takes execution time, energy consumption, and other expenses into account. We finally evaluate the performance of our method through experimental simulations. The results show that the method significantly reduces both the task execution time and the energy consumption of mobile devices.
Cloud computing is becoming an important solution for providing scalable computing resources via the Internet. Because there are tens of thousands of nodes in a data center, the probability of server failures is nontrivial, so guaranteeing service reliability is a critical challenge. Fault-tolerance strategies, such as checkpointing, are commonly employed. However, when an edge switch fails, the checkpoint image may become inaccessible, so current checkpoint-based fault-tolerance methods cannot achieve the best effect. In this paper, we propose an optimal checkpoint method that is edge-switch-failure-aware. The method includes two algorithms: the first exploits the data center topology and communication characteristics to select the checkpoint image storage server; the second exploits the checkpoint image storage characteristics as well as the data center topology to select the recovery server. Simulation experiments demonstrate the effectiveness of the proposed method.
In this manuscript, a cooperative non-orthogonal multiple access based intelligent mobile edge computing (NOMA-MEC) communication system is constructed in detail. The nearby user acts as a decode-and-forward relay that assists a distant user in offloading tasks to the intelligent MEC server. Closed-form expressions for the offloading outage probability of the pair of users are derived to evaluate the performance of the cooperative NOMA-MEC system, and approximate expressions are provided for the high signal-to-noise ratio region. Based on the asymptotic analyses, the diversity orders of the distant user and the nearby user are n+m+1 and n+1, respectively. The system throughput and energy efficiency of cooperative NOMA-MEC are analyzed in the delay-limited transmission mode. Numerical results show that: 1) cooperative NOMA-MEC outperforms orthogonal multiple access (OMA) in terms of offloading performance; 2) the offloading performance of the cooperative NOMA-MEC system improves as the number of transmission tasks decreases; and 3) cooperative NOMA-MEC performs better than OMA in energy efficiency.
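The diversity orders can be read off as the high-SNR slope of the outage curves; a generic statement of that reading (not the paper's closed-form expressions) is:

```latex
% Standard high-SNR reading of diversity order (generic form; not the
% paper's closed-form outage expressions). If the offloading outage
% probability behaves as P_out(rho) ~ C * rho^{-d} at high SNR rho, then
\[
  d \;=\; -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}(\rho)}{\log \rho} .
\]
% The reported orders n+m+1 (distant user) and n+1 (nearby user) thus say
% the cooperative relay link steepens the distant user's outage slope by m.
```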
Blockchain and multi-access edge computing (MEC) are two emerging, promising technologies that have received extensive attention from academia and industry. As a brand-new information storage, dissemination, and management mechanism, blockchain technology achieves reliable transmission of data and value; as a new computing paradigm, multi-access edge computing enables high-frequency interaction and real-time transmission of data. The integration of communication and computing in blockchain-enabled multi-access edge computing networks has so far been studied without a systematic view. This survey focuses on that integration: it explores the mutual empowerment and mutual promotion between blockchain and MEC, and introduces a resource integration architecture for blockchain and multi-access edge computing. The paper then summarizes the applications of the resource integration architecture, resource management, data sharing, incentive mechanisms, and consensus mechanisms, and analyzes corresponding applications in real-world scenarios. Finally, future challenges and potentially promising research directions are discussed in detail.
To cope with the low-latency requirements and security issues of emerging applications such as the Internet of Vehicles (IoV) and the Industrial Internet of Things (IIoT), blockchain-enabled Mobile Edge Computing (MEC) systems have received extensive attention. However, blockchain is a computing- and communication-intensive technology due to its complex consensus mechanisms. To facilitate the implementation of blockchain in MEC systems, this paper adopts the committee-based Practical Byzantine Fault Tolerance (PBFT) consensus algorithm and focuses on the committee selection problem. Vehicles and IIoT devices generate transactions, which are records of application tasks. Base Stations (BSs) with MEC servers, which serve the transactions according to the wireless channel quality and the available computing resources, act as blockchain nodes and candidates for committee membership. The performance index comprises the income from transaction service fees, the penalty for service delay, the decentralization of the blockchain, and the communication complexity of the consensus process. The committee selection problem is modeled as a Markov decision process, and the Proximal Policy Optimization (PPO) algorithm is adopted in the solution. Simulation results show that the proposed PPO-based committee selection algorithm can adapt to system design requirements with different emphases and outperforms the comparison methods.
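One way such a performance index becomes an MDP reward is as a weighted sum of the four terms; the sketch below is a hypothetical reward shape, with illustrative weights, an entropy proxy for decentralization, and the quadratic PBFT message count:

```python
# Sketch: a per-step reward an MDP formulation like this one might use.
# The four terms mirror the stated performance index (fees, delay penalty,
# decentralization, consensus communication cost); the weights and the
# entropy-based decentralization measure are illustrative assumptions.
import math

def committee_reward(fees, delay, selection_counts, w=(1.0, 0.5, 2.0, 0.01)):
    """selection_counts: per-BS counts of past committee selections."""
    w_fee, w_delay, w_dec, w_comm = w
    total = sum(selection_counts)
    # entropy of selection frequencies as a decentralization proxy
    probs = [c / total for c in selection_counts if c > 0]
    decentralization = -sum(p * math.log(p) for p in probs)
    comm_cost = len(selection_counts) ** 2   # PBFT messages grow as O(k^2)
    return (w_fee * fees - w_delay * delay
            + w_dec * decentralization - w_comm * comm_cost)

print(committee_reward(fees=12.0, delay=3.5, selection_counts=[5, 5, 4, 6]))
```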
Mobile edge computing has emerged as a new paradigm that enhances computing capabilities by offloading complicated tasks to a nearby cloud server. To conserve energy and maintain quality of service, a low-time-complexity algorithm is proposed to complete task offloading and server allocation. In this paper, a multi-user, multi-task, single-server scenario is considered for small networks, taking full account of factors including data size, bandwidth, and channel state information. We further consider a multi-server scenario for larger networks, where the influence of task priority is taken into consideration. To jointly minimize delay and energy cost, we propose a distributed unsupervised-learning-based offloading framework for task offloading and server allocation. We exploit a memory pool that stores input data and the corresponding decisions as key-value pairs, from which the model learns to solve the optimization problems. To further reduce time cost and achieve near-optimal performance, we use convolutional neural networks, built on fully connected networks, to process the mass of data. Numerical results show that the proposed algorithm outperforms other offloading schemes and can generate near-optimal offloading decisions in a timely manner.
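The memory pool amounts to a bounded key-value store that keeps the best decision seen for each problem instance and replays the pairs as training data; a minimal sketch, with the hashing and capacity policy as assumptions:

```python
# Sketch: key-value memory pool -- cache each problem instance with the best
# offloading decision found so far, then replay the pairs as training data.
# The rounding-based key, capacity, and eviction policy are assumptions.
from collections import OrderedDict

class MemoryPool:
    def __init__(self, capacity=10000):
        self.pool = OrderedDict()          # insertion order = age
        self.capacity = capacity

    def put(self, state, decision, cost):
        key = tuple(round(x, 4) for x in state)
        old = self.pool.get(key)
        if old is None or cost < old[1]:   # keep the lower-cost decision
            self.pool[key] = (decision, cost)
        if len(self.pool) > self.capacity:
            self.pool.popitem(last=False)  # evict the oldest entry

    def batch(self, size):
        items = list(self.pool.items())[-size:]
        return [(list(k), v[0]) for k, v in items]

pool = MemoryPool()
pool.put(state=[0.8, 0.3, 0.5], decision=[1, 0, 1], cost=2.7)
print(pool.batch(1))
```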
Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, it avoids heavyweight formal methods and instead adopts the process algebra Communicating Sequential Processes (CSP).
Distributed cryptographic computing systems play an important role because cryptographic computing is extremely computation-intensive. However, no general-purpose cryptographic computing system is available. Grid technology can give efficient computational support to cryptographic applications. Therefore, a general-purpose grid-based distributed computing system called DCCS is put forward in this paper. The architecture of DCCS is briefly described first, the task-division policy adopted in DCCS is presented, and the method for managing subtasks is then discussed in detail. Furthermore, the building and execution process of a computing job is described. Finally, the details of the DCCS implementation under Globus Toolkit 4 are illustrated.
To further reduce delay in cellular edge computing systems, a new type of resource scheduling algorithm is proposed. Without assuming knowledge of the statistics of user task arrival traffic, analytical formulae for the communication and computing queueing delays in many-to-one, multi-server cellular edge computing systems are derived using the arriving curve and the leaving curve. Based on these formulae, a delay-minimization problem is formulated directly, and a novel scheduling algorithm is designed. The delay performance of the proposed algorithm is evaluated via simulation experiments. Under the considered simulation parameters, the proposed algorithm achieves 12% less total delay than the traditional algorithms. System parameters, including the weight, the amount of computing resources provided by the servers, and the average user task arrival rate, affect the percentage of delay reduction. Therefore, compared with traditional scheduling algorithms based on queue-length optimization, the proposed delay-optimization-based scheduling algorithm can further reduce delay.
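The arriving-curve/leaving-curve derivation follows the usual network-calculus picture, where delay is the horizontal gap between cumulative arrival and departure curves; in generic notation (assumed here, not copied from the paper):

```latex
% Generic network-calculus view of the arriving/leaving-curve argument;
% the notation is assumed here, not copied from the paper. With cumulative
% arrival curve A(t) and leaving curve D(t), the queueing delay at time t
% is the horizontal distance between the two curves:
\[
  W(t) \;=\; \inf\{\, \tau \ge 0 : A(t) \le D(t + \tau) \,\},
\]
% and the scheduler minimizes a weighted combination of the communication
% and computing delays obtained in this way.
```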
To further improve delay performance in multi-cell cellular edge computing systems, a new delay-driven joint communication and computing resource backpressure (BP) scheduling algorithm is proposed. First, mathematical models of the communication delay and the computing delay in multi-cell cellular edge computing systems are established and expressed as virtual delay queues. Then, based on the virtual delay models, a novel joint wireless subcarrier and virtual machine resource scheduling algorithm is proposed to stabilize the virtual delay queues within the framework of the BP scheduling principle. Finally, the delay performance of the proposed virtual-queue-based BP scheduling algorithm is evaluated via simulation experiments and compared with the traditional queue-length-based BP scheduling algorithm. Results show that, under the considered simulation parameters, the total delay of the proposed BP scheduling algorithm is always lower than that of the traditional queue-length-based BP algorithm; the reduction in total delay can be as high as 51.29% when the computing resources are heterogeneously configured. Therefore, compared with traditional queue-length-based BP scheduling algorithms, the proposed virtual-delay-queue-based BP scheduling algorithm can further reduce delay in multi-cell cellular edge computing systems.
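The BP principle being adapted is the classic backpressure rule: give each resource to the queue with the largest backlog-weighted service rate, here with virtual delay values standing in for queue lengths. A sketch with illustrative numbers:

```python
# Sketch: the classic backpressure rule the paper adapts to virtual delay
# queues -- each resource goes to the queue with the largest backlog-weighted
# service rate. Queue and rate numbers are illustrative, and the paper's
# algorithm weights by virtual *delay* rather than queue length.

def bp_assign(queues, rates):
    """rates[q][r] = service rate queue q gets from resource r."""
    assignment = {}
    for r in range(len(rates[0])):
        # pick the queue with maximal weight Q_q * rate_{q,r}
        best = max(range(len(queues)), key=lambda q: queues[q] * rates[q][r])
        if queues[best] * rates[best][r] > 0:
            assignment[r] = best
    return assignment

virtual_delay_queues = [4.0, 9.5, 2.0]                 # per-user virtual delays
service_rates = [[1.0, 0.4], [0.6, 0.9], [0.8, 0.2]]   # per (user, subcarrier)
print(bp_assign(virtual_delay_queues, service_rates))  # -> {0: 1, 1: 1}
```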
A new file assignment strategy for parallel I/O, named the heuristic file sorted assignment algorithm, is proposed for cluster computing systems. Subject to load balancing, it assigns files with similar service times to the same disk. First, the files are sorted into a set I in descending order of service time; then a disk of a cluster node is selected at random, and consecutive files are taken in order from set I and placed on that disk until the disk reaches its load maximum. The experimental results show that the new strategy improves performance by 20.2% when the system load is light and by 31.6% when it is heavy. Moreover, the higher the data access rate, the more evident the performance improvement obtained by the heuristic file sorted assignment algorithm.
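The strategy as described maps directly to code; the sketch below follows the sort-then-fill procedure, with the per-disk load maximum as an assumed parameter:

```python
# Sketch of the described strategy: sort files by service time (descending),
# pick a disk at random, then fill it with consecutive files from the sorted
# set until its load cap is reached. The load cap value is an assumption.
import random

def heuristic_file_sorted_assignment(service_times, num_disks, load_max):
    files = sorted(range(len(service_times)),
                   key=lambda f: service_times[f], reverse=True)  # set I
    disks = {d: [] for d in range(num_disks)}
    load = {d: 0.0 for d in range(num_disks)}
    open_disks = list(disks)
    i = 0
    while i < len(files) and open_disks:
        d = random.choice(open_disks)             # random disk selection
        while i < len(files) and load[d] + service_times[files[i]] <= load_max:
            disks[d].append(files[i])             # consecutive similar files
            load[d] += service_times[files[i]]
            i += 1
        open_disks.remove(d)                      # disk reached its maximum
    return disks

print(heuristic_file_sorted_assignment([9, 7, 6, 4, 3, 2], num_disks=3,
                                       load_max=16))
```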