Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on the devices-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
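The devices-cooperation-aided UCB algorithm is not spelled out in the abstract, but the core UCB1 rule it builds on can be sketched briefly. Below is a minimal, hypothetical loop that picks an offloading target (local, terrestrial MEC, or satellite MEC) to minimize observed task completion delay; the target names and delay model are illustrative assumptions, not the paper's actual design.

```python
import math
import random

# Hypothetical offloading targets; a real STN would observe actual delays.
ARMS = ["local", "terrestrial_mec", "satellite_mec"]
TRUE_MEAN_DELAY = {"local": 0.9, "terrestrial_mec": 0.4, "satellite_mec": 0.6}

def observe_delay(arm: str) -> float:
    """Simulated task completion delay (seconds) with noise."""
    return max(0.0, random.gauss(TRUE_MEAN_DELAY[arm], 0.1))

counts = {a: 0 for a in ARMS}
mean_reward = {a: 0.0 for a in ARMS}  # reward = -delay, so UCB minimizes delay

for t in range(1, 1001):
    if any(counts[a] == 0 for a in ARMS):          # play each arm once first
        arm = next(a for a in ARMS if counts[a] == 0)
    else:                                           # UCB1 index: mean + exploration bonus
        arm = max(ARMS, key=lambda a: mean_reward[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    r = -observe_delay(arm)
    counts[arm] += 1
    mean_reward[arm] += (r - mean_reward[arm]) / counts[arm]  # incremental mean

print({a: (counts[a], round(-mean_reward[a], 3)) for a in ARMS})
```

Because the reward is the negative delay, the arm with the highest UCB index is the target currently believed fastest, plus an exploration bonus that shrinks as a target is tried more often.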
With the advancement of the Industrial Internet of Things (IoT), the rapidly growing demand for data collection and processing poses a huge challenge to the design of data transmission and computation resources in the industrial scenario. Taking advantage of the improved model accuracy offered by machine learning algorithms, we investigate the inner relationship between system performance and data transmission and computation resources, and then analyze the impacts of bandwidth allocation and computation resources on the accuracy of the system model in this paper. A joint bandwidth allocation and computation resource configuration scheme is proposed, and the Karush-Kuhn-Tucker (KKT) conditions are used to obtain an optimal bandwidth allocation and computation configuration decision, which can minimize the total computation resource requirement while ensuring the system accuracy meets the industrial requirements. Simulation results show that the proposed bandwidth allocation and computation resource configuration scheme can reduce computing resource usage by 10% compared to the average allocation strategy.
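As a rough illustration of KKT-based joint configuration, the sketch below minimizes total computation resource subject to a per-device accuracy constraint and a shared bandwidth budget. The accuracy model a_i = 1 - exp(-k_i b_i f_i) and all numbers are assumptions standing in for the paper's system model; SLSQP enforces the KKT conditions numerically.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: device i achieves accuracy 1 - exp(-k_i * b_i * f_i), where
# b_i is bandwidth and f_i computation. This accuracy function is an assumption.
k = np.array([0.8, 1.0, 1.2])
B_TOTAL, A_MIN = 10.0, 0.9
n = len(k)

def total_computation(x):
    return x[n:].sum()                 # objective: total computation resource

def accuracy_gap(x):
    b, f = x[:n], x[n:]
    return 1.0 - np.exp(-k * b * f) - A_MIN   # each device meets accuracy target

cons = [
    {"type": "ineq", "fun": accuracy_gap},
    {"type": "ineq", "fun": lambda x: B_TOTAL - x[:n].sum()},  # bandwidth budget
]
x0 = np.full(2 * n, 1.0)
res = minimize(total_computation, x0, method="SLSQP",
               bounds=[(1e-3, None)] * (2 * n), constraints=cons)
b_opt, f_opt = res.x[:n], res.x[n:]
print("bandwidth:", b_opt.round(3), "computation:", f_opt.round(3))
```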
To efficiently utilize the limited computational resources in real-time sensor networks, this paper focuses on the challenge of computational resource allocation in sensor networks and provides a solution with the methods of economics. It designs a microeconomic system in which the applications distribute their computational resource consumption across sensor networks by virtue of mobile agents. Further, it proposes the market-based computational resource allocation policy named MCRA, which satisfies the uniform consumption of computational energy in the network and the optimal division of a single computational capacity among multiple tasks. The simulation in a target-tracking scenario demonstrates that MCRA realizes an efficient allocation of computational resources according to the priority of tasks, achieves superior allocation and equilibrium performance compared to traditional allocation policies, and ultimately prolongs the system lifetime.
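The market-based flavor of MCRA can be conveyed with a minimal proportional-share sketch: tasks bid with their priorities and a node's CPU capacity is divided in proportion to the bids. The task names, priorities, and capacity below are hypothetical, and MCRA itself additionally uses mobile agents and energy balancing.

```python
# Minimal proportional-share market sketch: capacity is split in proportion to
# the tasks' bids (here, priorities). All values are illustrative assumptions.
def proportional_share(capacity: float, bids: dict) -> dict:
    total = sum(bids.values())
    return {task: capacity * bid / total for task, bid in bids.items()}

bids = {"track_target_A": 5.0, "track_target_B": 3.0, "housekeeping": 1.0}
alloc = proportional_share(capacity=100.0, bids=bids)   # MIPS on one sensor node
for task, share in alloc.items():
    print(f"{task}: {share:.1f} MIPS")
```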
In MEC-enabled vehicular networks with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X links clustering and MEC computation resource scheduling. In the V2X links clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of SINR into the calculation of the matrix maximum row sum, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
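The SR-IC step exploits a standard matrix fact: the spectral radius of a matrix never exceeds its maximum absolute row sum, so constraining the row sums of a nonnegative interference matrix constrains the accumulated interference without any eigenvalue computation. A small numpy check of this bound, with a random illustrative interference matrix:

```python
import numpy as np

# For an interference matrix A, rho(A) <= max_i sum_j |A[i, j]|, so the max row
# sum is a cheap O(n^2) surrogate for an O(n^3) eigenvalue computation.
# The matrix below is random illustrative data, not the paper's channel model.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 0.2, size=(6, 6))          # pairwise interference gains
np.fill_diagonal(A, 0.0)

max_row_sum = A.sum(axis=1).max()               # cheap upper bound
rho = max(abs(np.linalg.eigvals(A)))            # exact spectral radius
print(f"max row sum bound = {max_row_sum:.4f} >= spectral radius = {rho:.4f}")
```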
Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of computing resources in the monitoring model are analyzed. Then, a time-series autoregressive prediction model is devised, and an autoregressive support vector regression (ARSVR) monitoring method is put forward to predict the node load of the data grid. Finally, a model for historical observation sequences is set up using the autoregressive (AR) model and the model order is determined. The support vector regression (SVR) model is trained using historical data and the regression function is obtained. Simulation results show that the ARSVR method can effectively predict the node load.
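A minimal sketch of the ARSVR idea, assuming a synthetic load series and an AR order of p = 5: lagged load values form the autoregressive feature vectors, and sklearn's SVR learns the regression function from them.

```python
import numpy as np
from sklearn.svm import SVR

# AR + SVR sketch: build lagged feature vectors of order p from the load history
# (the AR part), then let SVR learn the regression function. The synthetic load
# series, p = 5, and the SVR hyperparameters are illustrative assumptions.
rng = np.random.default_rng(1)
t = np.arange(300)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, t.size)

p = 5  # AR order, e.g. chosen by an order-selection criterion such as AIC
X = np.array([load[i - p:i] for i in range(p, len(load))])
y = load[p:]

split = 250
model = SVR(kernel="rbf", C=10.0, gamma=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out load: {rmse:.2f}")
```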
Cloud computing has gained significant recognition due to its ability to provide a broad range of online services and applications. Nevertheless, existing commercial cloud computing models concentrate computational assets, such as storage and server infrastructure, in a limited number of large-scale worldwide data facilities. Optimizing the deployment of virtual machines (VMs) is crucial in this scenario to ensure system dependability, performance, and minimal latency. A significant barrier in the present scenario is load distribution, particularly when striving for improved energy consumption in a hypothetical grid computing framework. This design employs load-balancing techniques to allocate different user workloads across several virtual machines. To address this challenge, we propose the twin-fold moth flame technique, which serves as a very effective optimization technique. The twin-fold moth flame method is intentionally designed to consider various restrictions, including energy efficiency, lifespan analysis, and resource expenditures, and it provides a thorough approach to evaluating total costs in the cloud computing environment. When assessing the efficacy of our suggested strategy, the study analyzes significant metrics such as energy efficiency, lifespan analysis, and resource expenditures. This investigation aims to enhance cloud computing techniques by developing a new optimization algorithm that considers multiple factors for effective virtual machine placement and load balancing. The proposed work demonstrates notable improvements of 12.15%, 10.68%, 8.70%, 13.29%, 18.46%, and 33.39% for 40-node data sets over the artificial bee colony-bat algorithm, ant colony optimization, crow search algorithm, krill herd, whale optimization genetic algorithm, and improved Lévy-based whale optimization algorithm, respectively.
Maintaining population diversity is an important task in multimodal multi-objective optimization. Although the zoning search (ZS) can improve the diversity in the decision space, assigning the same computational costs to each search subspace may be wasteful when computational resources are limited, especially on imbalanced problems. To alleviate the above-mentioned issue, a zoning search with adaptive resource allocating (ZS-ARA) method is proposed in the current study. In the proposed ZS-ARA, the entire search space is divided into many subspaces to preserve the diversity in the decision space and to reduce the problem complexity. Moreover, the computational resources can be automatically allocated among all the subspaces. The ZS-ARA is compared with seven algorithms on two different types of multimodal multi-objective problems (MMOPs), namely, balanced and imbalanced MMOPs. The results indicate that, similarly to the ZS, the ZS-ARA achieves high performance on the balanced MMOPs. Also, it can greatly assist a "regular" algorithm in improving its performance on the imbalanced MMOPs, and is capable of allocating the limited computational resources dynamically.
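The adaptive-allocation idea can be sketched in a few lines: split the decision space into zones and redistribute a fixed per-generation evaluation budget toward zones whose best fitness recently improved. The test function, zone count, and improvement-credit rule below are illustrative assumptions, not the actual ZS-ARA operators.

```python
import random

# Zoning search with adaptive resource allocation, minimal sketch: the decision
# space [0, 1] is cut into zones, and each generation's evaluation budget is
# redistributed in proportion to each zone's recent best-fitness improvement.
def fitness(x: float) -> float:
    return -(x - 0.37) ** 2  # maximize; optimum at x = 0.37 (toy problem)

ZONES = [(i / 4, (i + 1) / 4) for i in range(4)]
best = [-float("inf")] * 4
improve = [1.0] * 4                      # start with uniform credit
BUDGET = 40                              # evaluations per generation

for gen in range(20):
    total = sum(improve)
    quota = [max(1, round(BUDGET * w / total)) for w in improve]
    for z, (lo, hi) in enumerate(ZONES):
        old = best[z]
        for _ in range(quota[z]):        # spend this zone's share of the budget
            best[z] = max(best[z], fitness(random.uniform(lo, hi)))
        improve[z] = max(best[z] - old, 1e-6) if old > -float("inf") else 1.0

print("zone bests:", [round(b, 5) for b in best])
```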
The centralized radio access cellular network infrastructure based on the centralized Super Base Station (CSBS) is a promising solution to reduce the high construction cost and energy consumption of conventional cellular networks. With CSBS, the computing resources for communication protocol processing can be managed flexibly according to the protocol load to improve resource efficiency. Since the protocol load changes frequently and may exceed the capacity of processors, load balancing is needed. However, existing load balancing mechanisms used in data centers cannot satisfy the real-time requirement of communication protocol processing. Therefore, a new computing resource adjustment scheme is proposed for communication protocol processing in the CSBS architecture. First, the main principles of protocol processing resource adjustment are summarized, followed by an analysis of the processing resource outage probability, i.e., the probability that the computing resource becomes inadequate for protocol processing as the load changes. Following the adjustment principles, the proposed scheme is designed to reduce the processing resource outage probability based on an optimized connected graph constructed by the approximate Kruskal algorithm. Simulation results show that, compared with conventional load balancing mechanisms, the proposed scheme can greatly reduce the number of occurrences of inadequate processing resources and the additional resource consumption of adjustment.
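The optimized connected graph is built with an approximate Kruskal variant; the textbook Kruskal core it approximates is sketched below with a union-find structure. The edge weights (hypothetically, adjustment costs between processors) are illustrative.

```python
# Standard Kruskal minimum spanning tree with union-find; the paper's scheme
# uses an approximate Kruskal variant on top of this textbook core.
def kruskal(n_nodes, edges):
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):          # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep edge only if it joins components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Hypothetical adjustment costs between 4 processors.
edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # -> [(0, 2, 1), (1, 3, 2), (1, 2, 3)]
```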
Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network, which can make full use of idle computing resources. Realizing the overall resource allocation of a dispersed computing system is a significant challenge. In detail, by jointly managing the task requests of external users and the resource allocation of the internal system to achieve dynamic balance, the efficient and stable operation of the system can be guaranteed. In this paper, we first propose a task-resource joint management model, which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing. Secondly, to avoid downtime caused by an overload of resources, we introduce intelligent control into the task-resource joint management model. The existence and stability of the positive periodic solution of the model can be obtained by theoretical analysis, which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy. Additionally, to improve system utilization, the task-resource joint management model with bi-directional intelligent control is further explored. Setting control thresholds for the two resources not only restrains system resource overload, but also applies positive incentive control when a large number of idle resources appear. The existence and stability of the positive periodic solution of the model are proved theoretically; that is, the model effectively avoids the two extreme cases and ensures the efficient and stable operation of the system. Finally, numerical simulation verifies the correctness and validity of the theoretical results.
Many Task Computing (MTC) is a new class of computing paradigm in which the aggregate number of tasks, quantity of computing, and volumes of data may be extremely large. With the advent of Cloud computing and the big data era, scheduling and executing large-scale computing tasks efficiently and allocating resources to tasks reasonably are becoming quite challenging problems. To improve both task execution and resource utilization efficiency, we present a task scheduling algorithm with resource attribute selection, which can select the optimal node to execute a task according to its resource requirements and the fitness between the resource node and the task. Experimental results show a significant improvement in execution throughput and resource utilization compared with three other algorithms and four scheduling frameworks. In the scheduling algorithm comparison, the throughput is 77% higher than the Min-Min algorithm and the resource utilization reaches 91%. In the scheduling framework comparison, the throughput (with work-stealing) is at least 30% higher than the other frameworks and the resource utilization reaches 94%. The scheduling algorithm thus provides a good model for practical MTC applications.
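A toy sketch of resource-attribute selection follows: score each node by how well its free resources fit a task's requirements, then dispatch to the best-fitting node. The tightest-fit fitness measure and the node/task values are assumptions for illustration, not the paper's actual formula.

```python
# Sketch of resource-attribute selection: reject nodes that cannot host the
# task, then prefer the tightest feasible fit to limit fragmentation.
nodes = {
    "n1": {"cpu": 8.0, "mem": 16.0},
    "n2": {"cpu": 4.0, "mem": 32.0},
    "n3": {"cpu": 16.0, "mem": 8.0},
}

def fitness(free: dict, need: dict) -> float:
    if any(free[r] < need[r] for r in need):   # infeasible node
        return float("-inf")
    # Smaller normalized leftover -> higher fitness (tighter fit).
    return -sum((free[r] - need[r]) / free[r] for r in need)

task = {"cpu": 3.5, "mem": 12.0}
best = max(nodes, key=lambda n: fitness(nodes[n], task))
print("dispatch to:", best)   # n3 lacks memory; n2 fits more tightly than n1
```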
In order to lower the power consumption and improve the resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem to the virtual machine deployment issue with virtualization technology. Secondly, the future workloads of the system need to be predicted: a cubic exponential smoothing algorithm based on the conservative control (CESCC) strategy is proposed, combined with the current state and resource distribution of the system, in order to calculate the demand of resources for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. In order to reduce the power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation catch up with demands, and improve the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize the utilization of resources, and greatly reduce the power consumption of cloud computing systems.
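The CESCC predictor builds on cubic (triple) exponential smoothing. Below is a minimal sketch of Brown's cubic smoothing forecast; the conservative-control correction of CESCC is not reproduced, and the smoothing constant and demand series are illustrative assumptions.

```python
# Brown's cubic (triple) exponential smoothing: three cascaded smoothings give
# the coefficients of a quadratic forecast F(t+m) = a + b*m + c*m^2 / 2.
def cubic_smoothing_forecast(series, alpha=0.5, m=1):
    s1 = s2 = s3 = series[0]
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * ((6 - 5 * alpha) * s1
        - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m * m

demand = [100, 104, 109, 115, 122, 130, 139]   # e.g. VM requests per interval
print(f"next-interval demand forecast: {cubic_smoothing_forecast(demand):.1f}")
```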
Resource reconstruction algorithms are studied in this paper to solve the problem of resource on-demand allocation and improve the efficiency of resource utilization in a virtual computing resource pool. Based on the idea of resource virtualization and the analysis of resource status transitions, the resource allocation process and the necessity of resource reconstruction are presented. Resource reconstruction algorithms are designed to determine the resource reconstruction types, and it is shown that they can achieve the goal of resource on-demand allocation through three methodologies: resource combination, resource split, and resource random adjustment. The effects that resource users have on the resource reconstruction results, the deviation between resources and requirements, and the uniformity of resource distribution are studied in three experiments. The experiments show that resource reconstruction has a close relationship with resource requirements, but not with the current distribution of resources. The algorithms can complete the resource adjustment at a lower cost and easily form logical resources that match the demands of resource users.
In the centralized cellular network architecture, the concept of the virtualized Base Station (VBS) becomes attractive since it enables all base stations (BSs) to share computing resources in a dynamic manner. This can significantly improve the utilization efficiency of computing resources. In this paper, we study the computing resource allocation strategy for one VBS by considering the non-negligible effect of the delay introduced by switches. Specifically, we formulate the VBS's sum computing rate maximization as a set optimization problem. To address this problem, we first propose a computing resource scheduling algorithm, namely, weight before one-step-greedy (WBOSG), which has linear computation complexity and considerable performance. Then, the OSG retreat (OSG-R) algorithm is developed to further improve the system performance at the expense of computational complexity. Simulation results under practical settings are provided to validate the two proposed algorithms.
Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources in the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.
Load-time series data in mobile cloud computing of the Internet of Vehicles (IoV) usually have linear and nonlinear composite characteristics. In order to accurately describe the dynamic change trend of such loads, this study designs a load prediction method using the resource scheduling model for mobile cloud computing of IoV. Firstly, a chaotic analysis algorithm is implemented to process the load-time series, while some learning samples for load prediction are constructed. Secondly, a support vector machine (SVM) is used to establish a load prediction model, and an improved artificial bee colony (IABC) function is designed to enhance the learning ability of the SVM. Finally, a CloudSim simulation platform is created to select the per-minute CPU load history data in the mobile cloud computing system, which is composed of 50 vehicles, as the data set; and a comparison experiment is conducted using a grey model, a back propagation neural network, a radial basis function (RBF) neural network and an RBF kernel function of SVM. As shown in the experimental results, the prediction accuracy of the method proposed in this study is significantly higher than that of the other models, with a significantly reduced real-time prediction error for resource loading in mobile cloud environments. Compared with single-prediction models, the proposed method can build up multidimensional time series to capture complex load time series, fit and describe load change trends, approximate the load time variability more precisely, and deliver strong generalization ability to load prediction models for mobile cloud computing resources.
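The IABC-enhanced SVM can be approximated by a bee-colony-style search over the SVR hyperparameters (C, gamma). The sketch below keeps only a greedy, employed-bee-like update on a synthetic per-minute load trace; the real IABC adds onlooker and scout phases, and all settings here are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic per-minute CPU load and lagged features (assumed data, order p = 4).
rng = np.random.default_rng(2)
t = np.arange(200)
load = 40 + 8 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, t.size)
p = 4
X = np.array([load[i - p:i] for i in range(p, len(load))])
y = load[p:]

def quality(sol):
    C, gamma = sol
    svr = SVR(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(svr, X, y, cv=3, scoring="neg_mean_squared_error").mean()

LO, HI = np.array([0.1, 1e-3]), np.array([100.0, 1.0])   # search bounds
food = rng.uniform(LO, HI, size=(6, 2))                  # colony of candidates
fit = np.array([quality(s) for s in food])

for _ in range(15):                                       # employed-bee style updates
    for i in range(len(food)):
        k = rng.integers(len(food))
        trial = np.clip(food[i] + rng.uniform(-1, 1, 2) * (food[i] - food[k]), LO, HI)
        f = quality(trial)
        if f > fit[i]:                                    # greedy replacement
            food[i], fit[i] = trial, f

best = food[fit.argmax()]
print(f"best C={best[0]:.2f}, gamma={best[1]:.4f}, CV MSE={-fit.max():.3f}")
```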
To accommodate the diversified emerging use cases in 5G, radio access networks (RAN) are required to be more flexible, open, and versatile. They are evolving towards cloudification, intelligence and openness. By embedding computing capabilities within the RAN, it can be transformed into a natural, cost-effective radio edge computing platform, offering a great opportunity to further enhance RAN agility for diversified services and improve users' quality of experience (QoE). In this article, a logical architecture enabling deep convergence of communication and computing in the RAN is proposed based on O-RAN. The scenarios and potential benefits of sharing RAN computing resources are first analyzed. Then, the requirements, design principles and logical architecture are introduced. Involved key technologies are also discussed, including heterogeneous computing infrastructure, unified computing and communication task modeling, joint communication and computing orchestration, and RAN computing data routing. Following that, a VR use case is studied to illustrate the superiority of the joint communication and computing optimization. Finally, challenges and future trends are highlighted to provide some insights on potential future work for researchers in this field.
Vehicular Edge Computing (VEC) brings computational resources into close proximity to the service requestors and thus supports explosive computing demands from smart vehicles. However, the limited computing capability of VEC cannot simultaneously respond to large numbers of offloading requests, restricting the performance of the VEC system. Besides, a mass of traffic data can put tremendous pressure on the front-haul links between vehicles and the edge server. To strengthen the performance of VEC, in this paper we propose to place services beforehand at the edge server, e.g., by deploying the service/task-oriented data (e.g., related libraries and databases) in advance at the network edge, instead of downloading it from the remote data center or offloading it from vehicles during the runtime. We formulate the service placement problem in VEC to minimize the average response latency for all requested services along the slotted timeline. Specifically, the time-slot-spanned optimization problem is converted into per-slot optimization problems based on Lyapunov optimization. Then a greedy heuristic is introduced into the drift-plus-penalty-based algorithm to seek an approximate solution. The simulation results reveal its advantages over other strategies in terms of optimal values, and our strategy can satisfy the long-term energy constraint.
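A per-slot drift-plus-penalty step can be sketched as follows: a virtual queue Q tracks the accumulated violation of a long-term energy budget, and each slot the placement minimizing V * latency + Q * energy is chosen greedily. The service table, budget, weight V, and the one-service-per-slot simplification are all assumptions for illustration; the paper's greedy heuristic operates over sets of services.

```python
import random

# Drift-plus-penalty sketch: each service is (latency, energy); Q is the
# virtual energy-deficit queue enforcing the long-term energy constraint.
SERVICES = {"nav": (0.05, 2.0), "vision": (0.20, 5.0), "infotain": (0.10, 3.0)}
E_BUDGET, V = 3.5, 10.0          # per-slot energy budget and latency weight
Q = 0.0

for slot in range(5):
    requested = random.sample(sorted(SERVICES), k=2)
    # Greedy per-slot choice: minimize V*latency + Q*energy.
    choice = min(requested, key=lambda s: V * SERVICES[s][0] + Q * SERVICES[s][1])
    latency, energy = SERVICES[choice]
    Q = max(Q + energy - E_BUDGET, 0.0)   # virtual queue update
    print(f"slot {slot}: serve {choice}, latency={latency}, Q={Q:.2f}")
```

A growing Q makes energy-hungry placements progressively less attractive, which is how the long-term energy constraint is satisfied without solving the time-coupled problem directly.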
Cloud computing provides the essential infrastructure for multi-tier Ambient Assisted Living (AAL) applications that facilitate people's lives. Resource provisioning is a critically important problem for AAL applications in cloud data centers (CDCs). This paper focuses on modeling and analysis of multi-tier AAL applications, and aims to optimize resource provisioning while meeting requests' response time constraint. This paper models a multi-tier AAL application as a hybrid multi-tier queueing model consisting of an M/M/c queueing model and multiple M/M/1 queueing models. Then, virtual machine (VM) allocation is formulated as a constrained optimization problem in a CDC, and is further solved with the proposed heuristic VM allocation algorithm (HVMA). The results demonstrate that the proposed model and algorithm can effectively achieve dynamic resource provisioning while meeting the performance constraint.
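For the M/M/c tier of such a hybrid queueing model, the mean response time follows from the Erlang C formula. A short sketch, with hypothetical arrival and service rates rather than the paper's values:

```python
from math import factorial

# Mean response time of an M/M/c queue via Erlang C:
# P(wait) = (a^c / (c!(1-rho))) / (sum_{k<c} a^k/k! + a^c/(c!(1-rho))),
# Wq = P(wait) / (c*mu - lambda), W = Wq + 1/mu, where a = lambda/mu, rho = a/c.
def mmc_response_time(lam: float, mu: float, c: int) -> float:
    a = lam / mu                      # offered load
    rho = a / c
    assert rho < 1, "queue must be stable"
    erlang_c = (a**c / (factorial(c) * (1 - rho))) / (
        sum(a**k / factorial(k) for k in range(c)) + a**c / (factorial(c) * (1 - rho)))
    wq = erlang_c / (c * mu - lam)    # mean waiting time in queue
    return wq + 1 / mu                # plus mean service time

# Hypothetical AAL front tier: 12 req/s arriving, 5 req/s served per VM.
for c in range(3, 7):
    print(f"c={c}: mean response time = {mmc_response_time(12.0, 5.0, c)*1000:.1f} ms")
```

Sweeping c this way is exactly the kind of evaluation a VM allocation heuristic performs when deciding how many VMs a tier needs to meet its response time constraint.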
Aiming at factories with high complexity and many terminals in the industrial Internet of Things (IIoT), a hierarchical edge networking collaboration (HENC) framework based on cloud-edge collaboration and computing first networking (CFN) is proposed to effectively improve the capability of task processing with fixed computing resources at the edge. To optimize the delay and energy consumption in HENC, a multi-objective optimization (MOO) problem is formulated. Furthermore, to improve the efficiency and reliability of the system, a resource prediction model based on ridge regression (RR) is proposed to forecast the task size of the next time slot, and an emergency-aware (EA) computing resource allocation algorithm is proposed to reallocate tasks in the edge CFN. Based on the simulation results, the EA algorithm is superior to greedy resource allocation in time delay, energy consumption, and quality of service (QoS), especially with limited computing resources.
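A minimal sketch of an RR-based task-size forecaster: sklearn's Ridge is fitted on a sliding window of past slots to predict the next slot. The window length, regularization strength, and synthetic workload trace are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Ridge-regression forecaster: predict next-slot aggregate task size from the
# previous w slots. Synthetic trace and hyperparameters are assumed values.
rng = np.random.default_rng(3)
t = np.arange(500)
task_size = 20 + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 1, t.size)  # MB/slot

w = 8                                 # sliding-window length (assumed)
X = np.array([task_size[i - w:i] for i in range(w, len(task_size))])
y = task_size[w:]

model = Ridge(alpha=1.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
mae = np.abs(pred - y[400:]).mean()
print(f"next-slot task-size MAE: {mae:.2f} MB")
```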
Conventional resource provisioning algorithms focus on how to maximize resource utilization and meet a fixed constraint on response time, which is written in the service level agreement (SLA). Unfortunately, the expected response time is highly variable and is usually longer than the value in the SLA, which leads to poor resource utilization and unnecessary server migrations. We develop a framework for customer-driven dynamic resource allocation in cloud computing, termed CDSMS (customer-driven service management system). The framework's contributions are twofold. First, it can reduce the total number of migrations by adjusting the response time parameters dynamically according to customers' profiles. Second, it can automatically choose the best resource provisioning algorithm in different scenarios to improve resource utilization. Finally, we perform a series of experiments on a real cloud computing platform. Experimental results show that CDSMS provides a satisfactory solution for the prediction of the expected response time and the interval period between two tasks, and reduces the total resource usage cost.
Funding (MEC-enabled STN offloading paper): supported by the National Key Research and Development Program of China (2018YFC1504502).
Funding (industrial IoT bandwidth and computation paper): supported in part by the National Natural Science Foundation of China under Grant No. 62172445, and in part by the Young Talents Plan of Hunan Province, China.
Funding (vehicular JAWC paper): supported in part by the National Key R&D Program of China under Grant 2019YFE0114000, in part by the National Natural Science Foundation of China under Grant 61701042, in part by the 111 Project of China (Grant No. B16006), and by the research foundation of the Ministry of Education-China Mobile under Grant MCM20180101.
Funding (data grid ARSVR monitoring paper): the National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z404).
Funding (cloud VM placement paper): supported in part by the Natural Science Foundation of the Education Department of Henan Province (Grant 22A520025), the National Natural Science Foundation of China (Grant 61975053), and the National Key Research and Development project of Quality Information Control Technology for Multi-Modal Grain Transportation Efficient Connection (2022YFD2100202).
Funding (ZS-ARA paper): partially supported by the Shandong Joint Fund of the National Natural Science Foundation of China (U2006228) and the National Natural Science Foundation of China (61603244).
Funding (CSBS resource adjustment paper): supported in part by the National Science Foundation of China under Grant number 61431001 and the Beijing Talents Fund under Grant number 2015000021223ZK31.
Funding (dispersed computing paper): supported in part by the National Science Foundation Project of P.R. China (No. 61931001) and the Scientific and Technological Innovation Foundation of Foshan, USTB (No. BK20AF003).
Acknowledgements (MTC scheduling paper): The authors would like to thank the reviewers for their detailed reviews and constructive comments, which have helped improve the quality of this paper. The research has been partly supported by the National Natural Science Foundation of China No. 61272528 and No. 61034005, and the Central University Fund (ID-ZYGX2013J073).
Funding (green cloud pre-allocation paper): supported by the National Natural Science Foundation of China (61472192, 61202004), the Special Fund for Fast Sharing of Science Paper in Net Era by CSTD (2013116), and the Natural Science Fund of Higher Education of Jiangsu Province (14KJB520014).
Funding (resource reconstruction paper): supported by the National High Technology Research and Development Program of China (863 Program) (No. 2007AA010305) and the Excellent Doctor Degree Dissertation Fund of Xi'an University of Technology (No. 102-211007).
Funding (VBS computing resource allocation paper): funded by the key project of the National Natural Science Foundation of China (No. 61431001), the National High-Tech R&D Program (863 Program, 2015AA01A705), and the New Technology Star Plan of Beijing (No. xx2013052).
Funding (Web-based resource publishing paper): sponsored by the National Natural Science Foundation of China.
Funding (IoV load prediction paper): supported by the Shandong medical and health science and technology development plan project (No. 202012070393).
Funding (RAN communication-computing convergence paper): jointly supported by the Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center, the National Key Research and Development Program of China under Grant 2021YFB2900200, and the National Natural Science Foundation of China under Grants 62201073 and 61925101.
Funding (VEC service placement paper): supported by the National Natural Science Foundation of China (No. 62071327) and the Tianjin Science and Technology Planning Project (No. 22ZYYYJC00020).
Funding (HENC IIoT paper): supported by the National Natural Science Foundation of China (61971050).
Funding (CDSMS paper): supported by the National Natural Science Foundation of China (61272454).