Journal Articles
20 articles found
1. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. China Communications (SCIE, CSCD), 2024, No. 3, pp. 230-246 (17 pages)
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on the device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
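The offloading-decision step described above is a bandit problem. A minimal sketch of UCB-style target selection follows; the delays, exploration constant, and three-target setup are invented for illustration and do not reproduce the paper's cooperation-aided variant:

```python
import math
import random

def ucb_select(avg_delay, counts, t, c=2.0):
    """Pick the offloading target (arm) with the best UCB score.

    avg_delay: observed mean task delay per target (lower is better),
    counts: times each target was chosen, t: current round.
    Rewards are negated delays, so minimizing delay means maximizing UCB.
    """
    for i in range(len(avg_delay)):
        if counts[i] == 0:
            return i  # try every target at least once
    scores = [-avg_delay[i] + math.sqrt(c * math.log(t) / counts[i])
              for i in range(len(avg_delay))]
    return max(range(len(scores)), key=scores.__getitem__)

# toy run: two edge servers and a satellite, true mean delays 1.0, 0.6, 1.4
random.seed(0)
true_delay = [1.0, 0.6, 1.4]
avg, cnt = [0.0] * 3, [0] * 3
for t in range(1, 501):
    arm = ucb_select(avg, cnt, t)
    d = random.gauss(true_delay[arm], 0.1)
    cnt[arm] += 1
    avg[arm] += (d - avg[arm]) / cnt[arm]  # running mean update
print(cnt)  # the low-delay target should dominate the pull counts
```

Over the 500 rounds the exploration bonus shrinks for well-sampled targets, so choices concentrate on the genuinely fastest one.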
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
2. Efficient Bandwidth Allocation and Computation Configuration in Industrial IoT
Authors: HUANG Rui, LI Huilin, ZHANG Yongmin. ZTE Communications, 2023, No. 1, pp. 55-63 (9 pages)
With the advancement of the Industrial Internet of Things (IoT), the rapidly growing demand for data collection and processing poses a huge challenge to the design of data transmission and computation resources in the industrial scenario. Taking advantage of the improved model accuracy offered by machine learning algorithms, we investigate the inner relationship between system performance and data transmission and computation resources, and then analyze the impacts of bandwidth allocation and computation resources on the accuracy of the system model in this paper. A joint bandwidth allocation and computation resource configuration scheme is proposed, and the Karush-Kuhn-Tucker (KKT) conditions are used to obtain an optimal bandwidth allocation and computation configuration decision, which can minimize the total computation resource requirement and ensure the system accuracy meets the industrial requirements. Simulation results show that the proposed bandwidth allocation and computation resource configuration scheme can reduce computing resource usage by 10% compared to the average allocation strategy.
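The abstract's KKT-based allocation can be illustrated on a toy problem with a closed-form solution: minimize total transfer time Σ dᵢ/bᵢ subject to Σ bᵢ = B. Stationarity gives dᵢ/bᵢ² = λ for all i, so bᵢ ∝ √dᵢ. This is a stand-in in the spirit of the paper, not its actual system model:

```python
import math

def kkt_bandwidth_split(data_sizes, total_bw):
    """Closed-form KKT solution of min sum_i d_i/b_i s.t. sum_i b_i = B:
    each device's bandwidth share is proportional to sqrt(d_i)."""
    roots = [math.sqrt(d) for d in data_sizes]
    scale = total_bw / sum(roots)
    return [scale * r for r in roots]

# three devices with data sizes 4, 16, 36 sharing 24 units of bandwidth
bw = kkt_bandwidth_split([4.0, 16.0, 36.0], 24.0)
print(bw)  # [4.0, 8.0, 12.0]
```

Devices with more data get proportionally more (but sub-linearly more) bandwidth, which is the typical shape of a water-filling-style KKT solution.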
Keywords: bandwidth allocation; computation resource management; industrial IoT; system accuracy
3. Adaptive computational resource allocation for sensor networks
Authors: 王典洪, 费娥, 阎毓杰. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2008, No. 1, pp. 129-134 (6 pages)
To efficiently utilize the limited computational resources in real-time sensor networks, this paper focuses on the challenge of computational resource allocation in sensor networks and provides a solution with methods from economics. It designs a microeconomic system in which the applications distribute their computational resource consumption across sensor networks by virtue of mobile agents. Further, it proposes a market-based computational resource allocation policy named MCRA, which satisfies the uniform consumption of computational energy in the network and the optimal division of a single computational capacity among multiple tasks. The simulation in a target-tracking scenario demonstrates that MCRA realizes an efficient allocation of computational resources according to task priority, achieves superior allocation and equilibrium performance compared to traditional allocation policies, and ultimately prolongs the system lifetime.
Keywords: sensor networks; computational resource allocation; market mechanism; mobile agent; Nash equilibrium
4. Joint Allocation of Wireless Resource and Computing Capability in MEC-Enabled Vehicular Network (Cited: 8)
Authors: Yanzhao Hou, Chengrui Wang, Min Zhu, Xiaodong Xu, Xiaofeng Tao, Xunchao Wu. China Communications (SCIE, CSCD), 2021, No. 6, pp. 64-76 (13 pages)
In an MEC-enabled vehicular network with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of SINR into the calculation of the matrix maximum row sum, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
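The maximum-row-sum trick mentioned above rests on a standard result: for a nonnegative matrix, the spectral radius is bounded by the largest row sum (a Gershgorin-type bound). A small sketch, with an invented interference matrix among three clustered V2X links:

```python
def max_row_sum_bound(A):
    """Upper bound on the spectral radius of a nonnegative matrix:
    rho(A) <= max_i sum_j A[i][j]. Cheap to evaluate, so it can replace
    an eigenvalue computation when bounding accumulated interference."""
    return max(sum(row) for row in A)

# toy normalized interference matrix (values are assumptions)
A = [[0.0, 0.2, 0.1],
     [0.3, 0.0, 0.2],
     [0.1, 0.1, 0.0]]
print(max_row_sum_bound(A))  # 0.5
```

Keeping this bound below a threshold constrains every link's accumulated interference without computing eigenvalues, which is the complexity reduction the abstract alludes to.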
Keywords: vehicular network; delay optimization; wireless resource allocation; matrix spectral radius; MEC computation resource allocation
5. Load prediction of grid computing resources based on ARSVR method
Authors: 黄刚, 王汝传, 解永娟, 石小娟. Journal of Southeast University (English Edition) (EI, CAS), 2009, No. 4, pp. 451-455 (5 pages)
Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of computing resources in the monitoring model are analyzed. Then, a time-series autoregressive prediction model is devised, and an autoregressive support vector regression (ARSVR) monitoring method is put forward to predict the node load of the data grid. Finally, a model for historical observation sequences is set up using the autoregressive (AR) model and the model order is determined. The support vector regression (SVR) model is trained using historical data and the regression function is obtained. Simulation results show that the ARSVR method can effectively predict the node load.
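The core of an AR-based predictor is turning a load history into lagged learning samples and fitting a regression to them. The sketch below builds the samples and fits a plain least-squares AR(1) as a stand-in for the paper's SVR stage; the load values are invented:

```python
def make_lagged_samples(series, p):
    """Build (X, y) learning samples from a load history for an order-p
    autoregressive model: X[t] = series[t-p:t], target y[t] = series[t]."""
    X = [series[i - p:i] for i in range(p, len(series))]
    return X, series[p:]

def fit_ar1(series):
    """Least-squares AR(1) fit, load[t] ~ a*load[t-1] + b. The paper trains
    an SVR on such samples; ordinary least squares stands in for it here."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

load = [0.50, 0.55, 0.52, 0.58, 0.56, 0.60]  # invented node-load history
a, b = fit_ar1(load)
print(round(a * load[-1] + b, 3))  # one-step-ahead node-load forecast
```

Replacing `fit_ar1` with an SVR trained on `make_lagged_samples(load, p)` recovers the ARSVR structure: the AR part fixes the sample layout and order p, the SVR part supplies the regression function.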
Keywords: grid; autoregressive support vector regression algorithm; computing resource; load prediction
6. Systematic Cloud-Based Optimization: Twin-Fold Moth Flame Algorithm for VM Deployment and Load-Balancing
Authors: Umer Nauman, Yuhong Zhang, Zhihui Li, Tong Zhen. Intelligent Automation & Soft Computing, 2024, No. 3, pp. 477-510 (34 pages)
Cloud computing has gained significant recognition due to its ability to provide a broad range of online services and applications. However, existing commercial cloud computing models concentrate computational assets, such as storage and server infrastructure, in a limited number of large-scale worldwide data facilities. Optimizing the deployment of virtual machines (VMs) is crucial in this scenario to ensure system dependability, performance, and minimal latency. A significant barrier is load distribution, particularly when striving for improved energy consumption in a hypothetical grid computing framework; this design employs load-balancing techniques to allocate user workloads across several virtual machines. To address this challenge, we propose the twin-fold moth flame technique, a very effective optimization method. The twin-fold moth flame method is designed to consider various constraints, including energy efficiency, lifespan analysis, and resource expenditure, and it provides a thorough approach to evaluating total costs in the cloud computing environment. In assessing the efficacy of our suggested strategy, the study analyzes metrics such as energy efficiency, lifespan analysis, and resource expenditure. This investigation aims to enhance cloud computing techniques by developing a new optimization algorithm that considers multiple factors for effective virtual machine placement and load balancing. The proposed work demonstrates notable improvements of 12.15%, 10.68%, 8.70%, 13.29%, 18.46%, and 33.39% for 40-node data over the artificial bee colony-bat algorithm, ant colony optimization, crow search algorithm, krill herd, whale optimization genetic algorithm, and improved Lévy-based whale optimization algorithm, respectively.
Keywords: optimizing cloud computing; deployment of virtual machines; load-balancing; twin-fold moth flame algorithm; grid computing; computational resource distribution; data virtualization
7. Zoning Search With Adaptive Resource Allocating Method for Balanced and Imbalanced Multimodal Multi-Objective Optimization (Cited: 5)
Authors: Qinqin Fan, Okan K. Ersoy. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 6, pp. 1163-1176 (14 pages)
Maintaining population diversity is an important task in multimodal multi-objective optimization. Although zoning search (ZS) can improve diversity in the decision space, assigning the same computational costs to each search subspace may be wasteful when computational resources are limited, especially on imbalanced problems. To alleviate this issue, a zoning search with adaptive resource allocating (ZS-ARA) method is proposed in the current study. In the proposed ZS-ARA, the entire search space is divided into many subspaces to preserve diversity in the decision space and to reduce problem complexity. Moreover, the computational resources can be automatically allocated among all the subspaces. ZS-ARA is compared with seven algorithms on two different types of multimodal multi-objective problems (MMOPs), namely, balanced and imbalanced MMOPs. The results indicate that, similarly to ZS, ZS-ARA achieves high performance on the balanced MMOPs. Also, it can greatly assist a "regular" algorithm in improving its performance on the imbalanced MMOPs, and is capable of allocating the limited computational resources dynamically.
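One simple way to realize "automatically allocated among all the subspaces" is to split each generation's evaluation budget in proportion to the recent improvement observed in each subspace, with a floor so no subspace starves. This allocation rule is an assumption for illustration, not the paper's exact mechanism:

```python
def allocate_budget(improvements, total_evals, floor=1):
    """Split an evaluation budget among search subspaces proportionally to
    each subspace's recent improvement; `floor` evaluations are guaranteed
    per subspace so stagnant regions are still occasionally probed."""
    n = len(improvements)
    if sum(improvements) == 0:
        return [total_evals // n] * n  # no signal yet: split evenly
    spare = total_evals - floor * n
    s = sum(improvements)
    alloc = [floor + int(spare * imp / s) for imp in improvements]
    # hand the rounding remainder to the most promising subspace
    alloc[max(range(n), key=improvements.__getitem__)] += total_evals - sum(alloc)
    return alloc

# four subspaces; the third has stagnated but still gets its floor
plan = allocate_budget([0.5, 0.1, 0.0, 0.4], 100)
print(plan)
```

On imbalanced MMOPs this keeps most evaluations in hard, still-improving subspaces while easy subspaces, which converge early, stop consuming budget.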
Keywords: computational resource allocation; decision space decomposition; evolutionary computation; multimodal multi-objective optimization
8. A Computing Resource Adjustment Mechanism for Communication Protocol Processing in Centralized Radio Access Networks (Cited: 3)
Authors: Guowei Zhai, Lin Tian, Yiqing Zhou, Qian Sun, Jinglin Shi. China Communications (SCIE, CSCD), 2016, No. 12, pp. 79-89 (11 pages)
The centralized radio access cellular network infrastructure based on the centralized Super Base Station (CSBS) is a promising solution to reduce the high construction cost and energy consumption of conventional cellular networks. With CSBS, the computing resource for communication protocol processing can be managed flexibly according to the protocol load to improve resource efficiency. Since the protocol load changes frequently and may exceed the capacity of processors, load balancing is needed. However, existing load balancing mechanisms used in data centers cannot satisfy the real-time requirement of communication protocol processing. Therefore, a new computing resource adjustment scheme is proposed for communication protocol processing in the CSBS architecture. First of all, the main principles of protocol processing resource adjustment are summarized, followed by an analysis of the processing resource outage probability, i.e., the probability that the computing resource becomes inadequate for protocol processing as the load changes. Following the adjustment principles, the proposed scheme is designed to reduce the processing resource outage probability based on the optimized connected graph, which is constructed by the approximate Kruskal algorithm. Simulation results show that, compared with conventional load balancing mechanisms, the proposed scheme can greatly reduce the occurrences of inadequate processing resource and the additional resource consumption of adjustment.
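The connected graph the scheme builds comes from a Kruskal-style construction. The sketch below is the textbook Kruskal algorithm with a union-find, over an invented processor graph; the paper uses an approximate variant, so treat this as the baseline it approximates:

```python
def kruskal_mst(n, edges):
    """Standard Kruskal's algorithm: grow a minimum spanning tree over n
    nodes from edges given as (weight, u, v), using union-find with path
    halving to reject cycle-forming edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:           # edge connects two components: keep it
            parent[ru] = rv
            mst.append((u, v))
            total += w
    return mst, total

# toy graph of 4 processors; weights could model switching delay (assumed)
edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 2, 3), (3, 1, 3)]
mst, total = kruskal_mst(4, edges)
print(total)  # 1 + 2 + 3 = 6
```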
Keywords: computing resource adjustment; communication protocol processing; cloud RAN; super BS; processing resource outage probability; optimized connected graph
9. A Task-Resource Joint Management Model with Intelligent Control for Mission-Aware Dispersed Computing (Cited: 2)
Authors: Chengcheng Zhou, Chao Gong, Hongwen Hui, Fuhong Lin, Guangping Zeng. China Communications (SCIE, CSCD), 2021, No. 10, pp. 214-232 (19 pages)
Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network, which can make full use of idle computing resources. Realizing the overall resource allocation of a dispersed computing system is a significant challenge. In detail, by jointly managing the task requests of external users and the resource allocation of the internal system to achieve dynamic balance, the efficient and stable operation of the system can be guaranteed. In this paper, we first propose a task-resource joint management model, which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing. Secondly, to avoid downtime caused by resource overload, we introduce intelligent control into the task-resource joint management model. The existence and stability of the positive periodic solution of the model can be obtained by theoretical analysis, which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy. Additionally, to improve system utilization, a task-resource joint management model with bi-directional intelligent control is further explored. Setting control thresholds for the two resources not only restrains system resource overload, but also applies positive incentive control when a large number of idle resources appear. The existence and stability of the positive periodic solution of the model are proved theoretically; that is, the model effectively avoids the two extreme cases and ensures the efficient and stable operation of the system. Finally, numerical simulation verifies the correctness and validity of the theoretical results.
Keywords: dispersed computing; computing resource management; intelligent control
10. Efficient Task Scheduling for Many Task Computing with Resource Attribute Selection (Cited: 3)
Authors: ZHAO Yong, CHEN Liang, LI Youfu, TIAN Wenhong. China Communications (SCIE, CSCD), 2014, No. 12, pp. 125-140 (16 pages)
Many Task Computing (MTC) is a new class of computing paradigm in which the aggregate number of tasks, quantity of computing, and volume of data may be extremely large. With the advent of cloud computing and the big data era, scheduling and executing large-scale computing tasks efficiently and allocating resources to tasks reasonably are becoming quite challenging problems. To improve both task execution and resource utilization efficiency, we present a task scheduling algorithm with resource attribute selection, which can select the optimal node to execute a task according to its resource requirements and the fitness between the resource node and the task. Experimental results show significant improvement in execution throughput and resource utilization compared with three other algorithms and four scheduling frameworks. In the scheduling algorithm comparison, the throughput is 77% higher than the Min-Min algorithm and the resource utilization can reach 91%. In the scheduling framework comparison, the throughput (with work-stealing) is at least 30% higher than the other frameworks and the resource utilization reaches 94%. The scheduling algorithm can serve as a good model for practical MTC applications.
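A fitness-based node pick of the kind described can be sketched as a best-fit rule: only nodes meeting every requirement are eligible, and among those the tightest fit wins so large nodes stay free for large tasks. The attribute names and the fitness formula are illustrative assumptions, not the paper's definitions:

```python
def select_node(task_req, nodes):
    """Pick the node whose resources best fit a task's requirements.
    Eligible nodes must satisfy every requirement; among them, the one
    with the least leftover capacity (tightest fit) is chosen."""
    best, best_fit = None, float("inf")
    for name, cap in nodes.items():
        if all(cap[k] >= v for k, v in task_req.items()):
            fit = sum(cap[k] - v for k, v in task_req.items())
            if fit < best_fit:
                best, best_fit = name, fit
    return best  # None if no node can host the task

nodes = {"n1": {"cpu": 8, "mem": 16},
         "n2": {"cpu": 4, "mem": 8},
         "n3": {"cpu": 2, "mem": 4}}
print(select_node({"cpu": 3, "mem": 6}, nodes))  # n2: eligible and tightest
```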
Keywords: task scheduling; resource attribute selection; many task computing; resource utilization; work-stealing
11. Resource pre-allocation algorithms for low-energy task scheduling of cloud computing (Cited: 4)
Authors: Xiaolong Xu, Lingling Cao, Xinheng Wang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2016, No. 2, pp. 457-469 (13 pages)
In order to lower the power consumption and improve the coefficient of resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem to a virtual machine deployment issue via virtualization technology. Secondly, the future workloads of the system need to be predicted: a cubic exponential smoothing algorithm based on the conservative control (CESCC) strategy is proposed, combined with the current state and resource distribution of the system, to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce the power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation catch up with demands, and improve the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize the utilization of resources, and greatly reduce the power consumption of cloud computing systems.
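Cubic (triple) exponential smoothing in the style used for the workload prediction step can be sketched with Brown's classic formulas; the smoothing constant and demand figures are illustrative, and the conservative-control correction of CESCC is omitted:

```python
def triple_smooth_forecast(series, alpha=0.5, horizon=1):
    """Brown's triple (cubic) exponential smoothing forecast.
    Three stacked smoothers capture level, trend, and curvature; the
    horizon-step forecast is a quadratic extrapolation of the three."""
    s1 = s2 = s3 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = alpha / (2 * (1 - alpha) ** 2) * (
        (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha / (1 - alpha)) ** 2 * (s1 - 2 * s2 + s3)
    return a + b * horizon + 0.5 * c * horizon ** 2

demand = [100, 110, 123, 134, 148, 160]  # hypothetical per-period VM demand
forecast = triple_smooth_forecast(demand)
print(round(forecast, 2))  # extrapolates the upward trend past the last value
```

The pre-allocation step would then power on just enough hosts to cover `forecast`, erring on the conservative side as the CESCC strategy prescribes.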
Keywords: green cloud computing; power consumption; prediction; resource allocation; probabilistic matching; simulated annealing
12. Resource Reconstruction Algorithms for On-demand Allocation in Virtual Computing Resource Pool
Authors: Xiao-Jun Chen, Jing Zhang, Jun-Huai Li, Xiang Li. International Journal of Automation and Computing (EI), 2012, No. 2, pp. 142-154 (13 pages)
Resource reconstruction algorithms are studied in this paper to solve the problem of resource on-demand allocation and improve the efficiency of resource utilization in a virtual computing resource pool. Based on the idea of resource virtualization and the analysis of resource status transitions, the resource allocation process and the necessity of resource reconstruction are presented. Resource reconstruction algorithms are designed to determine the resource reconstruction types, and it is shown that they can achieve the goal of resource on-demand allocation through three methodologies: resource combination, resource split, and resource random adjustment. The effects that resource users have on the reconstruction results, the deviation between resources and requirements, and the uniformity of resource distribution are studied in three experiments. The experiments show that resource reconstruction is closely related to resource requirements, but not to the current distribution of resources. The algorithms can complete the resource adjustment at a lower cost and easily form logical resources that match the demands of resource users.
Keywords: virtual computing systems; virtual computing resource pool; resource allocation; resource reconstruction; status transition; resource combination; resource split; resource adjustment
13. Switching Delay Aware Computing Resource Allocation in Virtualized Base Station
Authors: Mingjin Gao, He (Henry) Chen, Yonghui Li, Yiqing Zhou, Jinglin Shi. China Communications (SCIE, CSCD), 2016, No. 11, pp. 226-233 (8 pages)
In a centralized cellular network architecture, the concept of the virtualized Base Station (VBS) becomes attractive since it enables all base stations (BSs) to share computing resources in a dynamic manner. This can significantly improve the utilization efficiency of computing resources. In this paper, we study the computing resource allocation strategy for one VBS by considering the non-negligible delay introduced by switches. Specifically, we formulate the VBS's sum computing rate maximization as a set optimization problem. To address this problem, we first propose a computing resource scheduling algorithm, namely weight before one-step-greedy (WBOSG), which has linear computation complexity and considerable performance. Then, the OSG retreat (OSG-R) algorithm is developed to further improve system performance at the expense of computational complexity. Simulation results under a practical setting are provided to validate the two proposed algorithms.
Keywords: virtualized base station; parallel computing; computing resource allocation; C-RAN
14. Web-Based Computing Resource Agent Publishing
Authors: 吴志刚, Fang Binxing, Hu Mingzeng. High Technology Letters (EI, CAS), 2000, No. 4, pp. 46-49 (4 pages)
Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources in the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.
Keywords: agent; extensibility; reliability; Web-based computing resource publishing
15. Resource Load Prediction of Internet of Vehicles Mobile Cloud Computing
Authors: Wenbin Bi, Fang Yu, Ning Cao, Russell Higgs. Computers, Materials & Continua (SCIE, EI), 2022, No. 10, pp. 165-180 (16 pages)
Load-time series data in mobile cloud computing for the Internet of Vehicles (IoV) usually have composite linear and nonlinear characteristics. In order to accurately describe the dynamic change trend of such loads, this study designs a load prediction method using the resource scheduling model for mobile cloud computing in IoV. Firstly, a chaotic analysis algorithm is implemented to process the load-time series, while learning samples for load prediction are constructed. Secondly, a support vector machine (SVM) is used to establish a load prediction model, and an improved artificial bee colony (IABC) function is designed to enhance the learning ability of the SVM. Finally, a CloudSim simulation platform is created to select the per-minute CPU load history data of a mobile cloud computing system composed of 50 vehicles as the data set, and a comparison experiment is conducted against a grey model, a back propagation neural network, a radial basis function (RBF) neural network and an RBF-kernel SVM. As shown in the experimental results, the prediction accuracy of the proposed method is significantly higher than that of the other models, with a significantly reduced real-time prediction error for resource loading in mobile cloud environments. Compared with single-prediction models, the proposed method can build multidimensional time series to capture complex load behavior, fit and describe load change trends, approximate load time variability more precisely, and deliver strong generalization ability to load prediction models for mobile cloud computing resources.
Keywords: Internet of Vehicles; mobile cloud computing; resource load prediction; multi-distributed resource computing scheduling; chaos analysis algorithm; improved artificial bee colony function
16. Towards the Deep Convergence of Communication and Computing in RAN: Scenarios, Architecture, Key Technologies, Challenges and Future Trends (Cited: 2)
Authors: Nan Li, Qi Sun, Xiang Li, Fengxian Guo, Yuhong Huang, Ziqi Chen, Yiwei Yan, Mugen Peng. China Communications (SCIE, CSCD), 2023, No. 3, pp. 218-235 (18 pages)
To accommodate the diversified emerging use cases in 5G, radio access networks (RAN) are required to be more flexible, open, and versatile, and are evolving towards cloudification, intelligence and openness. Embedding computing capabilities within the RAN helps transform it into a natural, cost-effective radio edge computing platform, offering a great opportunity to further enhance RAN agility for diversified services and improve users' quality of experience (QoE). In this article, a logical architecture enabling deep convergence of communication and computing in RAN is proposed based on O-RAN. The scenarios and potential benefits of sharing RAN computing resources are first analyzed. Then, the requirements, design principles and logical architecture are introduced. Key technologies involved are also discussed, including heterogeneous computing infrastructure, unified computing and communication task modeling, joint communication and computing orchestration, and RAN computing data routing. Following that, a VR use case is studied to illustrate the superiority of joint communication and computing optimization. Finally, challenges and future trends are highlighted to provide some insights on potential future work for researchers in this field.
Keywords: RAN; computing resource sharing; communication and computing joint design
17. Lyapunov-Guided Optimal Service Placement in Vehicular Edge Computing
Authors: Chaogang Tang, Yubin Zhao, Huaming Wu. China Communications (SCIE, CSCD), 2023, No. 3, pp. 201-217 (17 pages)
Vehicular Edge Computing (VEC) brings computational resources into close proximity to service requestors and thus supports explosive computing demands from smart vehicles. However, the limited computing capability of VEC cannot simultaneously respond to large numbers of offloading requests, which restricts the performance of the VEC system. Besides, a mass of traffic data can put tremendous pressure on the front-haul links between vehicles and the edge server. To strengthen the performance of VEC, in this paper we propose to place services beforehand at the edge server, e.g., by deploying the service/task-oriented data (such as related libraries and databases) in advance at the network edge, instead of downloading them from the remote data center or offloading them from vehicles at runtime. We formulate the service placement problem in VEC to minimize the average response latency for all requested services along a slotted timeline. Specifically, the time-slot-spanning optimization problem is converted into per-slot optimization problems based on Lyapunov optimization. Then a greedy heuristic is introduced into the drift-plus-penalty-based algorithm to seek an approximate solution. The simulation results reveal its advantages over alternatives in terms of optimal values, and our strategy can satisfy the long-term energy constraint.
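The drift-plus-penalty pattern behind such per-slot decisions can be sketched in a few lines: a virtual queue tracks cumulative energy overspend, and each slot greedily minimizes V·latency + Q·energy. The candidate placements, V, and the budget below are invented for illustration, not taken from the paper:

```python
def dpp_choose(options, Q, V, e_budget):
    """One drift-plus-penalty decision: among candidate placements given
    as (latency, energy) pairs, pick the minimizer of V*latency + Q*energy,
    then update the virtual queue Q enforcing the long-term energy budget.
    Larger V favors latency; a grown Q steers choices toward low energy."""
    lat, en = min(options, key=lambda o: V * o[0] + Q * o[1])
    Q = max(Q + en - e_budget, 0.0)
    return (lat, en), Q

# two alternating slot types, each with a fast/costly and a slow/cheap option
slots = [[(1.0, 3.0), (2.5, 1.0)], [(1.2, 3.2), (2.4, 1.1)]] * 50
Q, total_energy = 0.0, 0.0
for options in slots:
    (lat, en), Q = dpp_choose(options, Q, V=10.0, e_budget=1.5)
    total_energy += en
print(round(total_energy / len(slots), 2))  # hovers near the 1.5 budget
```

Because Q stays bounded, the long-run average energy approaches the budget even though no single slot is constrained, which is exactly how the long-term energy constraint is satisfied.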
Keywords: vehicular edge computing; service placement; response latency; computational resources
18. Heuristic Virtual Machine Allocation for Multi-Tier Ambient Assisted Living Applications in a Cloud Data Center
Authors: Jing Bi, Haitao Yuan, Ming Tie, Xiao Song. China Communications (SCIE, CSCD), 2016, No. 5, pp. 56-65 (10 pages)
Cloud computing provides the essential infrastructure for multi-tier Ambient Assisted Living (AAL) applications that facilitate people's lives. Resource provisioning is a critically important problem for AAL applications in cloud data centers (CDCs). This paper focuses on modeling and analysis of multi-tier AAL applications, and aims to optimize resource provisioning while meeting requests' response time constraints. It models a multi-tier AAL application as a hybrid multi-tier queueing model consisting of an M/M/c queueing model and multiple M/M/1 queueing models. Then, virtual machine (VM) allocation is formulated as a constrained optimization problem in a CDC, and is further solved with the proposed heuristic VM allocation algorithm (HVMA). The results demonstrate that the proposed model and algorithm can effectively achieve dynamic resource provisioning while meeting the performance constraint.
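The M/M/c tier of such a hybrid model has a closed-form mean response time via the Erlang C formula, which is what a VM-allocation heuristic would evaluate when testing candidate values of c. A standard-textbook sketch, with illustrative rates:

```python
import math

def mmc_response_time(lam, mu, c):
    """Mean response time of an M/M/c queue via Erlang C.
    lam: arrival rate, mu: per-server service rate, c: servers.
    Requires lam < c*mu for stability."""
    rho = lam / (c * mu)          # server utilization
    a = lam / mu                  # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = a**c / (math.factorial(c) * (1 - rho)) * p0  # P(wait > 0)
    wq = erlang_c / (c * mu - lam)   # mean time spent queueing
    return wq + 1.0 / mu             # plus mean service time

# e.g. 8 requests/s arriving, each VM serves 3/s, 4 VMs provisioned
print(round(mmc_response_time(8.0, 3.0, 4), 4))
```

An HVMA-style heuristic could simply increase c until this value drops below the response-time constraint in the SLA, which is the smallest feasible provisioning for that tier.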
Keywords: ambient assisted living; cloud computing; resource provisioning; virtual machine; heuristic optimization
19. Prediction based dynamic resource allocation method for edge computing first networking
Authors: Zhang Luying, Liu Xiaokai, Li Zhao, Xu Fangmin, Zhao Chenglin. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2023, No. 3, pp. 78-87 (10 pages)
Aiming at factories with high complexity and many terminals in the industrial Internet of things (IIoT), a hierarchical edge networking collaboration (HENC) framework based on cloud-edge collaboration and computing first networking (CFN) is proposed to effectively improve the capability of task processing with fixed computing resources at the edge. To optimize the delay and energy consumption in HENC, a multi-objective optimization (MOO) problem is formulated. Furthermore, to improve the efficiency and reliability of the system, a resource prediction model based on ridge regression (RR) is proposed to forecast the task size of the next time slot, and an emergency-aware (EA) computing resource allocation algorithm is proposed to reallocate tasks in the edge CFN. Based on the simulation results, the EA algorithm is superior to greedy resource allocation in time delay, energy consumption, and quality of service (QoS), especially with limited computing resources.
Keywords: cloud-edge collaboration; computing first networking (CFN); computing resource allocation; multi-objective optimization (MOO)
20. An Optimal Resource Provision Policy in Cloud Computing Based on Customer Profiles
Authors: ZHOU Jingcai, ZHANG Huying, CHEN Yibo. Wuhan University Journal of Natural Sciences (CAS), 2014, No. 3, pp. 213-220 (8 pages)
Conventional resource provision algorithms focus on maximizing resource utilization while meeting a fixed response time constraint written into the service level agreement (SLA). Unfortunately, the expected response time is highly variable and is usually longer than the value in the SLA, which leads to poor resource utilization and unnecessary server migrations. We develop a framework for customer-driven dynamic resource allocation in cloud computing, termed CDSMS (customer-driven service management system). The framework's contributions are twofold. First, it can reduce the total number of migrations by dynamically adjusting the response time parameters according to customers' profiles. Second, it can automatically choose the best resource provision algorithm for different scenarios to improve resource utilization. Finally, we perform experiments on a real cloud computing platform. Experimental results show that CDSMS provides a satisfactory solution for predicting the expected response time and the interval between two tasks, and reduces the total resource usage cost.
Keywords: cloud computing; service level agreement; quality of experience; resource provision policy; customer profiles