Abstract: To solve the task deadlock problem that arises when heuristic resource assignment and task scheduling algorithms fail to consider the interdependence between tasks, an improved algorithm based on the ant colony system (ACS) is proposed. First, the paper explains how to map the resource assignment and task scheduling (RATS) problem onto the optimization selection problem of the task resource assignment graph (TRAG), and how to add a semaphore mechanism to the optimal TRAG to resolve deadlocks. Secondly, it explicates how the grid pheromone system model realizes the ACS-based algorithm: the user agent constructs a TRAG by randomly selecting appropriate resources for each task, and the TRAG is then optimized through the positive feedback and distributed parallel computing mechanisms of the ACS. Simulation results show that the proposed algorithm is effective and efficient in solving the deadlock problem.
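As a rough illustration of the ACS selection and pheromone update steps mentioned above (not the paper's exact formulation, and omitting the semaphore-based deadlock check), the sketch below picks a resource for each task with probability proportional to pheromone^alpha * heuristic^beta and applies the standard ACS local update rule. The resource speeds and all parameter values are assumptions.

```python
import random

# Hypothetical setup: 3 resources with different speeds (the heuristic),
# 5 tasks, and pheromone trails tau[task][resource]. ALPHA/BETA/RHO/Q are
# assumed values, not taken from the paper.
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 1.0
speeds = [1.0, 2.0, 4.0]                       # heuristic desirability per resource
tau = [[1.0] * len(speeds) for _ in range(5)]  # uniform initial pheromone

def select_resource(task):
    """ACS-style roulette-wheel selection of a resource for one task."""
    weights = [tau[task][r] ** ALPHA * speeds[r] ** BETA for r in range(len(speeds))]
    pick, acc = random.random() * sum(weights), 0.0
    for r, w in enumerate(weights):
        acc += w
        if pick <= acc:
            return r
    return len(speeds) - 1

def local_update(task, r):
    """Evaporate and deposit pheromone on the chosen (task, resource) edge."""
    tau[task][r] = (1 - RHO) * tau[task][r] + RHO * Q

assignment = {t: select_resource(t) for t in range(5)}  # one ant's candidate TRAG
for t, r in assignment.items():
    local_update(t, r)
print(assignment)
```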
Funding: Supported by the National Natural Science Foundation of China (61202354, 61272422) and the Scientific and Technological Support Project (Industry) of Jiangsu Province (BE2011189).
Abstract: Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and the network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, largely ignoring the computing power of VMs. This paper proposes a comprehensive resource allocation policy consisting of a data-intensive task scheduling algorithm that takes the computing power of VMs into account and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Simulations show that the proposed algorithms can greatly reduce task completion time while keeping the load of the physical hosts well balanced.
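One way to make the scheduling idea concrete is to rank candidate VMs by an estimated completion time that combines data transfer and computation, as in this sketch; the VM fields and the additive time model are illustrative assumptions rather than the paper's exact policy.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float        # computing power of the VM
    bandwidth: float   # MB/s between the VM's host and the storage node

def estimated_completion(task_mi: float, data_mb: float, vm: VM) -> float:
    # Assumed model: input-data transfer time plus compute time.
    return data_mb / vm.bandwidth + task_mi / vm.mips

def pick_vm(task_mi, data_mb, vms):
    """Choose the VM minimizing the estimated completion time."""
    return min(vms, key=lambda v: estimated_completion(task_mi, data_mb, v))

vms = [VM("vm1", 1000, 50), VM("vm2", 2500, 20), VM("vm3", 1500, 100)]
print(pick_vm(task_mi=8000, data_mb=2000, vms=vms).name)
```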
Funding: Supported by the National Key Research and Development Program of China (2021YFB2900603) and the National Natural Science Foundation of China (61831008).
Abstract: A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, and it is important for improving the match between resources and requirements. Complex algorithms are impractical because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and thereby enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is constructed in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can produce an allocation result for more than 1500 tasks in 14 s with a success rate of more than 91% in a typical scenario, and its response time is 40% lower than that of a conventional GA.
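One plausible reading of the two-dimensional individual and single paternal inheritance ideas is a matrix-shaped chromosome evolved by mutation alone, so each on-board device can evolve its share of offspring without exchanging parent pairs; the fitness function and problem size in the sketch below are placeholders, not the paper's model.

```python
import random

N_TASKS, N_BEAMS = 8, 4   # assumed problem size: tasks x beams matrix chromosome

def random_individual():
    # 2-D individual: individual[t][b] = 1 if beam b serves task t.
    return [[random.randint(0, 1) for _ in range(N_BEAMS)] for _ in range(N_TASKS)]

def fitness(ind):
    # Placeholder objective: serve every task exactly once (the real model
    # would encode the paper's coverage and resource constraints).
    return -sum(abs(sum(row) - 1) for row in ind)

def mutate(parent, rate=0.05):
    # Single-parent inheritance: the child is a mutated copy, no crossover,
    # so a device never needs to pair parents held on other devices.
    return [[1 - g if random.random() < rate else g for g in row] for row in parent]

pop = [random_individual() for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]
    pop = elite + [mutate(random.choice(elite)) for _ in range(15)]
print(fitness(max(pop, key=fitness)))
```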
Funding: Supported by the Social Science Foundation of Hebei Province (HB19JL007), the Education Technology Foundation of the Ministry of Education (2017A01020), and the Natural Science Foundation of Hebei Province (F2021207005).
Abstract: With the rapid development and popularization of 5G and the Internet of Things, a number of new applications have emerged, such as driverless cars. Most of these applications are delay-sensitive, and the cloud-centric architecture shows deficiencies when processing their data: handling the data generated by terminals at the edge of the network is an urgent problem. In 5G environments, edge computing can better meet the needs of low-delay, wide-connection applications and support fast requests from terminal users. However, edge computing only holds the computing advantage at the edge layer; it is difficult to achieve global resource scheduling and configuration, which may lead to low resource utilization, long task processing delay, and unbalanced system load, thereby degrading users' quality of service. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes a genetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve task scheduling and resource allocation, and designs a series of experiments to verify the effectiveness of the GSA-EDGE algorithm. The experimental results show that the proposed method reduces task processing delay compared with both local task processing and average task allocation.
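The abstract does not spell out how the genetic and simulated annealing parts are fused. A common pattern, sketched below under assumed delay values, is a GA whose offspring replace parents through the Metropolis acceptance rule with a cooling temperature, so worse children are occasionally accepted early on to avoid premature convergence.

```python
import math, random

# Assumed per-task delays (ms) for running at each tier; purely illustrative.
DELAY = {"terminal": 40.0, "edge": 15.0, "cloud": 25.0}
TIERS = list(DELAY)
N_TASKS = 12

def cost(plan):
    return sum(DELAY[t] for t in plan)   # total processing delay

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

def mutate(plan):
    plan = list(plan)
    plan[random.randrange(N_TASKS)] = random.choice(TIERS)
    return plan

pop = [[random.choice(TIERS) for _ in range(N_TASKS)] for _ in range(30)]
T = 100.0                                 # initial annealing temperature
while T > 0.1:
    parents = sorted(pop, key=cost)[:10]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(30)]
    # Metropolis acceptance: a worse child may still replace its counterpart,
    # which keeps the GA from converging prematurely.
    nxt = []
    for old, new in zip(pop, children):
        d = cost(new) - cost(old)
        nxt.append(new if d < 0 or random.random() < math.exp(-d / T) else old)
    pop = nxt
    T *= 0.95                             # cooling schedule
print(min(cost(p) for p in pop))
```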
Funding: Supported by the National Key Research and Development Program of China (2020YFB0905900).
Abstract: With the continuous evolution of smart grid and global energy interconnection technology, a large number of intelligent terminals have been connected to the power grid, and they can serve as edge nodes that provide resource services. Traditional cloud computing can provide storage and task computing services in the power grid, but it faces challenges such as resource bottlenecks, time delays, and limited network bandwidth. Edge computing is an effective supplement to cloud computing because it can provide users with local computing services at lower latency. However, because the resources of a single edge node are limited, resource-intensive tasks need to be divided into many subtasks and then assigned to different edge nodes through resource cooperation, which makes efficient task scheduling an important issue. In this paper, a two-layer resource management scheme is proposed based on the concept of edge computing. In addition, a new task scheduling algorithm named GA-EC (Genetic Algorithm for Edge Computing) is put forth, based on a genetic algorithm, that can dynamically schedule tasks according to different scheduling goals. The simulation shows that the proposed algorithm has a beneficial effect on energy consumption and load balancing, and reduces time delay.
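Scheduling according to different goals suggests a fitness function with pluggable objective weights. The sketch below evaluates a subtask-to-node assignment against delay, energy, and load balance under an invented linear node model, so a GA like GA-EC could be retargeted simply by changing the weights.

```python
import statistics

# Hypothetical edge nodes: (capacity in MIPS, joules consumed per MI).
NODES = {"n1": (2000, 0.5), "n2": (1500, 0.3), "n3": (3000, 0.8)}

def evaluate(assignment, weights=(0.4, 0.4, 0.2)):
    """Score a subtask-to-node assignment; lower is better.

    assignment: list of (subtask_mi, node_name) pairs. The weights select the
    scheduling goal (delay, energy, load balance); all values are illustrative.
    """
    load = {n: 0.0 for n in NODES}
    energy = 0.0
    for mi, node in assignment:
        cap, jpm = NODES[node]
        load[node] += mi / cap          # seconds of work queued on the node
        energy += mi * jpm              # joules, under the assumed linear model
    delay = max(load.values())          # makespan across nodes
    balance = statistics.pstdev(load.values())
    w_d, w_e, w_b = weights
    return w_d * delay + w_e * energy / 1000 + w_b * balance

plan = [(800, "n1"), (1200, "n2"), (600, "n3"), (1500, "n1")]
print(round(evaluate(plan), 3))
# Re-weighting, e.g. weights=(0.1, 0.8, 0.1), biases the search toward energy.
```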
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: The Internet of Things (IoT) is transforming the technical setting of conventional systems and finds applicability in smart cities, smart healthcare, smart industry, etc. However, IoT-enabled models are resource-limited while requiring crisp responses, low latencies, and high bandwidth, which are beyond their own abilities. Cloud computing (CC), an emergent, dispersed, inexpensive computing pattern built on a massive assembly of heterogeneous autonomous systems, is treated as a resource-rich solution to these challenges, but its intrinsic high latency degrades the outcome of IoT-based smart systems. Effective task scheduling minimizes the energy utilization of the cloud infrastructure and raises the income of service providers by minimizing the processing time of user jobs. With this motivation, this paper presents an intelligent Chaotic Artificial Immune Optimization Algorithm for Task Scheduling (CAIOA-RS) in an IoT-enabled cloud environment. The proposed CAIOA-RS algorithm solves the resource allocation issue in the IoT-enabled cloud environment and optimizes makespan by carrying out task scheduling with distinct strategies for incoming tasks. The design of the CAIOA-RS technique incorporates chaotic maps into the conventional AIOA to enhance its performance. A series of experiments was carried out on the CloudSim platform. The simulation results demonstrate that CAIOA-RS outperforms the original AIOA as well as other heuristics and metaheuristics.
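A typical way to fold chaotic maps into a metaheuristic such as AIOA (the abstract does not say which map the paper uses) is to replace uniform random draws with a logistic-map sequence, as in this sketch; the seed, the parameter r = 4, and the mutation rate are assumptions.

```python
def logistic_map(x0=0.7, r=4.0):
    """Chaotic sequence on (0, 1); r = 4 is the fully chaotic regime."""
    x = x0
    while True:
        x = r * x * (1 - x)
        yield x

chaos = logistic_map()

def chaotic_mutation(solution, lo=0.0, hi=1.0, rate=0.2):
    # Replace uniform noise with chaotic draws: the sequence is deterministic
    # but ergodically covers (0, 1), which is the usual motivation for using
    # chaotic maps in metaheuristics.
    out = []
    for gene in solution:
        if next(chaos) < rate:
            out.append(lo + next(chaos) * (hi - lo))
        else:
            out.append(gene)
    return out

print(chaotic_mutation([0.5, 0.5, 0.5, 0.5]))
```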
Abstract: In today's world, Cloud Computing (CC) enables users to access computing resources and services over the cloud without needing to own the infrastructure. Cloud Computing is a concept in which a network of devices, located in remote locations, is integrated to perform operations like data collection, processing, data profiling, and data storage. In this context, resource allocation and task scheduling are important processes that must be managed according to a user's requirements. To allocate resources effectively, a hybrid cloud is employed, since it is a capable solution for processing large-scale consumer applications in a pay-by-use manner. Hence, the model is designed as a profit-driven framework that reduces cost and makespan. With this motivation, the current research work develops a Cost-Effective Optimal Task Scheduling Model (CEOTS), in which a novel algorithm called the Target-based Cost Derivation (TCD) model is used for hybrid clouds. The algorithm works on the basis of a multi-intentional task completion process with optimal resource allocation. The model was successfully simulated to validate its effectiveness in terms of processing time, makespan, and efficient utilization of virtual machines. The results infer that the proposed model outperforms existing works and can be relied upon for future real-time applications.
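The TCD model itself is not specified in the abstract, but the hybrid-cloud cost/makespan trade-off it targets can be illustrated with a toy greedy scheduler that charges only for public-cloud time; the VM speeds, prices, and weighting factor below are invented for the example.

```python
# Hypothetical hybrid cloud: the private VM is free but slow; public VMs
# bill per second of compute. Prices and speeds are not from the paper.
VMS = [
    {"name": "private", "mips": 1000, "price": 0.0},
    {"name": "public-s", "mips": 2000, "price": 0.002},   # $ per second
    {"name": "public-l", "mips": 4000, "price": 0.006},
]

def schedule(tasks_mi, lam=100.0):
    """Greedy placement minimizing completion time + lam * dollar cost."""
    finish = {vm["name"]: 0.0 for vm in VMS}   # when each VM becomes free
    cost = 0.0
    for mi in sorted(tasks_mi, reverse=True):  # longest task first
        best = min(VMS, key=lambda vm: finish[vm["name"]] + mi / vm["mips"]
                   + lam * (mi / vm["mips"]) * vm["price"])
        run = mi / best["mips"]
        finish[best["name"]] += run
        cost += run * best["price"]
    return max(finish.values()), cost

makespan, dollars = schedule([8000, 3000, 5000, 12000, 2000])
print(f"makespan={makespan:.2f}s cost=${dollars:.3f}")
```

Raising lam pushes more work onto the free private VM at the expense of makespan, which is the kind of profit-driven trade-off the CEOTS framework is said to optimize.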
Funding: Supported by the National Natural Science Foundation of China (61073049), the PhD Programs Foundation of the Ministry of Education of China (20093108110016), and the Shanghai Leading Academic Discipline Project (J50103).
Abstract: In this paper, an overall scheme for the task management system of the ternary optical computer (TOC) is proposed, and its software architecture chart is given. The function and implementation of each module in the system are described in general terms. In addition, a prototype of the TOC task management system is implemented according to the proposed scheme, and the feasibility, rationality, and completeness of the scheme are verified by running and testing the prototype.
Funding: Supported by the Key Area Research and Development Program of Guangdong Province (2019B010137005) and the National Natural Science Foundation of China (61906209).
Abstract: MapReduce is a widely used programming model for large-scale data processing. However, it still suffers from the skew problem, in which load is imbalanced among tasks. This problem can cause a small number of tasks to consume much more time than the others, thereby prolonging the total job completion time. Existing solutions commonly predict the loads of tasks and then rebalance the load among them, which often incurs high performance overhead due to the prediction and rebalancing. Moreover, existing solutions target the partitioning skew of reduce tasks but cannot mitigate the computational skew of map tasks. Accordingly, in this paper, we present DynamicAdjust, a run-time dynamic resource adjustment technique for mitigating skew. Rather than rebalancing the load among tasks, DynamicAdjust monitors the run-time execution of tasks and dynamically increases the resources of those tasks that require more computation. In so doing, DynamicAdjust not only eliminates the overhead incurred by load prediction and rebalancing but also mitigates both the partitioning skew and the computational skew. Experiments were conducted on a 21-node real cluster using real-world datasets. The results show that DynamicAdjust can mitigate the negative impact of the skew and shorten the job completion time by up to 40.85%.
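A toy version of the monitor-and-adjust loop is sketched below: tasks whose progress rate falls well behind the average are granted extra cores. The threshold, the vcores knob, and the in-process grant are invented for illustration; the real system would negotiate containers with the cluster resource manager.

```python
from dataclasses import dataclass

@dataclass
class TaskStat:
    task_id: str
    progress: float     # fraction complete, 0..1
    elapsed: float      # seconds since launch
    vcores: int = 1     # currently granted cores (hypothetical knob)

def adjust(tasks, slack=0.7, max_vcores=4):
    """Grant extra cores to tasks progressing much slower than the mean.

    This mirrors run-time adjustment instead of up-front load prediction;
    the 0.7 threshold and the core counts are assumptions.
    """
    rates = [t.progress / t.elapsed for t in tasks if t.elapsed > 0]
    mean_rate = sum(rates) / len(rates)
    for t in tasks:
        rate = t.progress / t.elapsed
        if rate < slack * mean_rate and t.vcores < max_vcores:
            t.vcores += 1   # in a real cluster: request a bigger container
            print(f"{t.task_id}: rate {rate:.4f} below {slack}*mean, "
                  f"vcores -> {t.vcores}")

tasks = [TaskStat("map_0", 0.80, 100), TaskStat("map_1", 0.75, 100),
         TaskStat("map_2", 0.20, 100)]   # map_2 hits a computational skew
adjust(tasks)
```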
Abstract: The rapid growth of service-oriented and cloud computing has created large-scale data centres worldwide. Modern data centres' operating costs mostly come from back-end cloud infrastructure and energy consumption. Cloud computing requires extensive communication resources, and cloud applications need considerable bandwidth to transfer large amounts of data to satisfy end-user requirements. It is also essential that no communication source causes congestion or packet loss owing to unnecessary switching buffers. This paper proposes a novel Energy and Communication (EC) aware scheduling (EC-scheduler) algorithm for green cloud computing, which optimizes data centre energy consumption and traffic load. The primary goal of the proposed EC-scheduler is to assign user applications to cloud data centre resources with minimal utilization of the data centres. We first introduce a Multi-Objective Leader Salp Swarm (MLSS) algorithm for task sorting, which ensures traffic load balancing, and then an Emotional Artificial Neural Network (EANN) for efficient resource allocation. The EC-scheduler assigns cloud user requests to the cloud servers by optimizing both energy and communication delay, which lowers the carbon dioxide emissions of the cloud server system and enables a clean, green environment. We tested the proposed scheme and existing cloud scheduling methods using the GreenCloud simulator to analyze how well they optimize data centre energy and other scheduler metrics. In terms of Power Usage Effectiveness (PUE), Data Centre Energy Productivity (DCEP), Throughput, Average Execution Time (AET), Energy Consumption, and Makespan, the EC-scheduler showed up to 26.738%, 37.59%, 50%, 4.34%, 34.2%, and 33.54% higher efficiency, respectively, than existing state-of-the-art schedulers with respect to the number of user applications and user requests.
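The multi-objective leader variant used by MLSS is not detailed here, but the single-objective salp swarm core it builds on is compact enough to sketch: the leading half of the swarm explores around the best solution found so far while followers chain to midpoints. The sphere objective, bounds, and population size are placeholders for the paper's traffic-balance criterion.

```python
import math, random

def sphere(x):                      # stand-in objective; the paper instead
    return sum(v * v for v in x)    # sorts tasks by a traffic-balance score

DIM, LB, UB, N, ITERS = 4, -10.0, 10.0, 20, 200
swarm = [[random.uniform(LB, UB) for _ in range(DIM)] for _ in range(N)]
food = min(swarm, key=sphere)       # best solution found so far

for t in range(1, ITERS + 1):
    c1 = 2 * math.exp(-(4 * t / ITERS) ** 2)   # standard SSA schedule
    for i in range(N):
        if i < N // 2:              # leaders explore around the food source
            swarm[i] = [min(UB, max(LB,
                food[d] + (1 if random.random() < 0.5 else -1)
                * c1 * (random.random() * (UB - LB) + LB)))
                for d in range(DIM)]
        else:                       # followers move to the midpoint chain
            swarm[i] = [(swarm[i][d] + swarm[i - 1][d]) / 2 for d in range(DIM)]
    best = min(swarm, key=sphere)
    if sphere(best) < sphere(food):
        food = best
print([round(v, 4) for v in food], round(sphere(food), 6))
```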