Funding: supported by the National Natural Science Foundation of China (61202354, 61272422) and the Scientific and Technological Support Project (Industry) of Jiangsu Province (BE2011189).
Abstract: Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and the network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, and the computing power of the VMs is largely ignored. This paper proposes a comprehensive resource allocation policy consisting of a data-intensive task scheduling algorithm that takes the computing power of VMs into account, and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Simulations show that the proposed algorithms can greatly reduce task completion time while maintaining good load balance across physical hosts.
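To make the idea concrete, the following is a minimal sketch of a scheduler in the spirit described above: it estimates each task's completion time on every VM as data-transfer time (using the bandwidth between the storage node and the VM's host) plus compute time (using the VM's computing power), and greedily picks the VM with the earliest estimated finish. It is an illustration under assumed data structures (the names `Vm`, `Task`, `mips`, `bandwidth_to_storage` are hypothetical), not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vm:
    vm_id: int
    mips: float                  # computing power of the VM (million instructions per second)
    bandwidth_to_storage: float  # MB/s between the VM's host and the storage node
    ready_time: float = 0.0      # time at which the VM becomes free

@dataclass
class Task:
    task_id: int
    length: float                # million instructions
    input_size: float            # MB to fetch from storage

def schedule(tasks, vms):
    """Greedily assign each task to the VM with the earliest estimated finish time."""
    plan = []
    # consider the most data-heavy tasks first (a hypothetical ordering choice)
    for task in sorted(tasks, key=lambda t: t.input_size, reverse=True):
        best_vm, best_finish = None, float("inf")
        for vm in vms:
            transfer = task.input_size / vm.bandwidth_to_storage
            compute = task.length / vm.mips
            finish = vm.ready_time + transfer + compute
            if finish < best_finish:
                best_vm, best_finish = vm, finish
        best_vm.ready_time = best_finish
        plan.append((task.task_id, best_vm.vm_id, best_finish))
    return plan

vms = [Vm(0, mips=1000, bandwidth_to_storage=50), Vm(1, mips=2000, bandwidth_to_storage=20)]
tasks = [Task(0, length=4000, input_size=300), Task(1, length=8000, input_size=100)]
for task_id, vm_id, finish in schedule(tasks, vms):
    print(f"task {task_id} -> VM {vm_id}, estimated finish {finish:.1f}s")
```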
Funding: supported by the National Natural Science Foundation of China (61002011), the National High Technology Research and Development Program of China (863 Program) (2013AA013303), the Fundamental Research Funds for the Central Universities (2013RC1104), and the Open Fund of the State Key Laboratory of Software Development Environment (SKLSDE-2009KF-2-08).
Abstract: In cloud data centers, how to map virtual machines (VMs) onto physical machines (PMs) to reduce energy consumption is becoming one of the major issues, and existing VM scheduling schemes mostly reduce energy consumption by optimizing the utilization of physical servers or network elements. However, aggressive consolidation of these resources may lead to network performance degradation. In view of this, this paper proposes a two-stage VM scheduling scheme: (1) a static VM placement scheme that minimizes the number of active PMs and network elements to reduce energy consumption; (2) a dynamic VM migration scheme that, under the premise of minimizing migration costs, minimizes the maximum link utilization to improve network performance. The scheme thus trades off energy efficiency against network performance. We design a new two-stage heuristic algorithm to solve the problem, and simulations show that our solution achieves good results.
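As an illustration of the first (static placement) stage only, the sketch below packs VM CPU demands onto as few PMs as possible with a first-fit-decreasing heuristic; the paper's scheme additionally accounts for network elements and includes the migration stage, both omitted here. The names and the normalized unit capacity are assumptions.

```python
def first_fit_decreasing(vm_demands, pm_capacity):
    """Pack normalized VM CPU demands onto as few PMs as possible (static placement sketch)."""
    residuals = []   # residual capacity of each activated PM
    placement = {}   # vm_id -> PM index
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
        for pm_id, residual in enumerate(residuals):
            if demand <= residual:
                residuals[pm_id] -= demand
                placement[vm_id] = pm_id
                break
        else:
            residuals.append(pm_capacity - demand)   # activate a new PM
            placement[vm_id] = len(residuals) - 1
    return placement, len(residuals)

demands = {"vm1": 0.6, "vm2": 0.3, "vm3": 0.5, "vm4": 0.2}
placement, active_pms = first_fit_decreasing(demands, pm_capacity=1.0)
print(placement, "active PMs:", active_pms)
```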
Abstract: Services provided over the Internet need guaranteed network performance, and efficient packet queuing and scheduling schemes play a key role in achieving this. The Internet Engineering Task Force (IETF) has proposed the Differentiated Services (DiffServ) architecture for IP networks, which is based on classifying packets into different service classes and scheduling them accordingly. Scheduling schemes in today's wireless broadband networks likewise rely on service differentiation. In this paper, we present a novel packet queue scheduling algorithm called dynamically weighted low-complexity fair queuing (DWLC-FQ), which is an improvement over weighted fair queuing (WFQ) and worst-case fair weighted fair queuing+ (WF2Q+). The proposed algorithm incorporates a dynamic weight adjustment mechanism to cope with the dynamics of data traffic, such as bursts and overload. It also reduces the complexity associated with the virtual time update, making it suitable for high-speed networks. Simulation results for the proposed packet scheduling scheme demonstrate improved delay and drop-rate performance for constant bit rate and video applications, with very little or negligible impact on fairness.
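The toy code below shows only the classical virtual-finish-time bookkeeping that WFQ-style schedulers rely on, as background for the improvements claimed above; it does not reproduce DWLC-FQ's dynamic weight adjustment or its low-complexity virtual-time update. All names are hypothetical.

```python
import heapq

class ToyWfq:
    """Simplified WFQ-style selector: serve the packet with the smallest virtual finish tag."""
    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}   # per-flow virtual finish time of the last enqueued packet
        self.heap = []          # (finish_tag, seq, flow, packet_len)
        self.seq = 0            # tie-breaker for equal finish tags

    def enqueue(self, flow, packet_len, weight):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + packet_len / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, packet_len))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, packet_len = heapq.heappop(self.heap)
        self.virtual_time = finish   # crude virtual-time advance; real WFQ tracks the GPS reference
        return flow, packet_len

sched = ToyWfq()
sched.enqueue("video", packet_len=1500, weight=4)   # higher weight -> smaller finish tag
sched.enqueue("cbr", packet_len=500, weight=1)
print(sched.dequeue(), sched.dequeue())
```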
Funding: supported by the National Natural Science Foundation of China (61201153), the National Basic Research Program of China (2012CB315801), the Fundamental Research Funds for the Central Universities (2013RC0118), and the Prospective Research Project on Future Networks of the Jiangsu Future Networks Innovation Institute (BY2013095-2-16).
Abstract: Service-oriented future internet architecture (SOFIA) is a clean-slate network architecture. In SOFIA, a service request is processed mainly through service resolution and network resource allocation. To realize the network resource allocation, we draw on the idea of network virtualization and propose resource scheduling virtualization. In resource scheduling virtualization, a service request is abstracted as a virtual network (VN), and network resources are allocated by mapping the VN onto the physical network. Resource scheduling virtualization provides centralized resource scheduling control within an autonomous system (AS) and achieves better controllability than distributed schemes. It also supports multi-site selection. In addition, we propose a collection of resource scheduling algorithms based on the maximum resource tree (MRT), adapted to different scenarios. According to the simulation results, the proposed algorithms perform well on key metrics such as acceptance ratio, revenue, cost, and utilization. Moreover, the simulation results reveal that our algorithm is more efficient than traditional ones.
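For intuition, here is a small greedy node-mapping sketch in which each virtual node is placed on the physical node with the largest remaining resource (available CPU times adjacent free bandwidth), a common VN-embedding heuristic; the MRT construction and the link-mapping stage of the proposed algorithms are not reproduced. All data structures are assumptions.

```python
def node_resource(node, phys_cpu, phys_links):
    """Available CPU times total free bandwidth on links adjacent to the node."""
    bw = sum(b for (u, v), b in phys_links.items() if node in (u, v))
    return phys_cpu[node] * bw

def map_virtual_nodes(virtual_cpu_demands, phys_cpu, phys_links):
    """Place each virtual node on the feasible physical node with the largest resource."""
    mapping, used = {}, set()
    for vnode, demand in sorted(virtual_cpu_demands.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in phys_cpu if n not in used and phys_cpu[n] >= demand]
        if not candidates:
            return None   # request rejected: no feasible physical node left
        best = max(candidates, key=lambda n: node_resource(n, phys_cpu, phys_links))
        phys_cpu[best] -= demand
        used.add(best)
        mapping[vnode] = best
    return mapping

phys_cpu = {"A": 10, "B": 6, "C": 8}
phys_links = {("A", "B"): 100, ("B", "C"): 50, ("A", "C"): 80}
print(map_virtual_nodes({"v1": 5, "v2": 4}, phys_cpu, phys_links))
```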
Funding: Project (No. 2007AA010305) supported by the National High-Tech R&D Program (863) of China.
Abstract: We design a task mapper, TPCM, for assigning tasks to virtual machines, and an application-aware virtual machine scheduler, TPCS, oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on the mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in user space, the guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress status of tasks is transmitted from user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on task progress. This policy aims to trade task completion time for resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to implement the mapping from tasks to virtual machines based on the relations among subtasks. The TPCS scheduler completes tasks in a shorter time than the Credit and other schedulers, because it uses task progress to ensure that the tasks in different virtual machines complete simultaneously, thereby reducing the time spent waiting, synchronizing, communicating, and switching. Parallel tasks can therefore collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler overcomes the shortcomings of existing algorithms in perceiving task progress, making it better suited for parallel computing than the schedulers currently in use.
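As a rough illustration of the progress-aware idea only (not the TPCS implementation inside the Xen credit scheduler), the sketch below gives VMs whose subtasks lag behind the group a proportionally larger CPU share, so that cooperating subtasks tend to finish together. All names are hypothetical.

```python
def allocate_shares(progress, total_share=100.0):
    """progress: vm_id -> fraction of its subtask completed (0..1).
    VMs with more remaining work receive a proportionally larger CPU share."""
    remaining = {vm: max(1e-6, 1.0 - p) for vm, p in progress.items()}
    total_remaining = sum(remaining.values())
    return {vm: total_share * r / total_remaining for vm, r in remaining.items()}

progress = {"vm0": 0.8, "vm1": 0.4, "vm2": 0.6}
for vm, share in allocate_shares(progress).items():
    print(f"{vm}: {share:.1f}% CPU share")
```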