Abstract: Although the scalability of the Peer to Peer (P2P) architecture is widely acknowledged, how to construct a P2P computing network with good properties in a simple and effective way remains an open problem. This paper proposes a construction method for a self-organizing Peer to Peer overlay computing network, together with a task scheduling algorithm based on that network. Simulation results show that the constructed computing network exhibits clear self-organizing behavior, offers good scalability and self-organization capability, and provides solid support for scheduling computing resources.
Funding: Project supported by the National Natural Science Foundation of China (No. 60703012), the National Basic Research Program (973) of China (No. 2006CB303000), and the Heilongjiang Provincial Scientific and Technological Special Fund for Young Scholars (No. QC06C033), China.
Abstract: A heterogeneous computing (HC) environment utilizes diverse resources with different computational capabilities to solve computing-intensive applications that have diverse computational requirements and constraints. The task assignment problem in an HC environment can be formally stated as follows: given a set of tasks and a set of machines, assign the tasks to the machines so that the makespan is minimized. In this paper we propose a new task scheduling heuristic, high standard deviation first (HSTDF), which uses the standard deviation of a task's expected execution time as the selection criterion. The standard deviation of the expected execution time of a task represents the amount of variation in its execution time across different machines. Our conclusion is that tasks with a high standard deviation should be assigned first. A large number of experiments were carried out to check the effectiveness of the proposed heuristic in different scenarios, and a comparison with existing heuristics (Max-min, Sufferage, Segmented Min-average, Segmented Min-min, and Segmented Max-min) clearly shows that the proposed heuristic outperforms all of them in terms of average makespan.
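The selection rule lends itself to a short illustration. The sketch below is a hypothetical rendering of the HSTDF idea rather than the authors' implementation: it assumes an expected-time-to-compute matrix etc[i][j] for task i on machine j, orders the tasks by decreasing standard deviation of their rows, and then greedily assigns each task to the machine that would finish it earliest (the paper's machine-selection and tie-breaking details may differ); all names are illustrative.

```python
import statistics

def hstdf_schedule(etc):
    """Hypothetical sketch of the HSTDF idea (not the authors' code).

    etc[i][j]: expected execution time of task i on machine j.
    Returns (assignment, makespan), where assignment[i] is the machine
    chosen for task i.
    """
    n_tasks = len(etc)
    n_machines = len(etc[0])
    ready = [0.0] * n_machines          # time at which each machine becomes free
    assignment = [None] * n_tasks

    # Order tasks by decreasing standard deviation of their execution times.
    order = sorted(range(n_tasks),
                   key=lambda i: statistics.pstdev(etc[i]),
                   reverse=True)

    for i in order:
        # Assumed rule: place the task on the machine that completes it earliest.
        j = min(range(n_machines), key=lambda m: ready[m] + etc[i][m])
        ready[j] += etc[i][j]
        assignment[i] = j

    return assignment, max(ready)
```

For example, hstdf_schedule([[4.0, 10.0], [6.0, 3.0], [5.0, 5.0]]) returns an assignment of three tasks to two machines together with the resulting makespan.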
Funding: Supported by the National Basic Research Program of China (No. 2007CB316502) and the National Natural Science Foundation of China (No. 60534060).
Abstract: Task scheduling is one of the core steps in effectively exploiting the capabilities of heterogeneous resources in the grid. This paper presents a new hybrid differential evolution (HDE) algorithm for finding an optimal or near-optimal schedule within a reasonable time. The encoding scheme and the adaptation of the classical differential evolution algorithm for dealing with discrete variables are discussed. A simple but effective local search is incorporated into differential evolution to strengthen exploitation. The performance of the proposed HDE algorithm is demonstrated by comparing it with a genetic algorithm (GA) on a known static benchmark for the problem. Experimental results indicate that the proposed algorithm outperforms GA in terms of both solution quality and computational time, and thus it can be used to design efficient dynamic schedulers in batch mode for real grid systems.
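The abstract does not spell out the encoding, but a common way to adapt real-valued differential evolution to the discrete task-to-machine assignment problem is to evolve one continuous component per task and map it to a machine index when the schedule is evaluated. The sketch below illustrates only that generic idea under this assumption: it omits the paper's local search, and the decoding rule, parameters, and all names are illustrative rather than taken from the HDE algorithm itself.

```python
import random

def decode(vector, n_machines):
    """Map a real-valued DE vector to a task-to-machine assignment."""
    return [int(x) % n_machines for x in vector]

def makespan(assignment, etc):
    """Makespan of an assignment under the ETC matrix etc[i][j]."""
    load = [0.0] * len(etc[0])
    for task, machine in enumerate(assignment):
        load[machine] += etc[task][machine]
    return max(load)

def de_schedule(etc, pop_size=30, generations=200, F=0.5, CR=0.9, seed=0):
    """Minimal DE loop for task scheduling (illustrative, not the paper's HDE)."""
    rng = random.Random(seed)
    n_tasks, n_machines = len(etc), len(etc[0])
    pop = [[rng.uniform(0, n_machines) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    fitness = [makespan(decode(v, n_machines), etc) for v in pop]

    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            trial = []
            for d in range(n_tasks):
                if rng.random() < CR:
                    trial.append(pop[a][d] + F * (pop[b][d] - pop[c][d]))
                else:
                    trial.append(pop[i][d])
            trial = [x % n_machines for x in trial]   # keep components in range
            f = makespan(decode(trial, n_machines), etc)
            if f <= fitness[i]:                       # greedy DE acceptance rule
                pop[i], fitness[i] = trial, f

    best = min(range(pop_size), key=lambda k: fitness[k])
    return decode(pop[best], n_machines), fitness[best]
```

The greedy selection step keeps a trial vector only if it does not worsen the makespan, which is the standard acceptance rule in differential evolution; a hybrid scheme would additionally apply a local search to promising individuals.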
Funding: Partly supported by the Supercomputer Application Project Trail Funding from Wuxi Jiangnan Institute of Computing Technology (BB2340000016), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDC01040100), the National Natural Science Foundation of China (21688102, 21803066), the Anhui Initiative in Quantum Information Technologies (AHY090400), the National Key Research and Development Program of China (2016YFA0200604), the Fundamental Research Funds for the Central Universities (WK2340000091), the Chinese Academy of Sciences Pioneer Hundred Talents Program (KJ2340000031), and the Research Start-Up Grants (KY2340000094) and the Academic Leading Talents Training Program (KY2340000103) from the University of Science and Technology of China.
Abstract: High performance computing (HPC) is a powerful tool for accelerating Kohn–Sham density functional theory (KS-DFT) calculations on modern heterogeneous supercomputers. Here, we describe a massively parallel implementation of the discontinuous Galerkin density functional theory (DGDFT) method on the Sunway TaihuLight supercomputer. The DGDFT method uses adaptive local basis (ALB) functions generated on the fly during the self-consistent field (SCF) iteration to solve the KS equations with precision comparable to that of plane-wave basis sets. In particular, the DGDFT method adopts a two-level parallelization strategy that handles various types of data distribution, task scheduling, and data communication schemes, and combines it with the master–slave multi-thread heterogeneous parallelism of the SW26010 processor, enabling large-scale HPC KS-DFT calculations on the Sunway TaihuLight supercomputer. We show that the DGDFT method can scale up to 8,519,680 processing cores (131,072 core groups) on the Sunway TaihuLight supercomputer when studying the electronic structures of two-dimensional (2D) metallic graphene systems that contain tens of thousands of carbon atoms.
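The two-level strategy itself is specific to DGDFT and the SW26010 core groups, but the general pattern of splitting a global communicator into groups and then parallelizing again inside each group can be sketched generically. The fragment below is a minimal, hypothetical mpi4py illustration of that pattern only, not DGDFT's implementation; the number of groups, the work units, and every name in it are assumptions made for the sake of the example.

```python
from mpi4py import MPI

# Generic sketch of a two-level decomposition via communicator splitting.
# Illustrative only: DGDFT's actual scheme on SW26010 additionally uses
# master-slave multi-threading inside each core group.
# Assumes the job is launched with at least GROUPS MPI ranks.
world = MPI.COMM_WORLD
rank = world.Get_rank()

GROUPS = 4                                   # hypothetical number of core groups
color = rank % GROUPS                        # group that this rank belongs to
group = world.Split(color, key=rank)         # level 2: ranks inside one group

# Level 1: coarse work units (e.g. matrix blocks) are distributed over groups.
work_units = list(range(16))                 # hypothetical coarse work units
group_units = work_units[color::GROUPS]

# Level 2: ranks inside a group share the units assigned to their group.
my_units = group_units[group.Get_rank()::group.Get_size()]
local = sum(u * u for u in my_units)         # placeholder per-unit computation

group_total = group.allreduce(local, op=MPI.SUM)          # reduce within group
contribution = group_total if group.Get_rank() == 0 else 0
grand_total = world.allreduce(contribution, op=MPI.SUM)   # combine across groups

if rank == 0:
    print("grand total:", grand_total)
```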