Funding: Supported in part by the Talent Fund of Beijing Jiaotong University (2023XKRC017), and in part by the Research and Development Project of China State Railway Group Co., Ltd. (P2022Z003).
Abstract: Further improving railway innovation capacity and technological strength is an important goal of the 14th Five-Year Plan for railway scientific and technological innovation. It includes promoting the deep integration of cutting-edge technologies with railway systems, strengthening the research and application of intelligent railway technologies, applying green computing technologies, and advancing the collaborative sharing of transportation big data. High-speed rail systems must process huge amounts of data under heavy workloads while meeting ultra-fast response requirements. It is therefore necessary to improve computational efficiency by applying High Performance Computing (HPC) to high-speed rail systems. HPC techniques are well suited to improving the performance, efficiency, and safety of high-speed rail systems. In this review, we introduce and analyze applied research on high performance computing in the field of high-speed railways. These HPC applications are cataloged into four broad categories: fault diagnosis, network and communication, management systems, and simulation. Moreover, challenges and open issues are discussed and further research directions are suggested.
Funding: Partly supported by the Supercomputer Application Project Trial Funding from Wuxi Jiangnan Institute of Computing Technology (BB2340000016), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDC01040100), the National Natural Science Foundation of China (21688102, 21803066), the Anhui Initiative in Quantum Information Technologies (AHY090400), the National Key Research and Development Program of China (2016YFA0200604), the Fundamental Research Funds for the Central Universities (WK2340000091), the Chinese Academy of Sciences Pioneer Hundred Talents Program (KJ2340000031), the Research Start-Up Grants (KY2340000094), and the Academic Leading Talents Training Program (KY2340000103) from the University of Science and Technology of China.
Abstract: High performance computing (HPC) is a powerful tool for accelerating Kohn–Sham density functional theory (KS-DFT) calculations on modern heterogeneous supercomputers. Here, we describe a massively parallel implementation of the discontinuous Galerkin density functional theory (DGDFT) method on the Sunway TaihuLight supercomputer. The DGDFT method uses adaptive local basis (ALB) functions generated on the fly during the self-consistent field (SCF) iteration to solve the KS equations with precision comparable to plane-wave basis sets. In particular, the DGDFT method adopts a two-level parallelization strategy that handles various data distribution, task scheduling, and data communication schemes, combined with the master–slave multi-thread heterogeneous parallelism of the SW26010 processor, enabling large-scale HPC KS-DFT calculations on Sunway TaihuLight. We show that the DGDFT method can scale up to 8,519,680 processing cores (131,072 core groups) on Sunway TaihuLight when studying the electronic structures of two-dimensional (2D) metallic graphene systems containing tens of thousands of carbon atoms.
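The two-level strategy pairs a coarse decomposition over element partitions with finer parallelism inside each partition. As a hedged illustration only (the group size and names here are invented, and DGDFT's real scheme adds task scheduling and data redistribution on top), a two-level MPI communicator split looks like this:

```c
/* Two-level MPI decomposition sketch: the outer level distributes
 * partitions of the problem, the inner level parallelizes work within
 * each partition. Group size is illustrative, not DGDFT's. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int group_size = 4;          /* ranks per partition (assumed) */
    int color = rank / group_size;     /* outer level: which partition */
    MPI_Comm inner;                    /* inner level: within partition */
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &inner);

    int inner_rank, inner_size;
    MPI_Comm_rank(inner, &inner_rank);
    MPI_Comm_size(inner, &inner_size);
    printf("world %d/%d -> partition %d, local %d/%d\n",
           rank, size, color, inner_rank, inner_size);

    MPI_Comm_free(&inner);
    MPI_Finalize();
    return 0;
}
```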
Funding: Supported by the Tianhe Supercomputer Project (No. 2018YFB0204301), the National Natural Science Foundation of China (No. 61902405), the PDL Research Fund (No. 6142110190404), and the National High-Level Personnel for Defense Technology Program (No. 2017-JCJQ-ZQ-013).
Abstract: Traditional high performance computing (HPC) systems provide a standard preset environment to support scientific computation. However, HPC now needs to support increasingly diverse applications, such as artificial intelligence and big data, and the standard preset environment can no longer meet these requirements. If users still run these emerging applications on HPC systems, they must manually maintain the specific dependencies (libraries, environment variables, and so on) of their applications, which increases the development and deployment burden. Moreover, the multi-user mode brings privacy problems among users. Containers like Docker and Singularity can encapsulate a job's execution environment, but in a highly customized HPC system, cross-environment application deployment with Docker and Singularity is limited, and the introduction of container images imposes a maintenance burden on system administrators. Facing these problems, in this paper we propose a self-deployed execution environment (SDEE) for HPC. SDEE combines the advantages of traditional virtualization and modern containers. It provides an isolated and customizable environment (similar to a virtual machine) in which the user is the root user. The user develops and debugs the application and deploys its special dependencies in this environment, then loads the job onto compute nodes directly through the traditional HPC job management system. The job and its dependencies are analyzed, packaged, deployed, and executed automatically. This enables transparent and rapid job deployment, which not only reduces the burden on users but also protects user privacy. Experiments show that the overhead introduced by SDEE is negligible and lower than those of both Docker and Singularity.
Funding: Supported by the National Key Research and Development Program (2016YFC0105406) and the National Natural Science Foundation of China (11575095, 61571262).
Abstract: Objective: As a discipline with high computation costs, nuclear science and engineering still relies heavily on traditional high performance computing (HPC) clusters. However, the usage of traditional HPC for nuclear science and engineering has been limited by poor flexibility, software compatibility issues, and poor user interfaces. Virtualized HPC (vHPC) can mimic an HPC cluster using a cloud computing platform. In this work, we designed and developed a vHPC system for use in nuclear engineering. Methods: The system is tested by computing the number π via Monte Carlo simulation and by simulating an X-ray digital imaging system. The performance of the vHPC system is compared with that of traditional HPC clusters. Results: As the number of simulated particles increases, the virtual cluster computing time grows proportionally. The X-ray imaging simulation took about 21.1 h on a 12-core virtual server. Experimental results show that the performance of virtual cluster computing is almost the same as that of the actual physical machine. Conclusions: From these tests, it is concluded that vHPC is a good alternative for nuclear engineering workloads. The proposed vHPC makes HPC flexible and easy to deploy.
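The π benchmark is the classic dartboard Monte Carlo method: sample random points in the unit square and count the fraction landing inside the quarter circle. A minimal serial sketch follows; the vHPC test would distribute this sampling loop across virtual cluster nodes:

```c
/* Monte Carlo estimate of pi: the fraction of uniform random points in
 * the unit square that fall inside the quarter circle approaches pi/4. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long n = 10000000;   /* number of simulated samples */
    long hits = 0;
    srand(42);                 /* fixed seed for reproducibility */
    for (long i = 0; i < n; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0) hits++;
    }
    printf("pi ~= %.6f\n", 4.0 * (double)hits / (double)n);
    return 0;
}
```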
Funding: Funded by the Defense Advanced Research Projects Agency (DARPA) Low Temperature Logic Technology (LTLT) program.
Abstract: Low temperature complementary metal oxide semiconductor (CMOS), or cryogenic CMOS, is a promising avenue for the continuation of Moore's law while serving the needs of high performance computing. With temperature as a control "knob" to steepen the subthreshold slope of CMOS devices, the supply voltage can be reduced with no impact on operating speed. With optimal threshold voltage engineering, the device ON current can be further enhanced, translating to higher performance. In this article, experimentally calibrated data were used to tune the threshold voltage and to investigate the power, performance, and area of cryogenic CMOS at the device, circuit, and system levels. We also present results from measurement and analysis of functional memory chips fabricated in 28 nm bulk CMOS and 22 nm fully depleted silicon-on-insulator (FDSOI) processes operating at cryogenic temperature. Finally, the challenges and opportunities in the further development and deployment of such systems are discussed.
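The temperature "knob" works because the MOSFET subthreshold slope is linear in absolute temperature; a math sketch of the standard ideal-limit relation (textbook physics, not data from the article):

```latex
% Subthreshold slope of a MOSFET; n = 1 + C_d/C_{ox} is the body factor.
% The linear dependence on T is what cooling exploits.
SS = n \,\ln(10)\,\frac{k_B T}{q}
\;\;\Rightarrow\;\;
SS \approx 60\ \mathrm{mV/dec}\ \text{at}\ 300\,\mathrm{K},
\qquad
SS \approx 15\ \mathrm{mV/dec}\ \text{at}\ 77\,\mathrm{K}\quad (n \to 1).
```

A steeper (smaller) slope means the transistor turns off harder at a given voltage swing, which is what permits the supply-voltage reduction the abstract describes.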
Funding: Project supported by the Research Fund for the Doctoral Program of Higher Education (No. 20030001112).
Abstract: In previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite high efficiency on a large fraction of finite element analysis benchmark tests, the MFLOPS (million floating-point operations per second) of LDL^T factorization in those benchmarks vary on a Dell Pentium IV 850 MHz machine from 100 to 456, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for LDL^T factorization of the benchmark tests, and therefore speeds up the solution process.
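For reference, the numerical kernel being timed is LDL^T factorization. A plain dense version is sketched below, without the cell-sparse storage or the two-level unrolling that the solver adds on top:

```c
/* Dense LDL^T factorization of a symmetric matrix, for orientation
 * only; the paper's solver applies this kernel to a cell-sparse
 * structure with unrolled loops. A (row-major, n x n) is overwritten:
 * the strict lower triangle holds L, the diagonal holds D. */
#include <stddef.h>

void ldlt(double *A, size_t n) {
    for (size_t j = 0; j < n; j++) {
        /* d_j = a_jj - sum_{k<j} l_jk^2 d_k */
        double d = A[j * n + j];
        for (size_t k = 0; k < j; k++)
            d -= A[j * n + k] * A[j * n + k] * A[k * n + k];
        A[j * n + j] = d;
        /* l_ij = (a_ij - sum_{k<j} l_ik l_jk d_k) / d_j */
        for (size_t i = j + 1; i < n; i++) {
            double s = A[i * n + j];
            for (size_t k = 0; k < j; k++)
                s -= A[i * n + k] * A[j * n + k] * A[k * n + k];
            A[i * n + j] = s / d;
        }
    }
}
```

The "depth of unrolling" the abstract refers to corresponds to how many of these inner-loop updates are fused per iteration when consecutive equations share the same sparsity structure.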
Funding: Supported in part by the National 863 Program (2009AA01Z256, 2009AA01A345), the National 973 Program (2007CB310705), and the NSFC (60932004), P.R. China.
Abstract: This paper analyzes the physical potential, computing performance benefit, and power consumption of optical interconnects. Compared with electrical interconnections, optical ones show clear advantages based on physical-factor analysis. At the same time, since recent developments raise the question of whether optical interconnect technologies with higher bandwidth but higher cost are worth deploying, a computing performance comparison is performed. To meet the increasing demand of large-scale parallel and multi-processor computing tasks, an analytic method to evaluate the parallel computing performance of interconnect systems is proposed. Both a bandwidth-limited model and a full-bandwidth model are investigated. Speedup and efficiency are selected to represent the parallel performance of an interconnect system. Deploying the proposed models, we depict the performance gap between optically and electrically interconnected systems. A further investigation of the power consumption of commercial products shows that deploying parallel interconnections reduces the unit power consumption. Therefore, from the analysis of computing performance and power dissipation, we find that parallel optical interconnect offers a valuable combination of high performance and low energy consumption. For data centers under construction, huge amounts of power could be saved if parallel optical interconnect technologies are used.
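The two chosen metrics are standard; written out for p processors (a generic form for orientation, with the bandwidth-limited model entering through the communication term, not the paper's exact expressions):

```latex
% Speedup S and efficiency E for p processors; T_1 is serial time,
% T_p parallel time. In a bandwidth-limited model the communication
% term grows as message volume V over link bandwidth B, capping S(p).
S(p) = \frac{T_1}{T_p}, \qquad
E(p) = \frac{S(p)}{p}, \qquad
T_p \approx \frac{T_1}{p} + T_{\mathrm{comm}},\quad
T_{\mathrm{comm}} \sim \frac{V}{B}.
```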
Funding: Project supported by the National Natural Science Foundation of China (Nos. 10232040, 10572002 and 10572003).
Abstract: A new direct method for solving unsymmetrical sparse linear systems (USLS) arising from meshless methods is introduced. Certain meshless methods, such as the meshless local Petrov-Galerkin (MLPG) method, need to solve large USLS. The proposed solution method for the unsymmetrical case performs the factorization symmetrically on the upper and lower triangular portions of the matrix, which differs from previous work based on a general unsymmetrical process and attains higher performance. It is shown that the solution algorithm for USLS can be derived simply from existing approaches for the symmetrical case. The new matrix factorization algorithm can be implemented easily by modifying a standard JKI symmetrical matrix factorization code. Multi-blocked out-of-core strategies were also developed to expand the solution scale. The approach convincingly increases the speed of the solution process, as demonstrated by the numerical tests.
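The core idea, computing the lower (L) and upper (U) factors in one symmetric sweep over the two triangles, can be shown on a dense LDU factorization. This is a hedged dense sketch of that symmetric treatment, not the paper's sparse JKI out-of-core code:

```c
/* Dense LDU factorization treating the lower and upper triangles
 * symmetrically, as for matrices with a symmetric nonzero pattern.
 * A (row-major, n x n) is overwritten: strict lower triangle = L,
 * diagonal = D, strict upper triangle = U (both L and U unit-diagonal). */
#include <stddef.h>

void ldu(double *A, size_t n) {
    for (size_t j = 0; j < n; j++) {
        /* d_j = a_jj - sum_{k<j} l_jk d_k u_kj */
        double d = A[j * n + j];
        for (size_t k = 0; k < j; k++)
            d -= A[j * n + k] * A[k * n + k] * A[k * n + j];
        A[j * n + j] = d;
        /* one symmetric sweep: the L entry and its mirrored U entry */
        for (size_t i = j + 1; i < n; i++) {
            double l = A[i * n + j], u = A[j * n + i];
            for (size_t k = 0; k < j; k++) {
                l -= A[i * n + k] * A[k * n + k] * A[k * n + j];
                u -= A[j * n + k] * A[k * n + k] * A[k * n + i];
            }
            A[i * n + j] = l / d;   /* L(i,j) */
            A[j * n + i] = u / d;   /* U(j,i) */
        }
    }
}
```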
Abstract: The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This novel technique can tackle high performance computation systems and manages the usage of all computing platforms at once. Federated learning is a collaborative machine learning approach that requires no centralized training data. The proposed system effectively detects intrusion attacks without human intervention and subsequently detects anomalous deviations in device communication behavior, potentially caused by malicious adversaries, and it copes with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect its pattern. Federated learning greatly helps the machine learn the type of attack from each device, and this technique paves the way toward covering all malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%.
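The aggregation step at the centralized server can be as simple as averaging locally trained model weights, never the raw device data. A FedAvg-style sketch, under the assumption of equal client weighting (the abstract does not specify its aggregation rule):

```c
/* Federated-averaging sketch: the server combines the weight vectors
 * of locally trained models; raw device data never leaves the devices.
 * Equal client weighting is assumed here for simplicity. */
#include <stddef.h>

void fed_average(double *global, const double *const *client,
                 size_t nclients, size_t nweights) {
    for (size_t w = 0; w < nweights; w++) {
        double sum = 0.0;
        for (size_t c = 0; c < nclients; c++)
            sum += client[c][w];          /* weight w from client c */
        global[w] = sum / (double)nclients; /* new global model weight */
    }
}
```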
Abstract: In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, ushered in a new era of efficiency and speed. This trend was particularly noteworthy in the field of image processing, which witnessed significant advancements. This parallel computing project explored the field of parallel image processing, with a focus on the grayscale conversion of color images. Our approach involved integrating OpenMP into our framework for parallelization to execute a critical image processing task: grayscale conversion. By using OpenMP, we enhanced the overall performance of the conversion process by distributing the workload across multiple threads. The primary objectives of our project were optimizing computation time and improving overall efficiency in this task. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of work among the cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscored the importance of a carefully optimized parallelization strategy, considering factors like load balancing and minimizing communication overhead. Despite these challenges, the overall scalability and efficiency achieved with parallel image processing underscored OpenMP's effectiveness in accelerating image manipulation tasks.
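A grayscale conversion parallelized with OpenMP needs little more than a parallel-for over pixels. A minimal sketch using the common luminance weights (the project's actual framework and image I/O are not shown):

```c
/* OpenMP grayscale conversion: the pixel loop is split statically
 * across threads; each pixel is independent, so no synchronization
 * is needed. rgb holds 3 bytes per pixel (R, G, B). */
#include <stdint.h>
#include <stddef.h>
#include <omp.h>

void rgb_to_gray(const uint8_t *rgb, uint8_t *gray, size_t npixels) {
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < npixels; i++) {
        const uint8_t *p = rgb + 3 * i;
        /* ITU-R BT.601 luminance weights */
        gray[i] = (uint8_t)(0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]);
    }
}
```

Because each output pixel depends only on one input pixel, the loop is embarrassingly parallel; the efficiency decline at high core counts the abstract mentions would come from memory bandwidth and thread-management overhead rather than from data dependencies.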
Abstract: The high-performance computing paradigm needs high-speed switching fabrics to carry the heavy traffic generated by its applications, and these switching fabrics are driven by the deployed scheduling algorithms. In this paper, we propose two scheduling algorithms for input-queued switches whose operation is based on ranking procedures. First, we propose a Simple 2-Bit (S2B) scheme, which uses a binary ranking procedure and queue size for scheduling packets. Here, the Virtual Output Queue (VOQ) set with the maximum number of empty queues receives a higher rank than the other VOQ sets. Through simulation, we show that S2B has better throughput performance than Highest Ranking First (HRF) arbitration under uniform and non-uniform traffic patterns. To further improve the throughput-delay performance, an Enhanced 2-Bit (E2B) approach is proposed. This approach adopts an integer representation for the rank, namely the number of empty queues in a VOQ set. The simulation results show that E2B outperforms the S2B and HRF scheduling algorithms, achieving the best throughput-delay performance. Furthermore, the algorithms are simulated under hotspot traffic, and E2B proves to be more efficient.
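A sketch of the E2B ranking rule as described: the rank of a VOQ set is its number of empty queues, and the set with the most empty queues wins arbitration. The port count and queue layout here are illustrative:

```c
/* E2B-style ranking sketch: an integer rank (number of empty queues)
 * per VOQ set, unlike S2B's binary rank; the highest-ranked input
 * wins arbitration. Fixed 4-queue sets are assumed for illustration. */
#include <stddef.h>

#define NQUEUES 4

/* rank of one VOQ set = how many of its queues are empty */
static size_t e2b_rank(const int voq_len[NQUEUES]) {
    size_t empty = 0;
    for (size_t q = 0; q < NQUEUES; q++)
        if (voq_len[q] == 0) empty++;
    return empty;
}

/* pick the input port whose VOQ set has the highest rank */
size_t e2b_pick(const int voq_len[][NQUEUES], size_t nports) {
    size_t best = 0;
    size_t best_rank = e2b_rank(voq_len[0]);
    for (size_t p = 1; p < nports; p++) {
        size_t r = e2b_rank(voq_len[p]);
        if (r > best_rank) { best = p; best_rank = r; }
    }
    return best;
}
```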
Abstract: This paper proposes an algorithm for increasing virtual machine security in cloud computing. Imbalance between load and energy has been a disadvantage of older approaches to server provisioning and hosting: if two virtual servers are active on one host and the energy load on that host grows, it appropriates the energy of other (virtual) hosts to stay stable, which usually leads to hardware overflow errors and user dissatisfaction. Cloud processing methods mitigate this problem, but not perfectly. We therefore provide an algorithm that not only establishes a suitable security foundation but also divides energy consumption and load appropriately among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. Comparisons show that the proposed method offers high computing performance and efficiency and consumes less energy in the network.
Abstract: Today, PC-class machines are quite popular in the HPC area, especially for problems that require good cost/performance ratios. One drawback of these machines is their poor memory throughput, and one of the reasons for this is the limited mapping capability of the TLB, a buffer that accelerates virtual memory access. In this report, I show that the mapping capability and the performance can be improved with the multi-granularity TLB feature that some processors provide. I also show that the new TLB handling routine can be incorporated into the demand paging system of Linux.
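The multi-granularity idea survives in today's Linux as huge pages: one 2 MiB TLB entry covers what would otherwise take 512 4 KiB entries, extending TLB reach. A userspace illustration (the report's kernel-level demand-paging changes are not shown):

```c
/* Request one explicit 2 MiB huge page on Linux; a single TLB entry
 * then maps the whole region. Requires hugepages to be configured
 * (e.g. vm.nr_hugepages > 0); falls back to normal 4 KiB pages. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 2 * 1024 * 1024;   /* one 2 MiB huge page */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");   /* fall back to 4 KiB pages */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
    }
    /* touch the region: with the huge-page mapping, far fewer TLB
     * misses occur than with 512 separate 4 KiB pages */
    for (size_t i = 0; i < len; i += 4096)
        ((volatile char *)p)[i] = 1;
    munmap(p, len);
    return 0;
}
```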
基金"This paper is an extended version of "SpotMPl: a framework for auction-based HPC computing using amazon spot instances" published in the International Symposium on Advances of Distributed Computing and Networking (ADCN 2011).Acknowledgment This research is supported in part by the National Science Foundation grant CNS 0958854 and educational resource grants from Amazon.com.
Abstract: Cloud computing is expanding widely in the world of IT infrastructure, due partly to the cost-saving effect of economies of scale. Fair market conditions can in theory provide a healthy environment to reflect the most reasonable costs of computations. While fixed cloud pricing provides an attractive low entry barrier for compute-intensive applications, both the consumer and the supplier of computing resources can see higher efficiency for their investments by participating in auction-based exchanges. There are huge incentives for cloud providers to offer auctioned resources; from the consumer perspective, however, using these resources is a sparsely discussed challenge. This paper reports a methodology and framework designed to address the challenges of running HPC (High Performance Computing) applications on auction-based cloud clusters. The authors focus on HPC applications and describe a method for determining bid-aware checkpointing intervals. They extend a theoretical model for determining checkpoint intervals using statistical analysis of pricing histories. The latest developments in the SpotHPC framework, which aims at facilitating the managed execution of real MPI applications in auction-based cloud environments, are also introduced. The authors use their model to simulate a set of algorithms with different computing and communication densities. The results show the complex interactions between optimal bidding strategies and parallel application performance.
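The theoretical model the authors extend belongs to the checkpoint-interval family; the classic first-order baseline is Young's formula, shown here for orientation (the paper's bid-aware statistical extension is not reproduced):

```latex
% Young's first-order optimal checkpoint interval: delta is the cost
% of writing one checkpoint, M the mean time between failures (in the
% auction setting, between out-bid evictions estimated from the
% pricing history).
\tau_{\mathrm{opt}} \approx \sqrt{2\,\delta\,M}
```

Intuitively, cheap checkpoints or frequent evictions (small M) push the optimal interval down; a bid-aware scheme replaces the fixed M with a failure rate inferred from spot-price statistics.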
Abstract: In this paper, a brief survey of smart city projects in Europe is presented. This survey shows the extent of transport and logistics in smart cities. We concentrate on a smart city project we have been working on, A Logistic Mobile Application (ALMA). The application is based on the Internet of Things and combines a communication infrastructure with a High Performance Computing infrastructure in order to deliver mobile logistic services with high quality of service and adaptation to the dynamic nature of logistic operations.
Funding: Project supported by the National Key Research and Development Program of China (No. 2016YFB0200402).
Abstract: With supercomputers developing towards exascale, the number of compute cores increases dramatically, making more complex and larger-scale applications possible. The input/output (I/O) requirements of large-scale applications, workflow applications, and their checkpointing include substantial bandwidth and extremely low latency, posing a serious challenge to high performance computing (HPC) storage systems. Current hard disk drive (HDD) based underlying storage systems are becoming increasingly incapable of meeting the requirements of next-generation exascale supercomputers. To rise to the challenge, we propose a hierarchical hybrid storage system, the on-line and near-line file system (ONFS). It leverages dynamic random access memory (DRAM) and solid state drives (SSD) in compute nodes, and HDD in storage servers, to build a three-level storage system in a unified namespace. It supports portable operating system interface (POSIX) semantics and provides high bandwidth, low latency, and huge storage capacity. In this paper, we present the technical details of distributed metadata management, the memory borrow and return strategy, data consistency, parallel access control, and the mechanisms guiding downward and upward migration in ONFS. We implement an ONFS prototype on the TH-1A supercomputer and conduct experiments to test its I/O performance and scalability. The results show that the single-thread and multi-thread 'read'/'write' bandwidths are 6-fold and 5-fold better than those of HDD-based Lustre, respectively. The I/O bandwidth of data-intensive applications in ONFS can be 6.35 times that in Lustre.
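To make the three-level idea concrete, a deliberately hypothetical tier-selection sketch follows. The thresholds and the "heat" metric are invented for illustration; ONFS's real policies (memory borrow/return, downward and upward migration) are far more involved:

```c
/* Hypothetical placement policy for a DRAM/SSD/HDD hierarchy like
 * ONFS's three levels: hot, small data sits in DRAM, warm data on
 * SSD, cold data on HDD. All thresholds are illustrative only. */
typedef enum { TIER_DRAM, TIER_SSD, TIER_HDD } tier_t;

tier_t pick_tier(double heat /* 0..1 access frequency score */,
                 long bytes  /* object size */) {
    if (heat > 0.8 && bytes < (64L << 20))  /* hot and under 64 MiB */
        return TIER_DRAM;
    if (heat > 0.2)                          /* warm */
        return TIER_SSD;
    return TIER_HDD;                         /* cold, capacity tier */
}
```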
Abstract: A new approach to the implementation of variogram models and ordinary kriging using the R statistical language, in conjunction with Fortran, MPI (the Message Passing Interface), and the "pbdDMAT" package within R, on the Bridges and Stampede supercomputers is described. This new technique has led to great improvements in timing compared to R alone, or to R with C and MPI. These improvements include processing and forecasting vectors of size 25,000 in an average time of 6 minutes on the Stampede supercomputer and 2.5 minutes on the Bridges supercomputer, compared with previous processing times of 3.5 hours.
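Variogram modeling starts from the empirical semivariogram; the standard Matheron estimator that such an implementation evaluates over point pairs is:

```latex
% Empirical (Matheron) semivariogram: N(h) is the set of point pairs
% separated by lag h, and z(s_i) the observed value at location s_i.
% Fitted variogram models feed the ordinary-kriging weights.
\hat{\gamma}(h) = \frac{1}{2\,\lvert N(h)\rvert}
\sum_{(i,j)\in N(h)} \bigl(z(s_i) - z(s_j)\bigr)^2
```

The pairwise sum is what makes the computation quadratic in the number of observations, and hence a natural target for the distributed matrices that pbdDMAT provides.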
Abstract: A multifrontal code is introduced for the efficient solution of the linear systems of equations arising from the analysis of structures. The factorization phase is reduced to a series of interleaved element assembly and dense matrix operations for which the BLAS3 kernels are used. A similar approach is generalized to the forward and back substitution phases for the efficient solution of structures having multiple load conditions. The program performs all assembly and solution steps in parallel. Examples are presented which demonstrate the code's performance on single and dual core processor computers.
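Reducing the factorization to dense matrix operations means the hot loop is a GEMM on frontal matrices. A sketch of that BLAS3 call through the CBLAS interface (the header and call exist in Netlib/OpenBLAS; the frontal-matrix bookkeeping around it is omitted):

```c
/* The BLAS3 kernel at the heart of a multifrontal factorization:
 * the Schur-complement update C := C - A * B on a frontal matrix,
 * issued as one dense GEMM so the BLAS library can exploit cache
 * blocking and multiple cores. */
#include <cblas.h>

void frontal_update(int m, int n, int k,
                    const double *A,  /* m x k, row-major */
                    const double *B,  /* k x n, row-major */
                    double *C) {      /* m x n, row-major */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                -1.0, A, k,   /* alpha = -1: subtract the product */
                       B, n,
                 1.0, C, n);  /* beta = 1: accumulate into C */
}
```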
Funding: Developed in the framework of the associated team PhyLeas (Study of parallel hybrid sparse linear solvers) funded by INRIA, with partners INRIA, T.U. Brunswick, and the University of Minnesota; supported by the US Department of Energy under grant DE-FG-08ER25841 and by the Minnesota Supercomputer Institute.
Abstract: In this paper we study the computational performance of variants of an algebraic additive Schwarz preconditioner for the Schur complement in the solution of large sparse linear systems. In earlier works, the local Schur complements were computed exactly using a sparse direct solver. The robustness of the preconditioner comes at the price of this memory- and time-intensive computation, which is the main bottleneck of the approach when tackling huge problems. In this work we investigate the use of sparse approximations of the dense local Schur complements. These approximations are computed using a partial incomplete LU factorization; such a numerical calculation is at the core of multi-level incomplete factorizations such as the one implemented in pARMS. The numerical and computing performance of the new scheme is illustrated on a set of large 3D convection-diffusion problems; preliminary experiments on linear systems arising from structural mechanics are also reported.
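The object being preconditioned is the interface Schur complement obtained by eliminating the interior unknowns of each subdomain; in block form:

```latex
% Partition the unknowns into interior (I) and interface (Gamma) sets.
% Block elimination of the interior unknowns leaves the Schur
% complement system on the interface; the additive Schwarz
% preconditioner assembles (approximate) local contributions S_i,
% here replaced by sparse ILU-based approximations.
A = \begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix},
\qquad
S = A_{\Gamma\Gamma} - A_{\Gamma I}\, A_{II}^{-1}\, A_{I\Gamma}.
```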