Further improving railway innovation capacity and technological strength is an important goal of the 14th Five-Year Plan for railway scientific and technological innovation. This includes promoting the deep integration of cutting-edge technologies with railway systems, strengthening the research and application of intelligent railway technologies, applying green computing technologies, and advancing the collaborative sharing of transportation big data. High-speed rail systems must process huge volumes of data under heavy workloads with ultra-fast response requirements, so applying High Performance Computing (HPC) to high-speed rail systems is of great necessity for improving computational efficiency. HPC techniques offer a strong solution for improving the performance, efficiency, and safety of high-speed rail systems. In this review, we introduce and analyze research on applying high performance computing in the field of high-speed railways. These HPC applications are grouped into four broad categories: fault diagnosis, network and communication, management systems, and simulation. Challenges and open issues are discussed, and further research directions are suggested.
As an important branch of information technology, high-performance computing has steadily expanded both its application fields and its influence, and meteorology has always been one of its key application areas. Using field research and a literature review, we studied the application of high performance computing in China's meteorological department and obtained the following results: 1) The Chinese meteorological department has gradually built up high-performance computer systems since its first installation in 1978; high-performance computing services now support operational numerical weather prediction models. 2) The department has consistently adopted relatively advanced high-performance computing technology, and its operational system capability has improved continuously; computing power has become an important symbol of the level of meteorological modernization. 3) High-performance computing technology and numerical weather prediction applications are increasingly integrated and continue to innovate and develop. 4) In the future, high-performance computing resource management will gradually transition from the current local pre-allocation mode to unified local and remote scheduling and shared use. In summary, we conclude that the high-performance computing operations of the meteorological department have a promising future.
This work began with an in-depth feasibility study and limitation analysis of current models for estimating disease spread and evaluating countermeasures, from which we identified population variability as a crucial factor that has usually been ignored or under-emphasized. Taking HIV/AIDS as the application and validation background, we propose a novel algorithmic model system, the EEA model system, as a new way to estimate the spread situation, evaluate different countermeasures, and analyze the development of ARV-resistant disease strains. The model is a series of solvable ordinary differential equation (ODE) models that estimate the spread of HIV/AIDS infections; it requires only one year's data to deduce the situation in any year, and it applies a piecewise-constant method to employ multi-year information at the same time. We simulate the effects of therapy and vaccination, evaluate the difference between them, and derive the smallest proportion of the population that must be vaccinated to defeat HIV/AIDS, highlighting in particular the advantage of vaccination and the deficiency of using therapy alone. We then analyze the development of ARV-resistant strains with the piecewise-constant method. Finally, a high performance computing (HPC) platform is applied to simulate the situation over large areas divided into grids, achieving an acceleration rate of around 4 to 5.5.
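The EEA equations themselves are not reproduced in the abstract, so the following is only a toy sketch of the piecewise-constant idea it describes: a minimal SI-type ODE integrated with forward Euler, where the infection rate is held constant within each year but may change between years. All names and numbers here are illustrative assumptions, not the paper's model.

```python
def si_piecewise(s0, i0, betas, steps_per_year=365):
    """Toy SI model dS/dt = -beta*S*I, dI/dt = beta*S*I, integrated with
    forward Euler under a piecewise-constant beta (one value per year)."""
    s, i = s0, i0
    dt = 1.0 / steps_per_year
    for beta in betas:              # beta held constant within each year
        for _ in range(steps_per_year):
            new_inf = beta * s * i * dt
            s -= new_inf
            i += new_inf
    return s, i

# three years with a declining infection rate (hypothetical values)
s, i = si_piecewise(0.99, 0.01, betas=[0.5, 0.3, 0.1])
print(s, i)   # total population stays ~1.0; infections grow over time
```

The same structure generalizes to the multi-compartment ODE systems used for HIV/AIDS: only the per-year parameter values change between segments, which is what lets one year's data anchor the whole trajectory.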
The meteorological high-performance computing resource is the support platform on which numerical models for weather forecasting and climate prediction run. A scientific and objective method to evaluate the use of meteorological high-performance computing resources can provide a reference for optimizing existing resources as well as a quantitative basis for future resource construction and planning. In this paper, the concepts of the utility value B and the index compliance rate E of a meteorological high performance computing system are presented, and the evaluation process, evaluation indices, and calculation method for the benefits of high performance computing resource use are introduced.
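The abstract does not define B or E, so any formula here is an assumption. One plausible reading of an "index compliance rate" is the share of monitored evaluation indices that meet their targets, which might be sketched as:

```python
def compliance_rate(measured, targets):
    """Hypothetical illustration only (the paper's definition of E is not
    given in the abstract): the fraction of evaluation indices whose
    measured value meets or beats its target, assuming higher is better."""
    met = sum(1 for m, t in zip(measured, targets) if m >= t)
    return met / len(targets)

# e.g. CPU utilisation, jobs/day throughput, availability vs. their targets
print(compliance_rate([0.72, 950, 0.999], [0.70, 1000, 0.995]))  # 2 of 3 met
```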
The influences of the water/cement ratio and admixtures on the carbonation resistance of sulphoaluminate cement-based high performance concrete (HPC) were investigated. The experimental results show that the carbonation depth of sulphoaluminate cement-based HPC decreases remarkably as the water/cement ratio decreases, and that carbonation resistance also improves when admixtures are added. The morphologies and structural characteristics of the sulphoaluminate cement hydration products before and after carbonation were analyzed using SEM and XRD. The analysis reveals that ettringite (AFt), the main hydration product of sulphoaluminate cement, decomposes after carbonation.
The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This approach can tackle high performance computation tasks while managing the use of all computing platforms at once. Federated learning is a collaborative machine learning approach that requires no centralized training data. The proposed system detects intrusion attacks without human intervention, subsequently detects anomalous deviations in device communication behavior potentially caused by malicious adversaries, and can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. Moreover, the updated system model is sent to the centralized server in the jungle computing environment to detect attack patterns. Federated learning lets each machine learn the type of attack seen at each device, paving the way toward full coverage of malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low False Positive Rate (FPR), and that is scalable and versatile for the jungle computing environment. The execution time for one round is less than two seconds, with an accuracy rate of 96%.
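The paper's exact aggregation protocol is not given in the abstract; the sketch below shows only the standard FedAvg-style step that such systems typically rely on, in which the central server averages client model parameters weighted by local sample counts. The arrays and counts are made-up illustrations.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: a weighted average of client model
    parameter vectors, weighted by each client's local sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three edge devices report locally trained parameter vectors
w = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
n = [100, 100, 200]
print(fed_avg(w, n))   # weighted mean of the parameter vectors
```

In the intrusion-detection setting, each device would train on its own traffic, send only these parameters (never raw data) to the server, and receive the aggregated model back.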
A new direct method for solving unsymmetrical sparse linear systems (USLS) arising from meshless methods is introduced. Certain meshless methods, such as the meshless local Petrov-Galerkin (MLPG) method, require the solution of large USLS. The proposed method performs the factorization symmetrically on the upper and lower triangular portions of the matrix, which differs from previous work based on a general unsymmetrical process and attains higher performance. It is shown that the solution algorithm for USLS can be derived simply from existing approaches for the symmetrical case, and the new matrix factorization algorithm can be implemented easily by modifying a standard JKI symmetrical matrix factorization code. Multi-blocked out-of-core strategies were also developed to expand the solvable problem size. The approach convincingly increases the speed of the solution process, as demonstrated by numerical tests.
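The symmetric-pattern factorization itself is not published in the abstract, so the following is merely a reference point: a baseline direct solve of a small unsymmetrical sparse system with SciPy's SuperLU wrapper, representative of the general unsymmetrical process the paper improves upon.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A tiny unsymmetrical sparse system, standing in for the large USLS
# produced by an MLPG discretisation (reference direct solve only;
# the paper's symmetric-pattern factorisation is not reproduced here).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 1.0, 3.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)          # sparse LU factorisation (SuperLU)
x = lu.solve(b)
print(np.allclose(A @ x, b))   # residual check
```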
In previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although that solver reaches quite a high efficiency on a large share of finite element analysis benchmark tests, the MFLOPS (million floating-point operations per second) of its LDL^T factorization vary on a Dell Pentium IV 850 MHz machine from 100 to 456, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver delivers higher MFLOPS for the LDL^T factorization of the benchmark tests and therefore speeds up the solution process.
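As a reminder of the factorization being accelerated, here is a plain, unblocked LDL^T in NumPy, with no cell-sparse storage or unrolling; it is a didactic sketch, not the paper's solver.

```python
import numpy as np

def ldlt(A):
    """Plain (unblocked) LDL^T factorisation of a symmetric matrix:
    A = L D L^T with unit-lower-triangular L and diagonal D. The paper's
    solver specialises this same factorisation for sparse FE matrices
    using cell-sparse storage and two-level loop unrolling."""
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D

A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
L, D = ldlt(A)
print(np.allclose(L @ np.diag(D) @ L.T, A))   # reconstruction check
```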
Factors that affect concrete creep include mixture composition, curing conditions, ambient exposure conditions, and element geometry. To account for the influence of the concrete mixture and to improve the prediction of prestress loss in important structures, an experimental test under laboratory conditions was carried out to investigate the compression creep of two high performance concrete mixtures used for prestressed members in one bridge. Based on the experimental results, a power exponent function of creep degree was used to model the creep degree of the two HPCs for structural numerical analysis, and two sets of parameters of this function were calculated with an evolution-program optimization method. The experimental data were compared with the CEB-FIP 90 and ACI 209(92) models; both code models overestimated the creep degrees of the two HPCs. It is therefore recommended that the power exponent function be used in the structural analysis of this bridge.
To investigate the compression creep of two kinds of high-performance concrete mixtures used for prestressed members in a bridge, an experimental test under laboratory conditions was carried out. Based on the experimental results, a power exponent function was used to model the creep degree of these high-performance concretes (HPCs) for structural numerical analysis, and two sets of parameters of this function were obtained with an evolution-program optimization method. The experimental data were compared with the CEB-FIP 90 and ACI 92 models. The results show that both code models overestimate the creep degree of the two HPCs, so it is recommended that the power exponent function be used for the creep analysis of bridge structures.
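Neither abstract states the exact power exponent function or the measured data, so the following sketch assumes both: it fits a generic power law C(t) = a·t^b to synthetic creep-degree points with SciPy's differential evolution, an evolutionary optimizer in the spirit of the evolution program the papers mention.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical sketch: function form and data are assumed, not the papers'.
t = np.array([1.0, 7.0, 28.0, 90.0, 180.0])   # loading durations, days
c_meas = 12.0 * t ** 0.30                     # synthetic "creep degree" data

def sse(params):
    """Sum of squared errors between the power law a*t**b and the data."""
    a, b = params
    return float(np.sum((a * t ** b - c_meas) ** 2))

res = differential_evolution(sse, bounds=[(0.0, 50.0), (0.0, 1.0)], seed=1)
a_fit, b_fit = res.x
print(a_fit, b_fit)   # recovers roughly a ~ 12, b ~ 0.30 on this clean data
```

With real laboratory data the same loop yields one (a, b) pair per HPC mixture, ready to feed into the structural creep analysis.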
This research evaluated the suitability of stone dust in the design and production of High Performance Concrete (HPC). An HPC mix was designed, tested, and costed, and a comparison with the concrete classes used in the market (Class 25, 30 and 35) was done using Cost Benefit Analysis (CBA). The cost benefit was analyzed using the Internal Rate of Return (IRR) and Net Present Value (NPV). Laboratory tests established the properties of the concrete obtained from the design mix; compressive strength, slump, and modulus of elasticity were tested and analyzed. Structural analysis to BS 8110 was done for a 10-storey office building to establish the structural member sizes. Member sizes obtained from concrete Classes 25, 30 and 35 and from the new HPC compressive strength (Class 80) were compared. The analysis covered member sizes, the floor area freed by designing with HPC, and the steel reinforcement used. To justify the initial cost of HPC if adopted, CBA was used to weigh the increased costs against the income from the additional lettable space created. The minimum class of concrete used in design was limited to Class 25 (25 N/mm²). The research shows that it is possible to manufacture high strength concrete using locally available stone dust: stone dust sampled from the Mlolongo quarries achieved a characteristic strength of 86.7 N/mm² at a water/cement ratio of 0.32. With these results, a 10-storey office structure with columns spaced at 8 meters center to center was designed using the four classes and the results compared. Column widths reduced from 1.2 m to 0.65 m (over 45%) when the concrete class changed from Class 25 to Class 80, freeing over 3% of the total floor area per storey. Cost benefit analysis using NPV and IRR presented the business case for HPC: with Class 80, the IRR was 3% and the NPV was 8% of the total initial investment. The steel reinforcement increased by 8.64% for Class 30 and by 11.68% for Class 35, and reduced by 8.37% for Class 80. Further analysis is needed to understand the trend of steel reinforcement when all member sizes are kept the same; in this study the member sizes were optimized based on steel reinforcement and serviceability. This paper provides useful information to design engineers and architects and informs the future design of multi-storey structures.
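The NPV and IRR measures used above follow standard definitions and can be computed directly; the sketch below uses illustrative cash flows, not the paper's data.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is at time 0 (the initial
    investment, entered as a negative number)."""
    return sum(cf / (1 + rate) ** k for k, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the rate at which NPV = 0,
    assuming NPV decreases monotonically in the rate (one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative figures only: invest 1000 now, receive 400/year for 4 years.
flows = [-1000, 400, 400, 400, 400]
print(round(npv(0.10, flows), 2))   # NPV at a 10% discount rate
print(round(irr(flows), 4))         # break-even discount rate
```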
This paper analyzes the physical potential, computing performance benefit, and power consumption of optical interconnects. Based on an analysis of physical factors, optical interconnects show clear advantages over electrical ones. At the same time, since recent developments force the question of whether optical interconnect technologies with higher bandwidth but higher cost are worth deploying, a computing performance comparison is performed. To meet the increasing demands of large-scale parallel and multi-processor computing tasks, an analytic method to evaluate the parallel computing performance of interconnect systems is proposed. Both a bandwidth-limited model and a full-bandwidth model are investigated, with speedup and efficiency selected to represent the parallel performance of an interconnect system. Using the proposed models, we quantify the performance gap between optically and electrically interconnected systems. A further investigation of the power consumption of commercial products shows that deploying parallel interconnections reduces unit power consumption. From this analysis of computing performance and power dissipation, we find that parallel optical interconnects offer a valuable combination of high performance and low energy consumption: for data centers under construction, substantial power could be saved if parallel optical interconnect technologies are used.
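The paper's bandwidth-limited and full-bandwidth models are not reproduced in the abstract; as a baseline, speedup and efficiency are conventionally defined as S = T₁/Tₙ and E = S/n, and Amdahl's law gives a first-order estimate of the speedup an interconnect must sustain:

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_1 / T_n for a given workload."""
    return t_serial / t_parallel

def amdahl(p, n):
    """Amdahl's law: attainable speedup on n processors when a fraction p
    of the work is parallelisable. A common yardstick; the paper builds
    more detailed bandwidth-limited models on top of such measures."""
    return 1.0 / ((1.0 - p) + p / n)

n = 64
s = amdahl(0.95, n)          # 95% parallelisable work, 64 processors
print(round(s, 2), round(s / n, 3))   # speedup and efficiency E = S/n
```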
In this paper, a dynamic model of a high power factor induction motor with a floating winding connected in parallel with capacitors is established using a matrix technique. The starting performance of this motor is then analyzed by computer simulation. The tested and computed results are in good agreement, verifying the dynamic model and the simulation method.
This paper proposes an algorithm for increasing virtual machine security in cloud computing. An imbalance between load and energy has been one of the disadvantages of older methods of providing servers and hosting: if two virtual servers are active on one host and the energy load on that host grows, it appropriates the energy of other (virtual) hosts to stay steady, which typically leads to hardware overflow errors and user dissatisfaction. Cloud-based methods reduce this problem but do not remove it entirely. The proposed algorithm therefore both establishes a suitable security foundation and divides energy consumption and load fairly among virtual servers. It is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. The comparisons show that the proposed method offers high performance computing efficiency and consumes less energy in the network.
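The abstract does not specify the algorithm's steps, so the following is a hypothetical sketch of one common load-division strategy consistent with its goal: greedy assignment of each virtual machine, largest load first, to the currently least-loaded host.

```python
def place_vms(vm_loads, n_hosts):
    """Hypothetical sketch (the paper's algorithm is not fully specified
    in the abstract): greedy balancing that assigns each VM, largest
    first, to the currently least-loaded host."""
    hosts = [0.0] * n_hosts
    placement = []
    for load in sorted(vm_loads, reverse=True):
        h = hosts.index(min(hosts))      # pick the least-loaded host
        hosts[h] += load
        placement.append((load, h))
    return hosts, placement

hosts, placement = place_vms([5, 3, 8, 2, 7, 4], n_hosts=2)
print(hosts)   # near-even split of the total load of 29
```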
Today, PC-class machines are quite popular in the HPC area, especially for problems that require good cost/performance ratios. One drawback of these machines is their poor memory throughput, and one reason for that is the limited mapping capability of the TLB, the buffer that accelerates virtual memory access. In this report, I show that the mapping capability, and with it the performance, can be improved with the multi-granularity TLB feature that some processors provide, and that the new TLB handling routine can be incorporated into the demand paging system of Linux.
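The mapping capability in question is simple arithmetic: a TLB covers (number of entries) × (page size) bytes, so letting some entries hold larger pages multiplies coverage. The numbers below are illustrative, not those of a specific CPU:

```python
def tlb_coverage(entries, page_bytes):
    """Memory a TLB can map without a miss: entries x page size."""
    return entries * page_bytes

# 64 entries mapping 4 KiB pages cover 256 KiB; letting the same entries
# hold 4 MiB large pages raises coverage to 256 MiB -- the
# multi-granularity benefit (illustrative sizes only).
small = tlb_coverage(64, 4 * 1024)
large = tlb_coverage(64, 4 * 1024 * 1024)
print(small // 1024, "KiB vs", large // (1024 * 1024), "MiB")
```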
With the rapid development of high-rise buildings and long-span structures in recent years, high performance computation (HPC) is becoming more and more important, sometimes even crucial, for the design and construction of complex building structures. To satisfy the engineering requirements of HPC, this paper presents a parallel FEA computing kernel designed specifically for the analysis of complex building structures. The kernel is based on the Intel Math Kernel Library (MKL) and coded in FORTRAN 2008, whose syntax supports parallel programming. To improve the capability and efficiency of the kernel, modern Fortran parallel concepts such as elemental procedures and do concurrent have been applied extensively, and the well-known PARDISO solver in MKL is called to solve the large sparse systems of linear equations. The ultimate objective of the kernel is to enable a personal computer to analyze large building structures with up to ten million degrees of freedom (DOFs). So far, linear static and dynamic analyses have been achieved, while nonlinear analysis, including geometric and material nonlinearity, is not yet finished. The numerical examples in this paper therefore concentrate on demonstrating the validity and efficiency of linear analysis and modal analysis for large FE models, leaving verification of the nonlinear analysis capabilities aside.
Item response theory (IRT) is a modern test theory that has been used in many areas of educational and psychological measurement. The fully Bayesian approach shows promise for estimating IRT models, but because it is computationally expensive, the procedure is limited in practical applications, and it is important to seek ways to reduce its execution time. A suitable solution is high performance computing. This study adapts the fully Bayesian algorithm for a conventional IRT model so that it can be implemented on a high performance parallel machine. Empirical results suggest that this parallel version of the algorithm achieves a considerable speedup and thus reduces the execution time considerably.
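The paper's model and parallel decomposition are not detailed in the abstract; the toy sketch below shows the kind of per-item Metropolis update a fully Bayesian IRT fit repeats many times, here for a single Rasch item difficulty with made-up data. One such update runs for every item, which is the unit of work a parallel machine can distribute.

```python
import math
import random

def rasch_loglik(b, abilities, responses):
    """Log-likelihood of one item's difficulty b under the Rasch model,
    given examinee abilities theta and their 0/1 responses to the item."""
    ll = 0.0
    for theta, x in zip(abilities, responses):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll

def metropolis(abilities, responses, n_iter=2000, seed=0):
    """Toy random-walk Metropolis for b with a flat prior. A fully
    Bayesian IRT fit repeats updates like this for every item and
    person parameter, which is what parallel hardware can spread out."""
    rng = random.Random(seed)
    b, ll = 0.0, rasch_loglik(0.0, abilities, responses)
    for _ in range(n_iter):
        cand = b + rng.gauss(0.0, 0.5)
        cand_ll = rasch_loglik(cand, abilities, responses)
        if math.log(rng.random()) < cand_ll - ll:   # accept/reject step
            b, ll = cand, cand_ll
    return b

abilities = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]        # made-up examinees
responses = [0, 0, 1, 0, 1, 1]                      # 0/1 item responses
print(round(metropolis(abilities, responses), 2))   # a draw near the mode
```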
Funding (railway HPC review): supported in part by the Talent Fund of Beijing Jiaotong University (2023XKRC017), and in part by the Research and Development Project of China State Railway Group Co., Ltd. (P2022Z003).
Funding (sulphoaluminate HPC carbonation study): the National Natural Science Foundation of China (No. 50872043).
Funding (USLS direct solver): the National Natural Science Foundation of China (Nos. 10232040, 10572002 and 10572003).
Funding (sparse static solver with two-level unrolling): the Research Fund for the Doctoral Program of Higher Education (No. 20030001112).
文摘In the previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency for a big percentage of finite element analysis benchmark tests, the MFLOPS (million floating operations per second) of LDL^T factorization of benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456 depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling that employs the concept of master-equations and searches for an appropriate depths of unrolling is proposed. The new solver provides higher MFLOPS for LDL^T factorization of benchmark tests, and therefore speeds up the solution process.
文摘Factors that have effect on concrete creep include mixture composition,curing conditions,ambient exposure conditions,and element geometry.Considering concrete mixtures influence and in order to improve the prediction of prestress loss in important structures,an experimental test under laboratory conditions was carried out to investigate compression creep of two high performance concrete mixtures used for prestressed members in one bridge.Based on the experimental results,a power exponent function of creep degree for structural numerical analysis was used to model the creep degree of two HPCs,and two series of parameters of this function for two HPCs were calculated with evolution program optimum method.The experimental data was compared with CEB-FIP 90 and ACI 209(92) models,and the two code models both overestimated creep degrees of the two HPCs.So it is recommended that the power exponent function should be used in this bridge structure analysis.
Abstract: In order to investigate the compression creep of two kinds of high-performance concrete mixtures used for prestressed members in a bridge, an experimental test under laboratory conditions was carried out. Based on the experimental results, a power exponent function was used to model the creep degree of these high-performance concretes (HPCs) for structural numerical analysis, and two series of parameters of this function were obtained with an evolutionary-program optimization method. The experimental data were compared with the CEB-FIP 90 and ACI 92 models. Results show that both code models overestimate the creep degree of the two HPCs, so it is recommended that the power exponent function be used for the creep analysis of the bridge structure.
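To illustrate the power-exponent creep-degree form described in these two abstracts, the sketch below fits C(t) = a·t^b to synthetic data by least squares. The data and the recovered parameter values are illustrative only (the studies fit their parameters with an evolutionary program and do not publish these numbers).

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-exponent creep-degree model C(t) = a * t**b (hypothetical values).
def creep_degree(t, a, b):
    return a * t**b

t = np.array([1.0, 7.0, 28.0, 90.0, 365.0])  # time under load, days
C = 20.0 * t**0.3                            # synthetic "measured" creep degrees

(a, b), _ = curve_fit(creep_degree, t, C, p0=(1.0, 0.5))
print(round(a, 2), round(b, 2))              # recovers a ≈ 20, b ≈ 0.3
```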
Abstract: This research evaluated the suitability of stone dust in the design and production of High Performance Concrete (HPC). An HPC mix was designed, tested, and costed, and a comparison with the concrete classes used in the market (Classes 25, 30 and 35) was done using Cost Benefit Analysis (CBA). The cost benefit was analyzed using the Internal Rate of Return (IRR) and the Net Present Value (NPV). Laboratory tests established the properties of the concrete obtained from the design mix; compressive strength, slump, and modulus of elasticity were tested and analyzed. Structural analysis using BS 8110 was done for a 10-storey office building to establish the structural member sizes. Members obtained from concrete Classes 25, 30 and 35 and from the new compressive strength of HPC (Class 80) were compared, both for member sizes (and the floor area freed by designing with HPC) and for the steel reinforcement used. To justify the initial cost of HPC if adopted, the CBA estimated the increased costs versus the income resulting from the increased lettable space created. The minimum class of concrete used in design was limited to Class 25 N/mm2. The research shows that it is possible to manufacture high strength concrete using locally available stone dust: the stone dust sampled from Mlolongo quarries achieved a characteristic strength of 86.7 N/mm2 at a water-cement ratio of 0.32. With these results, a 10-storey office structure with columns spaced at 8 meters center to center was designed using the four classes and the results compared. Column width reduced from 1.2 m to 0.65 m (over 45%) when the concrete class changed from Class 25 to Class 80, creating over 3% of additional floor area per storey. Cost benefit analysis using NPV and IRR presented a business case for the use of HPC: with Class 80, the IRR was 3% and the NPV 8% of the total initial investment.
The steel reinforcement increased by 8.64% using Class 30 and by 11.68% using Class 35, and reduced by 8.37% with Class 80. Further analysis is needed to understand the trend of steel reinforcement with all member sizes kept the same; in this study the member sizes were optimized based on steel reinforcement and serviceability. This paper provides useful information to design engineers and architects and informs the future design of multi-storey structures.
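The CBA machinery behind the study's business case is standard; a self-contained sketch with hypothetical cash flows (not the study's figures) shows how NPV and IRR are computed.

```python
# NPV and IRR for a hypothetical HPC-building cash flow: an extra up-front
# cost, then annual income from the extra lettable space. Numbers are
# illustrative only.
def npv(rate, cashflows):
    # cashflows[t] occurs at the end of year t (t = 0 is the initial outlay)
    return sum(cf / (1 + rate)**t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-9):
    # Bisection on the rate: NPV decreases in the rate for this flow shape.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000.0] + [150.0] * 10   # initial extra cost, then 10 years of rent
print(round(npv(0.08, flows), 1))  # NPV at an 8% discount rate
print(round(irr(flows), 4))        # rate at which NPV crosses zero
```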
Fund: This paper is an extended version of "SpotMPI: a framework for auction-based HPC computing using Amazon spot instances", published in the International Symposium on Advances of Distributed Computing and Networking (ADCN 2011). Acknowledgment: This research is supported in part by National Science Foundation grant CNS 0958854 and educational resource grants from Amazon.com.
Fund: Supported in part by the National 863 Program (2009AA01Z256, 2009AA01A345), the National 973 Program (2007CB310705), and the NSFC (60932004), P.R. China.
Abstract: This paper analyzes the physical potential, computing performance benefit, and power consumption of optical interconnects. Based on an analysis of physical factors, optical interconnections show clear advantages over electrical ones. Since recent developments raise the question of whether optical interconnect technologies with higher bandwidth but higher cost are worth deploying, a computing performance comparison is also performed. To meet the increasing demand of large-scale parallel and multi-processor computing tasks, an analytic method to evaluate the parallel computing performance of interconnect systems is proposed. Both a bandwidth-limited model and a full-bandwidth model are investigated, with speedup and efficiency selected to represent the parallel performance of an interconnect system. Using the proposed models, the performance gap between optically and electrically interconnected systems is depicted. A further investigation of the power consumption of commercial products shows that deploying parallel interconnections reduces unit power consumption. From the analysis of computing performance and power dissipation, parallel optical interconnects are therefore found to be a valuable combination of high performance and low energy consumption, and huge power could be saved if parallel optical interconnect technologies were used in data centers under construction.
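A toy version of such a speedup/efficiency comparison can be sketched as follows. The equations are my own simplified ones (compute time plus message volume divided by link bandwidth), not the paper's models; all numbers are assumptions for illustration.

```python
# Simplified parallel-performance model: total parallel time is per-node
# compute time plus communication time, where communication time scales as
# message volume / link bandwidth. A faster (optical) link raises speedup.
def speedup(n, t_comp=1.0, comm_bytes=1e8, bandwidth=1e9):
    t_serial = n * t_comp                            # one node does all work
    t_parallel = t_comp + comm_bytes / bandwidth     # compute + communication
    return t_serial / t_parallel

def efficiency(n, **kw):
    return speedup(n, **kw) / n

electrical = speedup(64, bandwidth=1e9)    # ~1 GB/s electrical link
optical = speedup(64, bandwidth=1e10)      # ~10 GB/s optical link
print(optical > electrical)                # True
```

Under this model the gap between the two systems grows with the communication volume, which is the qualitative point the abstract makes.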
Abstract: In this paper, a dynamic model of a high power factor induction motor with a floating winding connected in parallel with capacitors is established using a matrix technique. The starting performance of this motor is then analyzed by computer simulation. The tested and computed results are in good agreement, verifying the dynamic model and the simulation method.
Abstract: This paper proposes an algorithm for strengthening virtual machine security in cloud computing. An imbalance between load and energy has been a drawback of older server-hosting methods: when two virtual servers are active on a host and the load on that host grows, the host appropriates the energy of other (virtual) hosts to stay stable, which commonly leads to hardware overflow errors and user dissatisfaction. Cloud-based methods mitigate this problem but do not remove it entirely. The proposed algorithm therefore both establishes a suitable security foundation and distributes energy consumption and load evenly among virtual servers. It is compared with several previously proposed security strategies, including SC-PSSF, PSSF and DEEAC; the comparisons show that the proposed method offers high computing performance and efficiency and consumes less energy in the network.
Abstract: Today, PC-class machines are quite popular in the HPC area, especially for problems that require a good cost/performance ratio. One drawback of these machines is their poor memory throughput, and one reason for this is the limited mapping capability of the TLB, a buffer that accelerates virtual memory access. In this report, I show that the mapping capability and the performance can be improved with the multi-granularity TLB feature that some processors provide, and that the new TLB handling routine can be incorporated into the demand paging system of Linux.
Abstract: With the rapid development of high-rise buildings and long-span structures in recent years, high performance computation (HPC) is becoming more and more important, sometimes even crucial, for the design and construction of complex building structures. To satisfy the engineering requirements of HPC, a parallel FEA computing kernel designed specifically for the analysis of complex building structures is presented and illustrated in this paper. The kernel is based on the Intel Math Kernel Library (MKL) and coded in FORTRAN 2008, whose syntax supports parallel programming. To improve the capability and efficiency of the kernel, the parallel features of modern FORTRAN, such as elemental procedures and do concurrent, have been applied extensively, and the well-known PARDISO solver in MKL is called to solve the large sparse systems of linear equations. The ultimate objective of the kernel is to enable a personal computer to analyze large building structures with up to ten million degrees of freedom (DOFs). So far, linear static analysis and dynamic analysis have been achieved, while nonlinear analysis, including geometric and material nonlinearity, has not yet been finished. The numerical examples in this paper therefore concentrate on demonstrating the validity and efficiency of linear analysis and modal analysis for large FE models, and do not verify the nonlinear capabilities.
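The core step of such a kernel, solving K u = f for a sparse stiffness matrix K, can be sketched as follows. SciPy's direct sparse solver stands in for the MKL PARDISO call, and the 1-D bar stiffness matrix is an illustrative assumption, far smaller than the building-scale models the abstract targets.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Tridiagonal stiffness matrix of a clamped 1-D bar with n free DOFs
# (scalar diagonals broadcast along the given offsets).
n = 1000
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
f = np.ones(n)                 # uniform load vector

u = spsolve(K, f)              # direct sparse solve, as PARDISO would do
print(np.allclose(K @ u, f))   # True
```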
Abstract: Item response theory (IRT) is a modern test theory that has been used in various aspects of educational and psychological measurement. The fully Bayesian approach shows promise for estimating IRT models, but because it is computationally expensive, the procedure is limited in practical applications. It is hence important to seek ways to reduce the execution time, and a suitable solution is high performance computing. This study focuses on implementing the fully Bayesian algorithm for a conventional IRT model on a high performance parallel machine. Empirical results suggest that the parallel version of the algorithm achieves a considerable speedup and thus reduces the execution time considerably.
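A minimal sketch of the idea, under assumptions of my own (a Rasch model with known item difficulties, rather than the paper's actual model and sampler): person abilities are conditionally independent given the item parameters, so the Metropolis updates for all persons can run in parallel, expressed here as one vectorized NumPy step per sweep.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 200, 10
theta_true = rng.normal(0.0, 1.0, n_persons)          # true abilities
b = np.linspace(-1.5, 1.5, n_items)                   # fixed item difficulties
p = 1 / (1 + np.exp(-(theta_true[:, None] - b)))      # Rasch probabilities
y = (rng.random((n_persons, n_items)) < p).astype(float)

def log_post(theta):
    # Bernoulli log-likelihood per person plus a standard normal log-prior
    eta = theta[:, None] - b
    return (y * eta - np.log1p(np.exp(eta))).sum(axis=1) - 0.5 * theta**2

theta = np.zeros(n_persons)
for _ in range(500):
    prop = theta + rng.normal(0.0, 0.5, n_persons)    # propose for all persons at once
    accept = np.log(rng.random(n_persons)) < log_post(prop) - log_post(theta)
    theta = np.where(accept, prop, theta)

print(np.corrcoef(theta, theta_true)[0, 1])           # draws track the true abilities
```

On a parallel machine, the same conditional independence lets each processor own a block of persons, which is the structure the paper's speedup exploits.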