Further improving the railway innovation capacity and technological strength is an important goal of the 14th Five-Year Plan for railway scientific and technological innovation. It includes promoting the deep integration of cutting-edge technologies with railway systems, strengthening the research and application of intelligent railway technologies, applying green computing technologies, and advancing the collaborative sharing of transportation big data. High-speed rail systems must process huge amounts of data under heavy workloads while meeting ultra-fast response requirements. It is therefore of great necessity to improve computational efficiency by applying High Performance Computing (HPC) to high-speed rail systems. HPC techniques offer a strong solution for improving the performance, efficiency, and safety of high-speed rail systems. In this review, we introduce and analyze research on the application of high performance computing technology in the field of high-speed railways. These HPC applications are catalogued into four broad categories, namely: fault diagnosis, network and communication, management systems, and simulation. Moreover, challenges and open issues are discussed and further directions are suggested.
As an important branch of information technology, high-performance computing has expanded its application fields and its influence continues to grow. High-performance computing has always been a key enabling technology in meteorology. We used field research and literature review methods to study the application of high performance computing in China's meteorological department, and obtained the following results: 1) The China Meteorological Department has gradually built up high-performance computer systems since its first system in 1978; high-performance computing services now support operational numerical weather prediction models. 2) The Chinese meteorological department has consistently adopted relatively advanced high-performance computing technology, and operational system capability has been continuously improved; computing power has become an important symbol of the level of meteorological modernization. 3) High-performance computing technology and meteorological numerical forecasting applications are increasingly integrated, and continue to innovate and develop. 4) In the future, high-performance computing resource management will gradually transition from the current local pre-allocation mode to unified local and remote scheduling and shared use. In summary, we conclude that the high-performance computing operations of the meteorological department will usher in a better tomorrow.
This work started with an in-depth feasibility study and limitation analysis of current models for estimating disease spread and evaluating countermeasures; we identify population variability as a crucial factor that has usually been ignored or under-emphasized. Taking HIV/AIDS as the application and validation background, we propose a novel algorithmic model system, the EEA model system, as a new way to estimate the spread situation, evaluate different countermeasures, and analyze the development of ARV-resistant disease strains. The model is a series of solvable ordinary differential equation (ODE) models estimating the spread of HIV/AIDS infections, which not only require only one year's data to deduce the situation in any year, but also apply the piecewise-constant method to employ multi-year information at the same time. We simulate the effects of therapy and vaccination, evaluate the difference between them, and derive the smallest proportion of the population that must be vaccinated to defeat HIV/AIDS, highlighting the advantage of vaccination and the deficiency of using therapy alone. We then analyze the development of ARV-resistant disease strains with the piecewise-constant method. Last but not least, a high performance computing (HPC) platform is applied to simulate the situation over variable large-scale areas divided into grids, achieving an acceleration rate of around 4 to 5.5.
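The piecewise-constant ODE idea in the abstract above can be sketched as follows. This is a hedged illustration, not the EEA model itself: a generic SI-type spread equation dI/dt = beta(t) * I * (1 - I/N) is integrated by forward Euler with the contact rate beta held constant within each year, so that multi-year data can be folded in year by year. The function name `simulate` and all parameter values are illustrative assumptions.

```python
# Illustrative sketch of a piecewise-constant ODE epidemic model (not the
# paper's EEA system). beta is held constant within each simulated year.

def simulate(betas, I0=100.0, N=1_000_000.0, steps_per_year=365):
    """Forward-Euler integration; betas[k] is the contact rate in year k."""
    I = I0
    history = [I]
    dt = 1.0 / steps_per_year
    for beta in betas:                      # one constant beta per year
        for _ in range(steps_per_year):
            I += dt * beta * I * (1.0 - I / N)   # logistic-style growth
        history.append(I)
    return history

# e.g. two years of unchecked spread, then an intervention halving beta
traj = simulate([0.8, 0.8, 0.4])
```

With such a structure, each year's parameter can be re-estimated from that year's data alone, which is the appeal of the piecewise-constant approach described in the abstract.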
The meteorological high-performance computing resource is the support platform for running numerical models for weather forecasting and climate prediction. A scientific and objective method to evaluate the use of meteorological high-performance computing resources can not only provide a reference for optimizing resources currently in operation, but also provide a quantitative basis for future resource construction and planning. In this paper, the concepts of the utility value B and the index compliance rate E of a meteorological high performance computing system are presented, and the evaluation process, evaluation indices and calculation method for the application benefits of high performance computing resources are introduced.
The influences of water/cement ratio and admixtures on the carbonation resistance of sulphoaluminate cement-based high performance concrete (HPC) were investigated. The experimental results show that with decreasing water/cement ratio, the carbonation depth of sulphoaluminate cement-based HPC decreases remarkably, and the carbonation resistance is also improved by the addition of admixtures. The morphologies and structural characteristics of sulphoaluminate cement hydration products before and after carbonation were analyzed using SEM and XRD. The analysis reveals that the main hydration product of sulphoaluminate cement, ettringite (AFt), decomposes after carbonation.
A new direct method for solving unsymmetrical sparse linear systems (USLS) arising from meshless methods is introduced. Certain meshless methods, such as the meshless local Petrov-Galerkin (MLPG) method, need to solve large USLS. The proposed solution method for the unsymmetrical case performs the factorization process symmetrically on the upper and lower triangular portions of the matrix, which differs from previous work based on a general unsymmetrical process and attains higher performance. It is shown that the solution algorithm for USLS can be derived simply from existing approaches for the symmetrical case. The new matrix factorization algorithm in our method can be implemented easily by modifying a standard JKI symmetrical matrix factorization code. Multi-blocked out-of-core strategies were also developed to expand the solution scale. The approach convincingly increases the speed of the solution process, as demonstrated by numerical tests.
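The core operation the abstract above describes, factorizing an unsymmetrical matrix while treating the upper and lower triangles alike, can be illustrated with a textbook Doolittle LU factorization. This is a minimal dense sketch for intuition only, not the paper's sparse JKI-based algorithm: at each step k it computes row k of U and column k of L symmetrically.

```python
# Minimal Doolittle LU factorization without pivoting (dense, for
# illustration only -- the paper's method is a sparse, blocked variant).
# At step k, row k of U and column k of L are processed symmetrically.

def lu_factor(A):
    """Return L (unit lower triangular) and U such that A = L @ U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = 1.0
        for j in range(k, n):                 # row k of U
            U[k][j] = A[k][j] - sum(L[k][p] * U[p][j] for p in range(k))
        for i in range(k + 1, n):             # column k of L (mirror step)
            L[i][k] = (A[i][k] - sum(L[i][p] * U[p][k] for p in range(k))) / U[k][k]
    return L, U

A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 3.0, 6.0]]        # small unsymmetrical matrix
L, U = lu_factor(A)
```

The paper's contribution is in making this symmetric-style processing work efficiently for large sparse unsymmetrical systems with out-of-core blocking; the sketch only shows the structural idea.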
In previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency on a large fraction of finite element analysis benchmark tests, the MFLOPS (million floating-point operations per second) of LDL^T factorization on benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456 depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for LDL^T factorization of the benchmark tests, and therefore speeds up the solution process.
Factors that affect concrete creep include mixture composition, curing conditions, ambient exposure conditions, and element geometry. Considering the influence of concrete mixtures, and in order to improve the prediction of prestress loss in important structures, an experimental test under laboratory conditions was carried out to investigate the compression creep of two high performance concrete mixtures used for prestressed members in one bridge. Based on the experimental results, a power-exponent function of creep degree was used to model the creep degree of the two HPCs for structural numerical analysis, and two sets of parameters of this function, one per HPC, were calculated with an evolutionary-programming optimization method. The experimental data were compared with the CEB-FIP 90 and ACI 209(92) models, and both code models overestimated the creep degrees of the two HPCs. It is therefore recommended that the power-exponent function be used in the structural analysis of this bridge.
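Fitting a power-exponent creep function to measured data, as described above, can be sketched with ordinary least squares in log-log space. The functional form C(t) = a * t**b, the function name `fit_power`, and the data are all illustrative assumptions; the paper fits its own power-exponent function with an evolutionary optimizer rather than the closed-form fit shown here.

```python
import math

# Hypothetical illustration: fit a power-law creep degree C(t) = a * t**b
# to (time, creep) pairs by linear least squares in log-log space.
def fit_power(ts, cs):
    xs = [math.log(t) for t in ts]
    ys = [math.log(c) for c in cs]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope -> exponent
    a = math.exp((sy - b * sx) / n)                 # intercept -> coefficient
    return a, b

# synthetic data generated from C = 2 * t**0.3 (placeholder, not measured)
ts = [1, 7, 28, 90, 365]
cs = [2 * t ** 0.3 for t in ts]
a, b = fit_power(ts, cs)
```

An evolutionary optimizer, as used in the paper, becomes preferable when the chosen creep function is not linearizable this way.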
This paper analyzes the physical potential, computing performance benefit and power consumption of optical interconnects. Compared with electrical interconnections, optical ones show clear advantages based on an analysis of physical factors. At the same time, since recent developments force us to ask whether optical interconnect technologies with higher bandwidth but higher cost are worth deploying, a computing performance comparison is performed. To meet the increasing demand of large-scale parallel and multi-processor computing tasks, an analytic method to evaluate the parallel computing performance of interconnect systems is proposed in this paper. Both a bandwidth-limited model and a full-bandwidth model are investigated. Speedup and efficiency are selected to represent the parallel performance of an interconnect system. Deploying the proposed models, we depict the performance gap between optically and electrically interconnected systems. A further investigation of the power consumption of commercial products shows that if parallel interconnections are deployed, the unit power consumption is reduced. Therefore, from the analysis of computing performance and power dissipation, we find that parallel optical interconnect is a valuable combination of high performance and low energy consumption. For the data centers now under construction, huge amounts of power could be saved if parallel optical interconnect technologies were used.
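The speedup and efficiency metrics named in the abstract above can be illustrated with a simple Amdahl-style model. This is a loose analogy under stated assumptions, not the paper's bandwidth-limited or full-bandwidth model: a serial fraction caps speedup, and a per-processor communication overhead stands in for a bandwidth-constrained interconnect. All function names and numbers are illustrative.

```python
# Illustrative Amdahl-style model (not the paper's analytic method):
# an interconnect with per-processor communication overhead vs an ideal one.

def speedup(n, serial_frac, comm_overhead=0.0):
    """Speedup on n processors; comm_overhead grows linearly with n."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n + comm_overhead * n)

def efficiency(n, serial_frac, comm_overhead=0.0):
    """Parallel efficiency = speedup / processor count."""
    return speedup(n, serial_frac, comm_overhead) / n

# a zero-overhead ("full-bandwidth") link vs a loaded one at 64 processors
s_ideal = speedup(64, 0.02)
s_loaded = speedup(64, 0.02, 1e-3)
```

Even this crude model reproduces the qualitative gap the paper quantifies: the overhead-free interconnect keeps its speedup advantage as the processor count grows.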
This research evaluated the suitability of stone dust in the design and production of High Performance Concrete (HPC). An HPC mix was designed, tested and costed, and a comparison with the concrete classes used in the market (Classes 25, 30 and 35) was done using Cost Benefit Analysis (CBA). The cost benefit was analyzed using the Internal Rate of Return (IRR) and Net Present Value (NPV). Laboratory tests established the properties of concrete obtained from the design mix: compressive strength, slump and modulus of elasticity were tested and analyzed. Structural analysis using BS 8110 was done for a 10-storey office building to establish the structural member sizes. Members obtained from concrete Classes 25, 30 and 35 and from the new HPC compressive strength (Class 80) were compared. The analysis covered structural member sizes, the floor area freed by designing with HPC, and the steel reinforcement used. To justify the initial cost of HPC if adopted, CBA was used to weigh the increased costs against the income from the increased lettable space created. The minimum class of concrete used in design was limited to Class 25 (25 N/mm2). The research shows that it is possible to manufacture high strength concrete using locally available stone dust. The stone dust sampled from Mlolongo quarries achieved a characteristic strength of 86.7 N/mm2 at a water/cement ratio of 0.32. With these results, a 10-storey office structure with columns spaced at 8 meters center to center was designed using the four classes and the results compared. Column width reduced from 1.2 m to 0.65 m (over 45%) when the concrete class changed from Class 25 to Class 80, freeing over 3% of the total floor area per storey.
Cost benefit analysis using Net Present Value (NPV) and Internal Rate of Return (IRR) presented a business case for the use of HPC: with Class 80, the IRR was 3% and the NPV was 8% of the total initial investment. The steel reinforcement increased by 8.64% using Class 30 and by 11.68% using Class 35, and reduced by 8.37% with Class 80. Further analysis is needed to understand the trend of steel reinforcement when all member sizes are kept the same; in this study the member sizes were optimized based on the steel reinforcement and serviceability. This paper provides useful information to design engineers and architects and informs the future design of multi-storey structures.
In this paper, a dynamic model of a high power factor induction motor with a floating winding connected in parallel with capacitors is established using a matrix technique. The starting performance of this motor is then analyzed by computer simulation. The good agreement between tested and computed results verifies the dynamic model and the simulation method.
In this work the turbulence-based acoustic sources and the corresponding wave propagation of fluctuating flow values in incompressible fluid flows are considered. Lighthill's and Curle's acoustic analogies are implemented in the open source computational fluid dynamics framework OpenFOAM. The main objective of this work is to visualize and localize the dominant sound sources and the resulting fluctuating pressure values within the computational domain representing the acoustic near field. This is all done on one mesh and during the iterative computation of the transient fluid flow. Finally, the flow field and acoustic results of different simulation cases are presented and the properties of the method are discussed.
This paper proposes an algorithm for increasing virtual machine security in cloud computing. Imbalance between load and energy has been one of the disadvantages of older methods of server provisioning and hosting: if two virtual servers are active and the energy load on one host becomes too high, that host allocates the energy of other (virtual) hosts to itself to stay stable, which typically leads to hardware overflow errors and user dissatisfaction. Cloud-processing methods reduce this problem but do not remove it entirely; the proposed algorithm therefore not only implements a suitable security foundation but also divides energy consumption and load fairly among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF and DEEAC. Comparisons show that the proposed method offers high performance computing, efficiency, and lower energy consumption in the network.
Today, PC-class machines are quite popular in the HPC area, especially for problems that require good cost/performance ratios. One drawback of these machines is their poor memory throughput, and one reason for this is the limited mapping capability of the TLB, a buffer that accelerates virtual memory access. In this report, I show that the mapping capability and the performance can be improved with the multi-granularity TLB feature that some processors provide, and that the new TLB handling routine can be incorporated into the demand-paging system of Linux.
A traditional HPC (High Performance Computing) cluster is built on top of physical machines. It is usually not practical to reassign these machines to other tasks because software installation is time consuming; as a result, the machines are usually dedicated to cluster usage. Virtualization technology provides an abstraction layer which allows several different operating systems (with different software packages) to run on top of one physical machine. Cloud computing provides an easy way for the user to manage and interact with the computing resources (in this case, the virtual machines). In this work, we demonstrate the feasibility of building a cloud-based cluster for HPC on top of a set of desktop computers interconnected by Fast Ethernet. Our cluster has several advantages. For instance, the deployment time of the cluster is quite short: we need only 5 min to deploy a cluster of 30 machines. Several performance benchmarks have also been carried out; as expected, embarrassingly parallel problems show a linear relationship between performance and cluster size.
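The linear scaling claimed above for embarrassingly parallel jobs can be captured in a toy cost model: total time is the per-node share of the work plus a fixed deployment cost. The function name and all numbers are illustrative assumptions, not measurements from the paper.

```python
# Toy model of an embarrassingly parallel job on an n-node cloud cluster:
# work splits evenly across nodes, deployment adds a constant cost.
# All values are illustrative (units: minutes).

def cluster_time(work_units, n_nodes, unit_cost=1.0, deploy_cost=5.0):
    """Wall-clock time: fixed deployment plus each node's share of work."""
    per_node = -(-work_units // n_nodes)        # ceiling division
    return deploy_cost + per_node * unit_cost

t1 = cluster_time(300, 1)     # single machine
t30 = cluster_time(300, 30)   # 30-machine cluster, 5-minute deployment
```

Because the fixed deployment cost is small relative to the work, speedup stays near-linear in the cluster size, matching the benchmark behavior the abstract reports.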
Cloud computing is expanding widely in the world of IT infrastructure, due in part to the cost-saving effect of economies of scale. Fair market conditions can in theory provide a healthy environment that reflects the most reasonable costs of computation. While fixed cloud pricing provides an attractive low entry barrier for compute-intensive applications, both the consumer and the supplier of computing resources can see high efficiency on their investments by participating in auction-based exchanges. There are huge incentives for the cloud provider to offer auctioned resources; from the consumer perspective, however, using these resources is a sparsely discussed challenge. This paper reports a methodology and framework designed to address the challenges of running HPC (High Performance Computing) applications on auction-based cloud clusters. The authors focus on HPC applications and describe a method for determining bid-aware checkpointing intervals, extending a theoretical model for determining checkpoint intervals with statistical analysis of pricing histories. The latest developments in the SpotHPC framework are also introduced, which aim at facilitating the managed execution of real MPI applications in auction-based cloud environments. The authors use their model to simulate a set of algorithms with different computing and communication densities. The results show the complex interactions between optimal bidding strategies and parallel application performance.
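A classic starting point for the checkpoint-interval model the abstract above extends is Young's approximation, where the optimal interval is the square root of twice the checkpoint cost times the mean time between failures. In a bid-aware setting the failure time is plausibly replaced by the mean time until the bid is outpriced, estimated from pricing history. The sketch below shows only this baseline formula with illustrative numbers; the paper's actual model is an extension of such a formula, not this code.

```python
import math

# Young's approximation for the optimal checkpoint interval (baseline,
# not the paper's extended bid-aware model). In a spot-market setting the
# mean time to failure is read as the mean time until the bid is out-bid.

def young_interval(checkpoint_cost, mean_time_to_outbid):
    """Optimal checkpoint interval ~ sqrt(2 * C * MTTF), in seconds."""
    return math.sqrt(2.0 * checkpoint_cost * mean_time_to_outbid)

# illustrative: a 30 s checkpoint, and roughly one hour until out-bid
tau = young_interval(30.0, 3600.0)
```

The practical role of the pricing-history analysis is precisely to supply the `mean_time_to_outbid` estimate that this formula consumes.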
The integration of clusters, grids, clouds, edges and other computing platforms results in the contemporary technology of jungle computing. This novel technique has the aptitude to tackle high performance computation systems and manages the usage of all computing platforms at once. Federated learning is a collaborative machine learning approach without centralized training data. The proposed system effectively detects intrusion attacks without human intervention, detects anomalous deviations in device communication behavior potentially caused by malicious adversaries, and can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect its pattern. Federated learning greatly helps the machine to learn the type of attack seen by each device, and this technique paves the way to comprehensive coverage of malicious behaviors. In our proposed work, we have implemented an intrusion detection system that is highly accurate, has a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%.
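The federated-learning step described above, where each device trains locally and only model updates reach the central server, is commonly realized as federated averaging (FedAvg). The sketch below is a hedged illustration of that aggregation step only: the intrusion-detection model is abstracted to a plain parameter vector, and the function name `fed_avg` and the sample counts are assumptions, not the paper's implementation.

```python
# FedAvg-style aggregation: the server averages per-client parameter
# vectors weighted by local sample counts. Raw training data never leaves
# the devices; only these parameter vectors are transmitted.

def fed_avg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# two hypothetical devices with 100 and 300 local samples respectively
global_model = fed_avg([[1.0, 0.0], [3.0, 2.0]], [100, 300])
```

Weighting by sample count keeps the global model closer to the clients that have seen more traffic, which matters when attack patterns are unevenly distributed across devices.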
Funding: Supported in part by the Talent Fund of Beijing Jiaotong University (2023XKRC017) and in part by the Research and Development Project of China State Railway Group Co., Ltd. (P2022Z003).
Funding: Funded by the National Natural Science Foundation of China (No. 50872043).
Funding: Project supported by the National Natural Science Foundation of China (Nos. 10232040, 10572002 and 10572003).
Funding: Project supported by the Research Fund for the Doctoral Program of Higher Education (No. 20030001112).
Funding: Supported in part by the National 863 Program (2009AA01Z256, 2009AA01A345), the National 973 Program (2007CB310705), and the NSFC (60932004), P.R. China.
文摘This paper analyzes the physical potential, computing performance benefi t and power consumption of optical interconnects. Compared with electrical interconnections, optical ones show undoubted advantages based on physical factor analysis. At the same time, since the recent developments drive us to think about whether these optical interconnect technologies with higher bandwidth but higher cost are worthy to be deployed, the computing performance comparison is performed. To meet the increasing demand of large-scale parallel or multi-processor computing tasks, an analytic method to evaluate parallel computing performance ofinterconnect systems is proposed in this paper. Both bandwidth-limit model and full-bandwidth model are under our investigation. Speedup and effi ciency are selected to represent the parallel performance of an interconnect system. Deploying the proposed models, we depict the performance gap between the optical and electrically interconnected systems. Another investigation on power consumption of commercial products showed that if the parallel interconnections are deployed, the unit power consumption will be reduced. Therefore, from the analysis of computing influence and power dissipation, we found that parallel optical interconnect is valuable combination of high performance and low energy consumption. Considering the possible data center under construction, huge power could be saved if parallel optical interconnects technologies are used.
Abstract: This research evaluated the suitability of stone dust in the design and production of High Performance Concrete (HPC). An HPC mix was designed, tested, and costed, and a comparison with the concrete classes used in the market (Classes 25, 30 and 35) was done using Cost Benefit Analysis (CBA). The cost benefit was analyzed using the Internal Rate of Return (IRR) and Net Present Value (NPV). Laboratory tests established the properties of the concrete obtained from the design mix: compressive strength, slump, and modulus of elasticity were tested and analyzed. Structural analysis to BS 8110 was done for a 10-storey office building to establish the structural member sizes. Member sizes obtained with concrete Classes 25, 30 and 35 and with the new HPC compressive strength (Class 80) were compared, covering the member sizes, the floor area freed by designing with HPC, and the steel reinforcement used. To justify the initial cost of HPC, CBA was used to weigh the increased costs against the income from the additional lettable space created. The minimum class of concrete used in design was limited to Class 25 (25 N/mm²). The research shows that it is possible to manufacture high strength concrete using locally available stone dust: stone dust sampled from the Mlolongo quarries achieved a characteristic strength of 86.7 N/mm² at a water-cement ratio of 0.32. With these results, a 10-storey office structure with columns spaced at 8 meters center to center was designed using the four classes and the results compared. Column widths reduced from 1.2 m to 0.65 m (over 45%) when the concrete class changed from Class 25 to Class 80, freeing over 3% of the total floor area per storey. Cost benefit analysis using NPV and IRR presented a business case for the use of HPC: with Class 80, the IRR was 3% and the NPV 8% of the total initial investment. The steel reinforcement increased by 8.64% using Class 30 and by 11.68% using Class 35, and reduced by 8.37% with Class 80. Further analysis is needed to understand the trend of steel reinforcement when all member sizes are kept the same; in this study the member sizes were optimized based on steel reinforcement and serviceability. This paper provides useful information to design engineers and architects and informs the future design of multi-storey structures.
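The NPV/IRR appraisal used above can be reproduced with a few lines of Python; the cash-flow figures below are hypothetical placeholders, since the paper's actual costs and rental income are not given in the abstract:

```python
def npv(rate, cash_flows):
    # cash_flows[0] is the initial (usually negative) investment at t=0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    # IRR by bisection: assumes a single sign change of NPV on [lo, hi]
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical figures: extra HPC cost now, extra rent from freed floor area
flows = [-1_000_000] + [150_000] * 10
rate = irr(flows)
```

For this example stream the IRR comes out near 8% per year; the project is attractive at any discount rate below that, which is exactly the comparison the CBA in the study formalizes.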
Abstract: In this paper, a dynamic model of a high power factor induction motor with a floating winding connected in parallel with capacitors is established using a matrix technique. The starting performance of this motor is then analyzed by computer simulation. The computed results agree well with test measurements, verifying the dynamic model and the simulation method.
Abstract: In this work, the turbulence-based acoustic sources and the corresponding propagation of fluctuating flow quantities in incompressible fluid flows are considered. Lighthill's and Curle's acoustic analogies are implemented in the open source computational fluid dynamics framework OpenFOAM. The main objective of this work is to visualize and localize the dominant sound sources and the resulting fluctuating pressure values within the computational domain representing the acoustic near field. All of this is done on one mesh, during the iterative computation of the transient fluid flow. Finally, the flow field and acoustic results of different simulation cases are presented and the properties of the method are discussed.
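For reference, the textbook form of Lighthill's analogy that such implementations discretize is (standard notation, not specific to the paper's code: ρ′ and p′ are the density and pressure fluctuations, c₀ the ambient speed of sound, τᵢⱼ the viscous stress tensor):

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
   = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
\qquad
T_{ij} = \rho\, u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij}
```

For low-Mach-number incompressible source flows, Tᵢⱼ is commonly approximated by ρ₀ uᵢuⱼ; Curle's analogy additionally accounts for solid boundaries through a surface (dipole) term driven by the fluctuating wall pressure.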
Abstract: This paper proposes an algorithm for increasing virtual machine security in cloud computing. An imbalance between load and energy has been one of the drawbacks of older server-hosting methods: if two virtual servers are active on one host and the energy load on that host grows too large, it appropriates the energy of other (virtual) hosts to stay stable, which commonly leads to hardware overflow errors and user dissatisfaction. Cloud processing removes this problem, but not completely. The proposed algorithm therefore not only establishes a suitable security foundation but also divides energy consumption and load fairly among virtual servers. It is compared with several previously proposed security strategies, including SC-PSSF, PSSF and DEEAC. The comparisons show that the proposed method offers high performance computing efficiency and consumes less energy in the network.
Abstract: Today, PC-class machines are quite popular in the HPC area, especially for problems that require good cost/performance ratios. One drawback of these machines is their poor memory throughput, and one reason for this is the limited mapping capability of the TLB, a buffer that accelerates virtual memory access. In this report, I show that the mapping capability and the performance can be improved with the multi-granularity TLB feature that some processors provide, and that the new TLB handling routine can be incorporated into the demand paging system of Linux.
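The benefit of multiple page granularities can be illustrated with a simple TLB-reach calculation; the entry counts and page sizes below are illustrative assumptions, not measurements from the report:

```python
def tlb_reach(entries, page_size):
    # Amount of memory addressable without a TLB miss (bytes):
    # each TLB entry maps exactly one page of the given size.
    return entries * page_size

KiB = 1024
MiB = 1024 * KiB

# Base pages only: 64 entries covering 4 KiB each
small = tlb_reach(64, 4 * KiB)

# Multi-granularity: even a few entries mapping 4 MiB pages
# dwarf the base-page reach
large = tlb_reach(8, 4 * MiB)
```

Here 64 base-page entries cover only 256 KiB, while 8 large-page entries cover 32 MiB: mapping hot regions with larger pages is what cuts the miss rate that throttles memory throughput.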
Abstract: A traditional HPC (High Performance Computing) cluster is built on top of physical machines. It is usually not practical to reassign these machines to other tasks, because software installation is time consuming; as a result, the machines are usually dedicated to the cluster. Virtualization technology provides an abstraction layer that allows several different operating systems (with different software packages) to run on top of one physical machine, and cloud computing provides an easy way for the user to manage and interact with the computing resources (here, the virtual machines). In this work, we demonstrate the feasibility of building a cloud-based HPC cluster on top of a set of desktop computers interconnected by Fast Ethernet. Our cluster has several advantages; for instance, deployment is fast: we need only 5 min to deploy a cluster of 30 machines. Several performance benchmarks have also been carried out. As expected, embarrassingly parallel problems show a linear relationship between performance and cluster size.
Funding: This paper is an extended version of "SpotMPI: a framework for auction-based HPC computing using Amazon spot instances", published in the International Symposium on Advances of Distributed Computing and Networking (ADCN 2011). Acknowledgment: This research is supported in part by National Science Foundation grant CNS 0958854 and educational resource grants from Amazon.com.
Abstract: Cloud computing is expanding widely in the world of IT infrastructure, due in part to the cost-saving effect of economies of scale. Fair market conditions can, in theory, provide a healthy environment that reflects the most reasonable costs of computation. While fixed cloud pricing provides an attractive low entry barrier for compute-intensive applications, both the consumer and the supplier of computing resources can see higher returns on their investments by participating in auction-based exchanges. The cloud provider has strong incentives to offer auctioned resources; from the consumer perspective, however, using these resources is a sparsely discussed challenge. This paper reports a methodology and framework designed to address the challenges of running HPC (High Performance Computing) applications on auction-based cloud clusters. The authors focus on HPC applications and describe a method for determining bid-aware checkpointing intervals, extending a theoretical model for determining checkpoint intervals with statistical analysis of pricing histories. The latest developments in the SpotHPC framework, which aims to facilitate the managed execution of real MPI applications in auction-based cloud environments, are also introduced. The authors use their model to simulate a set of algorithms with different computing and communication densities. The results show the complex interactions between optimal bidding strategies and parallel application performance.
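The abstract does not state the checkpoint-interval model in closed form. A common first-order starting point that such models extend is Young's formula, which a bid-aware scheme can feed with a failure rate estimated from the spot-price history; the helper function and the price data below are hypothetical illustrations, not the paper's method:

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's first-order optimal checkpoint interval: sqrt(2 * C * MTBF).

    checkpoint_cost -- time to write one checkpoint (s)
    mtbf            -- mean time between failures / out-bid events (s)
    """
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def estimated_mtbf(price_history, bid, interval):
    """Estimate MTBF from a spot-price history (hypothetical helper).

    price_history -- observed spot prices, one sample per `interval` seconds
    bid           -- our bid price; the instance is lost when price > bid
    """
    losses = sum(1 for p in price_history if p > bid)
    if losses == 0:
        return float("inf")
    return len(price_history) * interval / losses

# Hypothetical data: hourly price samples, a $0.12 bid, 10-minute checkpoints
history = [0.08, 0.09, 0.15, 0.10, 0.08, 0.20, 0.09, 0.11, 0.10, 0.08]
mtbf = estimated_mtbf(history, bid=0.12, interval=3600.0)
t_opt = young_interval(checkpoint_cost=600.0, mtbf=mtbf)
```

A higher bid lowers the loss rate, raising the estimated MTBF and stretching the checkpoint interval; this coupling between bidding strategy and checkpoint frequency is the interaction the simulations explore.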
Abstract: The integration of clusters, grids, clouds, edges and other computing platforms results in the contemporary technology of jungle computing. This novel technique has the capacity to tackle high performance computation systems while managing the usage of all computing platforms at once. Federated learning is a collaborative machine learning approach that requires no centralized training data. The proposed system effectively detects intrusion attacks without human intervention, subsequently detects anomalous deviations in device communication behavior potentially caused by malicious adversaries, and can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect its pattern. Federated learning helps each machine learn the type of attack seen by each device, and this technique paves the way to comprehensive coverage of malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low False Positive Rate (FPR), and that is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%.
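The aggregation step such a system relies on can be sketched as plain federated averaging (FedAvg): each device trains on its private traffic data and only model updates, weighted by local data size, reach the central server. The two-parameter model and the gradient values below are toy placeholders, not the paper's detector:

```python
def local_update(weights, gradients, lr=0.1):
    # One local gradient step on a device's private traffic data
    return [w - lr * g for w, g in zip(weights, gradients)]

def fed_avg(client_weights, client_sizes):
    # Server-side weighted average of client models,
    # proportional to each client's local data size
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

global_w = [0.0, 0.0]
# Two devices compute hypothetical local gradients on their own data
clients = [local_update(global_w, g) for g in ([1.0, -2.0], [3.0, 2.0])]
global_w = fed_avg(clients, client_sizes=[100, 300])
```

Only the averaged vector crosses the network, which is what keeps raw intrusion data on the devices while still letting every participant benefit from attacks observed elsewhere.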