Further improving railway innovation capacity and technological strength is an important goal of the 14th Five-Year Plan for railway scientific and technological innovation. It includes promoting the deep integration of cutting-edge technologies with railway systems, strengthening the research and application of intelligent railway technologies, applying green computing technologies, and advancing the collaborative sharing of transportation big data. High-speed rail systems must process huge volumes of data and heavy workloads under ultra-fast response requirements, so it is highly necessary to improve computational efficiency by applying High Performance Computing (HPC) to high-speed rail systems; HPC is well suited to improving their performance, efficiency, and safety. In this review, we introduce and analyze research on the application of high performance computing in the field of high-speed railways. These HPC applications are grouped into four broad categories: fault diagnosis, network and communication, management systems, and simulation. Challenges and open issues are discussed, and further research directions are suggested.
Computational optical imaging is an interdisciplinary subject integrating optics, mathematics, and information technology. It introduces information processing into optical imaging and combines it with intelligent computing, subverting the imaging mechanism of traditional optical imaging, which relies only on orderly information transmission. To address the high-precision demands that traditional optical imaging places on optical fabrication and alignment, and to overcome its sensitivity to gravity and temperature in use, we establish an optical imaging system model from the perspective of computational optical imaging and study how to design for, and solve, the imaging consistency problem of an optical system under the influence of gravity, thermal effects, stress, and other external disturbances, so as to build a highly robust optical system. The results show that a high-robustness interval of the optical system exists and can effectively reduce the sensitivity of the system to disturbances in each link, thus realizing highly robust optical imaging.
The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This approach can handle high performance computation and manages the use of all computing platforms at the same time. Federated learning is a collaborative machine learning approach that requires no centralized training data. The proposed system detects intrusion attacks without human intervention, identifies anomalous deviations in device communication behavior potentially caused by malicious adversaries, and can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. The updated model of each device is sent to the centralized server in the jungle computing environment to detect attack patterns. Federated learning lets each machine learn the types of attack seen by every device, giving broad coverage of malicious behaviors. The implemented intrusion detection system achieves high accuracy and a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time of one round is less than two seconds, with an accuracy of 96%.
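As an illustration of the workflow sketched above (local training on each device, model updates aggregated at a central server), here is a minimal federated-averaging sketch; the toy logistic-regression detector and the synthetic per-device traffic data are assumptions, not the paper's implementation.

```python
# Minimal federated-averaging sketch (NumPy only). Illustrative stand-in for a
# jungle-computing IDS; the toy model and synthetic "device traffic" are assumed.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One device: a few gradient steps of logistic regression on local traffic features."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted intrusion probability
        w = w - lr * X.T @ (p - y) / len(y)       # gradient-descent step
    return w

def fed_avg(models, sizes):
    """Server: average device models, weighted by local sample counts."""
    return np.average(models, axis=0, weights=np.asarray(sizes, dtype=float))

# Synthetic per-device data: 4 devices, 8 traffic features, binary label (attack or not).
devices = []
for _ in range(4):
    n = int(rng.integers(50, 200))
    X = rng.normal(size=(n, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy "attack" rule
    devices.append((X, y))

w_global = np.zeros(8)
for _ in range(10):                                    # communication rounds
    local_models = [local_update(w_global.copy(), X, y) for X, y in devices]
    w_global = fed_avg(local_models, [len(y) for _, y in devices])
print("global model after 10 rounds:", np.round(w_global, 3))
```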
Integrated computational materials engineering (ICME) integrates multi-scale computational simulations at the macroscopic, mesoscopic, and microscopic levels with key experimental methods across the whole process of Al alloy design and development. It upgrades Al alloy development from the traditional empirical approach to an integrated composition-process-structure-mechanical property approach, greatly accelerating development and reducing cost. This study combines calculation of phase diagram (CALPHAD), finite element calculations, first-principles calculations, and microstructure characterization to predict and regulate the formation and structure of composite precipitates, starting from the composition design of a high-modulus Al alloy, and optimizes the casting process parameters to suppress the formation of micropore defects during casting. The final tensile strength of the alloy reaches 420 MPa and its Young's modulus exceeds 88 GPa, achieving the design goal of a high-strength, high-modulus Al alloy and establishing a new mode for the design and development of high-strength/high-modulus Al alloys.
Evolutionary algorithms (EAs) have been used in high utility itemset mining (HUIM) to address the problem of discovering high utility itemsets (HUIs) in an exponential search space. EAs have good running and mining performance, but they still require huge computational resources and may miss many HUIs. Exploiting the good match between EAs and graphics processing units (GPUs), we propose a parallel genetic algorithm (GA) for HUIM on the GPU platform (PHUI-GA). The improved evolution steps are performed on the central processing unit (CPU), while the compute-intensive steps are offloaded to the GPU for evaluation by multi-threaded processors. Experiments show that the mining performance of PHUI-GA outperforms existing EAs. When mining 90% of the HUIs, PHUI-GA performs up to 188 times better than existing EAs and up to 36 times better than a CPU parallel approach.
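A minimal sketch of the CPU/GPU division of labour described above follows: the GA evolution loop runs on the CPU, while the utility of the whole population is evaluated in one batched, vectorized call that stands in for the GPU kernel. The toy transaction database, utilities, and GA settings are assumptions, not PHUI-GA itself.

```python
# Sketch: GA evolution on the CPU, batched itemset-utility evaluation as the
# "GPU" step (vectorized NumPy here). Not the PHUI-GA code.
import numpy as np

rng = np.random.default_rng(1)
N_ITEMS, N_TRANS, POP, GENS = 12, 60, 30, 40
Q = rng.integers(0, 4, size=(N_TRANS, N_ITEMS))       # purchase quantities (0 = absent)
unit_util = rng.integers(1, 10, size=N_ITEMS)         # external utility per item

def evaluate(pop):
    """Batched utility of every candidate itemset (boolean rows of `pop`);
    on a GPU this whole computation would run as one multi-threaded kernel."""
    contains = ((Q[None, :, :] > 0) | ~pop[:, None, :]).all(axis=2)   # (POP, N_TRANS)
    per_trans = (Q * unit_util)[None, :, :] * pop[:, None, :]         # item contributions
    return (per_trans.sum(axis=2) * contains).sum(axis=1)             # (POP,)

pop = rng.random((POP, N_ITEMS)) < 0.25                # random initial itemsets
for _ in range(GENS):                                  # CPU-side evolution loop
    fit = evaluate(pop)
    parents = pop[np.argsort(-fit)[:POP // 2]]         # truncation selection
    cut = rng.integers(1, N_ITEMS, size=POP // 2)
    kids = np.array([np.concatenate([parents[i, :c], parents[-1 - i, c:]])
                     for i, c in enumerate(cut)])      # one-point crossover
    kids ^= rng.random(kids.shape) < 0.05              # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax(evaluate(pop))]
print("best itemset:", np.flatnonzero(best), "utility:", evaluate(best[None])[0])
```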
As an important branch of information technology, high-performance computing has steadily expanded its application fields and its influence, and it has always been a key technology for meteorology. We used field research and literature review to study the application of high performance computing in China's meteorological department and obtained the following results: 1) Since introducing its first high-performance computer system in 1978, China's meteorological department has gradually built up high-performance computing systems, and these services support operational numerical weather prediction models. 2) The department has consistently adopted relatively advanced high-performance computing technology, the capability of its operational systems has continuously improved, and computing power has become an important indicator of the level of meteorological modernization. 3) High-performance computing technology and meteorological numerical forecasting applications are increasingly integrated and continue to innovate and develop. 4) In the future, high-performance computing resource management will gradually transition from the current local pre-allocation mode to unified local-and-remote scheduling and shared use. In summary, we conclude that the high-performance computing business of the meteorological department has a promising future.
This work started with an in-depth feasibility study and limitation analysis of current models for estimating disease spread and evaluating countermeasures, from which we identify population variability as a crucial factor that has usually been ignored or under-emphasized. Taking HIV/AIDS as the application and validation background, we propose a novel model system, the EEA model system, as a new way to estimate the spread situation, evaluate different countermeasures, and analyze the development of ARV-resistant strains. The model is a series of solvable ordinary differential equation (ODE) models for estimating the spread of HIV/AIDS infection; it requires only one year of data to deduce the situation in any year, and it applies the piecewise constant method to exploit multi-year information at the same time. We simulate the effects of therapy and vaccination, evaluate the differences between them, and determine the smallest proportion of the population that must be vaccinated to defeat HIV/AIDS, highlighting in particular the advantage of vaccination and the deficiency of therapy used alone. We then analyze the development of ARV-resistant strains with the piecewise constant method. Finally, a high performance computing (HPC) platform is applied to simulate the situation over large areas divided into grids, where the speed-up reaches around 4 to 5.5.
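To make the piecewise constant idea concrete, the sketch below integrates a simple ODE spread model whose transmission rate is held constant within each year but changed between years. The SI-type equations and parameter values are illustrative assumptions, not the EEA model system.

```python
# Minimal piecewise-constant-parameter ODE spread model (illustration only).
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000                                  # population size (assumed)
beta_by_year = [0.60, 0.55, 0.48, 0.40, 0.35]  # yearly transmission rates (assumed)
mu = 0.10                                      # yearly removal rate (assumed)

def rhs(t, y):
    S, I = y
    beta = beta_by_year[min(int(t), len(beta_by_year) - 1)]  # piecewise constant in t (years)
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - mu * I]

sol = solve_ivp(rhs, t_span=(0, 5), y0=[N - 500, 500],
                t_eval=np.arange(0, 6), rtol=1e-8)
for year, inf in zip(sol.t, sol.y[1]):
    print(f"year {int(year)}: infected ~ {inf:,.0f}")
```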
For complex and large targets such as naval vessels, RCS computation usually relies on high-frequency approaches. Presenting the geometry modeling and the computational principles of a naval vessel's RCS, this paper emphasizes the key techniques for computing the RCS of naval vessels with a high-frequency approach, taking the analysis of the mast's contribution to the total RCS as an example.
Milling process simulation is an important research area in manufacturing science. To improve simulation precision and extend usability, numerical algorithms are increasingly used in milling modeling, but simulation efficiency decreases as model complexity grows, which limits the application of such methods. Addressing this problem, we study a high-efficiency algorithm for milling process simulation, which is important for its practical application. Parallel computing is widely used to solve large-scale computational problems; its advantages include system flexibility, robustness, high computing efficiency, and a high performance-to-price ratio. With the development of computer networks, the computing resources available on the Internet allow a virtual computing environment with powerful capability to be assembled from microcomputers, reducing the difficulty of building hardware to support parallel computing. This paper investigates how to use network technology and parallel algorithms to improve the efficiency of milling force simulation. To predict milling forces, a simplified local milling force model is used: the end milling cutter is divided into r differential elements along its axial direction, and for a given time the total cutting force is obtained by summing the resultant cutting forces produced by each differential cutter disc. The whole simulation time is divided into segments, the corresponding program segments are sent to microcomputers on the Internet, and their results are combined into the final result. To implement the algorithm, a distributed parallel computing framework is designed in which a web server acts as the controller. Using Java RMI (remote method invocation), the computing processes on the computing servers are invoked by the web server, which runs many control processes to manage them. The code of the simulation algorithm can be sent dynamically to the computing servers, and the milling forces at different times are computed using local computer resources. The results calculated by the computing servers are returned to the web server and assembled into the final result. The framework can be reused by different simulation algorithms. Compared with running on a single machine, the efficiency of the proposed algorithm is higher.
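The sketch below illustrates the two ingredients of this approach: the total force at each time instant obtained by summing differential cutter-disc contributions, and the time span split into segments evaluated on separate workers and then merged. Python's multiprocessing stands in for the Java RMI web-server/computing-server framework, and the cutting coefficients and single-tooth engagement model are simplified assumptions.

```python
# Sketch: per-disc milling-force summation plus time segments distributed to workers.
import numpy as np
from multiprocessing import Pool

R_DISCS, KT, KR = 40, 600.0, 0.3            # discs per cutter; cutting coefficient (N/mm^2), ratio
FZ, AP, HELIX = 0.10, 4.0, np.radians(30)   # feed/tooth (mm), axial depth (mm), helix angle
OMEGA, RADIUS = 2 * np.pi * 50, 5.0         # spindle speed (rad/s), cutter radius (mm)

def force_at(t):
    """Sum tangential/radial force contributions of all differential discs at time t."""
    dz = AP / R_DISCS
    z = (np.arange(R_DISCS) + 0.5) * dz
    phi = OMEGA * t - z * np.tan(HELIX) / RADIUS      # helix lag along the axis
    engaged = np.sin(phi) > 0                         # tooth in cut for 0 < phi < pi (assumed)
    h = FZ * np.sin(phi) * engaged                    # uncut chip thickness per disc
    d_ft = KT * h * dz                                # tangential force of each disc
    d_fr = KR * d_ft                                  # radial force of each disc
    fx = np.sum(-d_ft * np.cos(phi) - d_fr * np.sin(phi))
    fy = np.sum(d_ft * np.sin(phi) - d_fr * np.cos(phi))
    return t, fx, fy

def simulate_segment(times):
    """One 'computing server': evaluate its share of the time instants."""
    return [force_at(t) for t in times]

if __name__ == "__main__":
    times = np.linspace(0.0, 0.05, 501)
    segments = np.array_split(times, 4)                # split the time span into 4 segments
    with Pool(processes=4) as pool:                    # workers play the computing servers
        results = pool.map(simulate_segment, segments)
    merged = [row for seg in results for row in seg]   # web-server role: merge partial results
    t, fx, fy = max(merged, key=lambda r: np.hypot(r[1], r[2]))
    print(f"peak resultant force ~ {np.hypot(fx, fy):.1f} N at t = {t*1000:.2f} ms")
```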
Meteorological high-performance computing resources are the support platform for running numerical models for weather forecasting and climate prediction. A scientific and objective method for evaluating the application of these resources can provide a reference for optimizing the resources in operation and a quantitative basis for future resource construction and planning. In this paper, the concepts of the utility value B and the index compliance rate E of a meteorological high performance computing system are presented, and the evaluation process, evaluation indices, and calculation method for the application benefits of high performance computing resources are introduced.
On the basis of a brief introduction to CAI and some of its distinctive features, this paper sets out the principles for applying CAI in high schools and analyses the implications brought about by its application. This research aims to provide further help for English teachers in high schools, so that students can effectively improve their English language learning.
Low temperature complementary metal oxide semiconductor (CMOS), or cryogenic CMOS, is a promising avenue for the continuation of Moore's law while serving the needs of high performance computing. With temperature as a control "knob" to steepen the subthreshold slope of CMOS devices, the supply voltage can be reduced with no impact on operating speed. With optimal threshold voltage engineering, the device ON current can be further enhanced, translating to higher performance. In this article, experimentally calibrated data are used to tune the threshold voltage and to investigate the power-performance-area of cryogenic CMOS at the device, circuit, and system levels. We also present measurement results and analysis of functional memory chips fabricated in 28 nm bulk CMOS and 22 nm fully depleted silicon on insulator (FDSOI) operating at cryogenic temperature. Finally, the challenges and opportunities in the further development and deployment of such systems are discussed.
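The leverage described above follows from the textbook temperature dependence of the subthreshold swing; the relation below is standard device physics, not a formula quoted from the article, with n the body factor, k Boltzmann's constant, and q the electron charge.

```latex
SS(T) = n\,\frac{kT}{q}\,\ln 10
\quad\Rightarrow\quad
SS \approx 60\,n\ \mathrm{mV/dec}\ \text{at } 300\,\mathrm{K},
\qquad
SS \approx 15\,n\ \mathrm{mV/dec}\ \text{at } 77\,\mathrm{K}.
```

Cooling from 300 K to 77 K thus steepens the slope by roughly a factor of four, which is what allows the supply voltage to be lowered without losing drive current.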
Objective To investigate the image quality, radiation dose, and diagnostic value of low-tube-voltage, high-pitch dual-source computed tomography (DSCT) with sinogram affirmed iterative reconstruction (SAFIRE) for non-enhanced abdominal and pelvic scans. Methods This institutional review board-approved prospective study included 64 patients who gave written informed consent for an additional abdominal and pelvic scan with DSCT between November and December 2012. The patients underwent standard non-enhanced CT scans (protocol 1) [tube voltage 120 kVp / pitch 0.9 / filtered back-projection (FBP) reconstruction] followed by high-pitch non-enhanced CT scans (protocol 2) (100 kVp / 3.0 / SAFIRE). The total scan time, mean CT number, signal-to-noise ratio (SNR), image quality, lesion detectability, and radiation dose were compared between the two protocols. Results The total scan time of protocol 2 was significantly shorter than that of protocol 1 (1.4±0.1 s vs. 7.6±0.6 s, P<0.001). There was no significant difference between protocol 1 and protocol 2 in the mean CT number of any organ (liver, 55.4±6.3 HU vs. 56.1±6.8 HU, P=0.214; pancreas, 43.6±5.9 HU vs. 43.7±5.8 HU, P=0.785; spleen, 47.9±3.9 HU vs. 49.4±4.3 HU, P=0.128; kidney, 32.2±2.3 HU vs. 33.1±2.3 HU, P=0.367; abdominal aorta, 44.8±5.6 HU vs. 45.0±5.5 HU, P=0.499; psoas muscle, 50.7±4.1 HU vs. 50.3±4.5 HU, P=0.279). SNR on protocol 2 images was higher than on protocol 1 images (liver, 5.0±1.2 vs. 4.5±1.1, P<0.001; pancreas, 4.0±1.0 vs. 3.6±0.8, P<0.001; spleen, 4.7±1.0 vs. 4.1±0.9, P<0.001; kidney, 3.1±0.6 vs. 2.8±0.6, P<0.001; abdominal aorta, 4.1±1.0 vs. 3.8±1.0, P<0.001; psoas muscle, 4.5±1.1 vs. 4.3±1.2, P=0.012). The overall image noise of protocol 2 was lower than that of protocol 1 (9.8±3.1 HU vs. 11.1±3.0 HU, P<0.001). Image quality of protocol 2 was good, although lower than that of protocol 1 (4.1±0.7 vs. 4.6±0.5, P<0.001). Protocol 2 depicted 229 of the 234 lesions (97.9%) detected with protocol 1 in the abdomen and pelvis. The radiation dose of protocol 2 was lower than that of protocol 1 (4.4±0.4 mSv vs. 7.3±2.4 mSv, P<0.001), a mean dose reduction of 41.4%. Conclusion High-pitch DSCT with SAFIRE can shorten scan time and reduce radiation dose while preserving image quality in non-enhanced abdominal and pelvic scans.
AIM To determine the sensitivity and specificity of high resolution computed tomography (HRCT) in the diagnosis of otosclerosis. METHODS A systematic literature review was undertaken to include Level I-III studies (Oxford Centre for Evidence-Based Medicine) that utilised HRCT to detect histologically confirmed otosclerosis. Quantitative synthesis was then performed. RESULTS Based on the available level III literature, HRCT has a relatively low sensitivity of 58% (95%CI: 49.4-66.9), a high specificity of 95% (95%CI: 89.9-98.0), and a positive predictive value of 92% (95%CI: 84.1-95.8). HRCT is better at diagnosing the more prevalent fenestral form of otosclerosis but remains vulnerable to inframillimetre, retrofenestral, and dense sclerotic lesions, despite the advent of more advanced CT scanners with improved collimation. CONCLUSION Whilst the diagnosis of otosclerosis remains largely clinical, HRCT remains the gold standard imaging of choice for the middle ear and serves as a useful adjunct for the clinician, helping to delineate the extent of disease and exclude other causes.
The present paper reviews recent developments of the high-order-spectral (HOS) method and its combination with computational fluid dynamics (CFD) for wave-structure interactions. Numerical simulation of wave-structure interaction requires efficiency and accuracy, as well as the ability to compute in open sea states. The HOS method is strong in both generating extreme waves in open seas and converging quickly in simulations, while CFD has advantages in simulating violent wave-structure interactions. This paper offers new ideas for fast and accurate simulations, and outlines future work on innovations in resolving the fine flow field in numerical simulations.
This paper proposes an algorithm for strengthening virtual machine security in cloud computing. An imbalance between load and energy has been one of the drawbacks of older server-hosting methods: if two virtual servers are active on a host and the energy load on that host becomes too high, it appropriates the energy of other (virtual) hosts to stay stable, which typically leads to hardware overflow errors and user dissatisfaction. Cloud-based methods mitigate this problem, but not completely. The proposed algorithm therefore both establishes a suitable security baseline and appropriately divides energy consumption and load balancing among virtual servers. It is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. The comparisons show that the proposed method offers high computing performance and efficiency and consumes less energy in the network.
A numerical investigation of the structure of the vortical flowfield over delta wings at high angles of attack, in longitudinal flow and with a small sideslip angle, is presented. Three-dimensional Navier-Stokes simulations were carried out to predict the complex leeward-side flowfield characteristics, which are dominated by the breakdown of the leading-edge vortices. Methods for quantitatively analyzing the flowfield structure from the computational results are given. In the region before vortex breakdown, the vortex axes are approximately straight lines. As the angle of attack increases, the vortex axes move closer to the root chord and farther away from the wing surface. Along the vortex axes, where adverse pressure gradients occur, the axial velocity decreases, that is, A is negative, so the vortex is unstable and may break down. The breakdown causes lateral-motion instability of the delta wing, and the lateral moment diverges after a small perturbation at high angles of attack. However, once a critical angle of attack is reached and the vortices break down completely at the wing apex, the instability resulting from vortex breakdown disappears.
With the rapid development of high-rise buildings and long-span structures in recent years, high performance computation (HPC) is becoming more and more important, and sometimes crucial, for the design and construction of complex building structures. To satisfy the engineering requirements of HPC, a parallel FEA computing kernel designed specifically for the analysis of complex building structures is presented and illustrated in this paper. The kernel is based on the Intel Math Kernel Library (MKL) and coded in FORTRAN 2008, a parallel programming language. To improve the capability and efficiency of the kernel, the parallel features of modern FORTRAN, such as elemental procedures and do concurrent, are applied extensively, and the well-known PARDISO solver in MKL is called to solve the large sparse systems of linear equations. The ultimate objective of the kernel is to enable a personal computer to analyze large building structures with up to ten million degrees of freedom (DOFs). So far, linear static analysis and dynamic analysis have been implemented, while nonlinear analysis, including geometric and material nonlinearity, is not yet finished. The numerical examples in this paper therefore concentrate on demonstrating the validity and efficiency of the linear and modal analyses of large FE models, leaving verification of the nonlinear analysis capabilities aside.
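As a rough illustration of the kernel's central step, the sketch below assembles a sparse stiffness matrix and solves the static system K u = f, with SciPy's sparse direct solver standing in for MKL PARDISO. The 1D bar model, material properties, and load are assumptions, not the paper's structural models.

```python
# Minimal sparse FE linear-static solve: assemble K, apply a support, solve K u = f.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N_ELEM = 100_000                 # two-node bar elements (stand-in for a large FE model)
E, A, L = 210e9, 0.01, 100.0     # Young's modulus (Pa), section area (m^2), total length (m)
k_e = E * A / (L / N_ELEM)       # axial stiffness of one element

# Assemble the global stiffness matrix in COO form: each element adds
# k_e * [[1, -1], [-1, 1]] to its two node DOFs (duplicates are summed on conversion).
i = np.arange(N_ELEM)
rows = np.concatenate([i, i, i + 1, i + 1])
cols = np.concatenate([i, i + 1, i, i + 1])
vals = np.concatenate([np.full(N_ELEM, k_e), np.full(N_ELEM, -k_e),
                       np.full(N_ELEM, -k_e), np.full(N_ELEM, k_e)])
K = sp.coo_matrix((vals, (rows, cols)), shape=(N_ELEM + 1, N_ELEM + 1)).tocsr()

f = np.zeros(N_ELEM + 1)
f[-1] = 1.0e6                    # 1 MN axial load at the free end

# Fix node 0 by eliminating its row/column, then solve the reduced system.
u = np.zeros(N_ELEM + 1)
u[1:] = spla.spsolve(K[1:, 1:], f[1:])
print(f"tip displacement: {u[-1]:.6e} m (analytical: {1.0e6 * L / (E * A):.6e} m)")
```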
Mineral dissemination and pore space distribution in ore particles are important features that influence heap leaching performance. To quantify the mineral dissemination and pore space distribution of an ore particle, a cylindrical copper oxide ore sample (Φ4.6 mm × 5.6 mm) was scanned using high-resolution X-ray computed tomography (HRXCT), a nondestructive imaging technology, at a spatial resolution of 4.85 μm. Combined with three-dimensional (3D) image analysis techniques, the main mineral phases and the pore space were segmented and the volume fraction of each phase was calculated. In addition, the mass fraction of each mineral phase was estimated and the result was validated against that obtained using traditional techniques. Furthermore, the pore phase features, including the pore size distribution, pore surface area, pore fractal dimension, pore centerline, and pore connectivity, were investigated quantitatively. The pore space analysis indicates that the pore size distribution closely fits a log-normal distribution and that the pore space morphology is complicated, with a large surface area and low connectivity. This study demonstrates that the combination of HRXCT and 3D image analysis is an effective tool for acquiring 3D mineralogical and pore structure data.
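For readers who want to reproduce the log-normal check on their own segmented pore data, a minimal sketch follows; the synthetic pore diameters and the SciPy-based fit are illustrative assumptions, not the study's workflow.

```python
# Fit a log-normal distribution to pore (equivalent) diameters and test the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pore_diam_um = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=5000)  # synthetic pores (μm)

# Fit with the location parameter pinned at zero, as is usual for size data.
shape, loc, scale = stats.lognorm.fit(pore_diam_um, floc=0)
ks = stats.kstest(pore_diam_um, 'lognorm', args=(shape, loc, scale))

print(f"fitted median ~ {scale:.1f} μm, log-sigma ~ {shape:.2f}")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```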