To efficiently complete a complex computation task, the task should be decomposed into sub-tasks that run in parallel in edge computing. A Wireless Sensor Network (WSN) is a typical application of parallel computation. To achieve highly reliable parallel computation in a WSN, the network's lifetime needs to be extended; a proper task allocation strategy is therefore needed to reduce energy consumption and balance the load of the network. This paper proposes a task model and a cluster-based WSN model in edge computing. In our model, different tasks require different types of resources and different sensors provide different types of resources, so the model is heterogeneous, which makes it more practical. We then propose a task allocation algorithm that combines the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). The algorithm concentrates on energy conservation and load balancing so that the lifetime of the network can be extended. Experimental results show the algorithm's effectiveness and its advantages in energy conservation and load balancing.
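The abstract above does not give the hybrid's details, but the task-allocation objective it optimizes can be sketched. The following is a minimal GA-only illustration (the paper's actual method also incorporates ACO); the energy matrix, fitness weight, and GA parameters are all hypothetical:

```python
import random

# Hypothetical instance: energy cost of running task t on sensor s.
ENERGY = [[3, 5, 2, 6], [4, 2, 7, 3], [6, 4, 3, 5], [2, 6, 4, 4], [5, 3, 6, 2]]
N_TASKS, N_SENSORS = len(ENERGY), len(ENERGY[0])

def fitness(assign):
    """Lower is better: total energy plus a load-imbalance penalty."""
    loads = [0.0] * N_SENSORS
    energy = 0.0
    for t, s in enumerate(assign):
        energy += ENERGY[t][s]
        loads[s] += ENERGY[t][s]
    imbalance = max(loads) - min(loads)
    return energy + 2.0 * imbalance  # penalty weight chosen for illustration

def genetic_allocate(pop_size=30, generations=200, p_mut=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_SENSORS) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]            # selection (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_TASKS)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:            # mutation
                child[rng.randrange(N_TASKS)] = rng.randrange(N_SENSORS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = genetic_allocate()
print(best, fitness(best))
```

An ACO stage could, for example, use the GA's best assignments to initialize pheromone trails; that coupling is not shown here.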
Further improving railway innovation capacity and technological strength is an important goal of the 14th Five-Year Plan for railway scientific and technological innovation. It includes promoting the deep integration of cutting-edge technologies with railway systems, strengthening the research and application of intelligent railway technologies, applying green computing technologies, and advancing the collaborative sharing of transportation big data. High-speed rail systems must process huge amounts of data and heavy workloads under ultra-fast response requirements; it is therefore necessary to improve computation efficiency by applying High Performance Computing (HPC) to high-speed rail systems. HPC is a promising approach for improving the performance, efficiency, and safety of high-speed rail systems. In this review, we introduce and analyze research on applying HPC in the field of high-speed railways. These HPC applications are grouped into four broad categories: fault diagnosis, network and communication, management systems, and simulation. Moreover, challenges and open issues are discussed and further directions are suggested.
Mobile Edge Computing (MEC) promises to alleviate the computation and storage burdens on terminals in wireless networks. The huge energy consumption of MEC servers challenges the establishment of smart cities and the service time of servers powered by rechargeable batteries. In addition, the Orthogonal Multiple Access (OMA) technique cannot fully and efficiently utilize limited spectrum resources. Therefore, Non-Orthogonal Multiple Access (NOMA)-based energy-efficient task scheduling among MEC servers for delay-constrained mobile applications is important, especially in highly dynamic vehicular edge computing networks. The various movement patterns of vehicles lead to unbalanced offloading requirements and different load pressures on MEC servers. Self-Imitation Learning (SIL)-based Deep Reinforcement Learning (DRL) has emerged as a promising machine learning technique for breaking through obstacles in various research fields, especially in time-varying networks. In this paper, we first introduce related MEC technologies in vehicular networks. Then, we propose a DRL-based energy-efficient approach for task scheduling in vehicular edge computing networks that both guarantees the task latency requirement for multiple users and minimizes the total energy consumption of MEC servers. Numerical results demonstrate that the proposed algorithm outperforms other methods.
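The scheduling objective described above — minimize server energy while guaranteeing task latency — can be illustrated without the learning machinery. A toy sketch under assumed per-server energy and latency figures (the paper's actual approach learns this decision with SIL-based DRL rather than enumerating it):

```python
# Hypothetical MEC servers: energy (J) and latency (ms) per unit of task size.
SERVERS = [
    {"name": "mec-a", "energy": 2.0, "latency": 8.0},
    {"name": "mec-b", "energy": 1.2, "latency": 15.0},
    {"name": "mec-c", "energy": 3.5, "latency": 4.0},
]

def schedule(task_size, deadline_ms):
    """Pick the feasible server with minimum energy; None if no server
    can meet the task's deadline."""
    feasible = [s for s in SERVERS if s["latency"] * task_size <= deadline_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda s: s["energy"] * task_size)

print(schedule(2.0, 20.0)["name"])  # mec-a: meets the 20 ms deadline at lowest energy
```

In the vehicular setting the energy/latency figures change over time, which is precisely why a learned policy is preferred over this static rule.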
Computational optical imaging is an interdisciplinary subject integrating optics, mathematics, and information technology. It introduces information processing into optical imaging and combines it with intelligent computing, subverting the mechanism of traditional optical imaging, which relies only on orderly information transmission. To meet the high-precision requirements of traditional optical imaging for optical processing and adjustment, and to address its sensitivity to gravity and temperature in use, we establish an optical imaging system model from the perspective of computational optical imaging and study how to design and solve the imaging consistency problem of an optical system under the influence of gravity, thermal effects, stress, and other external environmental factors, so as to build a highly robust optical system. The results show that a high-robustness interval of the optical system exists and can effectively reduce the system's sensitivity to disturbances at each link, thus realizing highly robust optical imaging.
The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This novel technique has the aptitude to tackle high-performance computation systems, and it manages the usage of all computing platforms at once. Federated learning is a collaborative machine learning approach without centralized training data. The proposed system effectively detects intrusion attacks without human intervention and subsequently detects anomalous deviations in device communication behavior, potentially caused by malicious adversaries, and it can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while it performs attacks on the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect its pattern. Federated learning greatly helps the machine study the type of attack seen by each device, and this technique paves a way to complete dominion over all malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%.
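The abstract does not specify the aggregation rule, but a standard FedAvg-style step — each device trains its intrusion detector locally and the central server in the jungle environment averages the updates, weighted by local data size — might look like this sketch (parameter vectors and sample counts are hypothetical):

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_weights: list of parameter vectors (lists of floats), one per device
    client_sizes:   number of local samples on each device
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += w[i] * n / total
    return global_w

# Three devices contribute local intrusion-detector parameters.
print(fed_avg([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [10, 10, 20]))
# → [0.75, 0.75]
```

No raw traffic data leaves a device — only the model parameters — which is the privacy property the abstract relies on.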
Integrated computational materials engineering (ICME) integrates multi-scale computational simulations (macroscopic, mesoscopic, and microscopic) and key experimental methods into the whole process of Al alloy design and development. This enables Al alloy design and development to move from the traditional empirical approach to the integration of composition-process-structure-mechanical property, thus greatly accelerating development and reducing its cost. This study combines calculation of phase diagrams (CALPHAD), finite element calculations, first-principles calculations, and microstructure characterization methods to predict and regulate the formation and structure of composite precipitates, starting from the design of high-modulus Al alloy compositions, and optimizes the casting process parameters to inhibit the formation of micropore defects during casting. The final tensile strength of the Al alloys reaches 420 MPa and Young's modulus reaches more than 88 GPa, which achieves the design goal of high-strength, high-modulus Al alloys and establishes a new mode for the design and development of high strength/modulus Al alloys.
As an important branch of information technology, high-performance computing has steadily expanded both its application fields and its influence, and it has always been a key application area in meteorology. We used field research and literature review methods to study the application of high-performance computing in China's meteorological department, and obtained the following results: 1) The Chinese meteorological department has gradually built up high-performance computer systems since establishing its first one in 1978; high-performance computing services can support operational numerical weather prediction models. 2) The Chinese meteorological department has consistently used relatively advanced high-performance computing technology, and the capability of its operational systems has been continuously improved; computing power has become an important symbol of the level of meteorological modernization. 3) High-performance computing technology and meteorological numerical forecasting applications are increasingly integrated, and they continue to innovate and develop. 4) In the future, high-performance computing resource management will gradually transition from the current local pre-allocation mode to unified local-remote scheduling and shared use. In summary, we conclude that the high-performance computing operations of the meteorological department will usher in a better tomorrow.
This work started with an in-depth feasibility study and limitation analysis of current disease-spread estimation and countermeasure evaluation models; we then identified that population variability is a crucial factor which has always been ignored or under-emphasized. Taking HIV/AIDS as the application and validation background, we propose a novel algorithmic model system, the EEA model system: a new way to estimate the spread situation, evaluate different countermeasures, and analyze the development of ARV-resistant disease strains. The model is a series of solvable ordinary differential equation (ODE) models that estimate the spread of HIV/AIDS infections; they not only require only one year's data to deduce the situation in any year, but also apply the piecewise-constant method to employ multi-year information at the same time. We simulate the effects of therapy and vaccine, evaluate the difference between them, and derive the smallest proportion of vaccination in the population needed to defeat HIV/AIDS, highlighting in particular the advantage of vaccination and the deficiency of using therapy alone. We then analyze the development of ARV-resistant disease strains by the piecewise-constant method. Last but not least, a high-performance computing (HPC) platform is applied to simulate the situation over variable large-scale areas divided by grids, with an acceleration rate of around 4 to 5.5.
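The EEA system's ODEs are not reproduced in the abstract, but the piecewise-constant idea it describes — one constant parameter set per time segment — can be sketched with a minimal susceptible-infected model; the transmission and removal rates here are illustrative, not the paper's:

```python
def simulate_si(s0, i0, betas, years_per_piece=1, dt=0.01):
    """Euler-integrate a minimal S-I spread model with piecewise-constant
    transmission rates, one rate per time segment (one year each here).

    dS/dt = -beta * S * I,   dI/dt = beta * S * I - mu * I
    """
    mu = 0.1                                 # assumed removal rate
    s, i = s0, i0
    for beta in betas:                       # one constant beta per segment
        steps = int(years_per_piece / dt)
        for _ in range(steps):
            new_inf = beta * s * i
            s += dt * (-new_inf)
            i += dt * (new_inf - mu * i)
    return s, i

# A countermeasure such as therapy would be modeled by lowering beta
# in later segments, exactly as the piecewise-constant method allows.
s, i = simulate_si(0.99, 0.01, betas=[0.5, 0.3, 0.2])
print(round(s, 3), round(i, 3))
```

The paper's actual models track more compartments (e.g. ARV-resistant strains); this sketch only shows the segment-by-segment integration pattern.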
For complex and large targets like naval vessels, RCS computation usually uses a high-frequency approach. Presenting the geometry modeling and the computation principle for a naval vessel's RCS, this paper emphasizes the key techniques of computing the naval vessel's RCS with the high-frequency approach, taking the analysis of a mast's effect on the total RCS as an example.
Milling process simulation is one of the important research areas in manufacturing science. To improve the precision of simulation and extend its usability, numerical algorithms are increasingly used in milling modeling, but simulation efficiency decreases as model complexity grows, which limits the method's application. Aimed at this problem, a high-efficiency algorithm for milling process simulation is studied; it is important for the application of milling process simulation. Parallel computing is widely used to solve large-scale computation problems; its advantages include system flexibility, robustness, high computing capability, and a high performance-to-price ratio. With the development of computer networks, a virtual computing environment with powerful computing capability can be assembled from microcomputers by utilizing computing resources on the Internet, reducing the difficulty of building the hardware environment needed to support parallel computing. This paper investigates how to use network technology and a parallel algorithm to improve the efficiency of milling force simulation. To predict milling forces, a simplified local milling force model is used: the end milling cutter is assumed to be divided into r differential elements along the axial direction of the cutter, and for a given time, the total cutting force is obtained by summing the resultant cutting forces produced by each differential cutter disc.
The whole simulation time is divided into segments; these program segments are sent to microcomputers on the Internet, their results are collected, and all the segment results are composed into the final result. To implement the algorithm, a distributed parallel computing framework is designed. In the framework, a web server plays the role of controller. Using Java RMI (Remote Method Invocation), the computing processes on the computing servers are called by the web server; many control processes in the web server manage the computing servers. The code of the simulation algorithm can be dynamically sent to the computing servers, and milling forces at different times are computed using each local computer's resources. The results calculated by every computing server are sent back to the web server and composed into the final result. The framework can be used by different simulation algorithms. Compared with running on a single machine, the efficiency of the provided algorithm is higher.
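As a rough illustration of both ideas — summing differential cutter-disc contributions at each instant, and splitting the simulated time span into independently computable segments — consider this sketch. The force coefficients and disc lag-angle model are invented for illustration; a real deployment would ship each segment to a remote computing server (e.g. via Java RMI) rather than run them locally:

```python
import math

R_DISCS = 8              # cutter divided into r differential discs (assumed)
KT = 600.0               # hypothetical tangential cutting coefficient (N/mm^2)
FEED, DEPTH = 0.1, 2.0   # feed per tooth (mm), axial depth of cut (mm)

def force_at(t):
    """Tangential-force magnitude at time t: sum the contribution of each
    differential cutter disc (simplified local milling-force model)."""
    dz = DEPTH / R_DISCS
    total = 0.0
    for j in range(R_DISCS):
        phi = 2 * math.pi * t + j * 0.05        # disc lag angle (illustrative)
        chip = max(FEED * math.sin(phi), 0.0)   # instantaneous chip thickness
        total += KT * chip * dz
    return total

def simulate(t0, t1, n):
    """One time segment: in the framework above, this unit of work is what
    the web server would ship to a remote computing server."""
    return [force_at(t0 + k * (t1 - t0) / n) for k in range(n)]

# Split the whole simulation into segments and compose the results,
# exactly as independent machines would.
segments = [simulate(0.0, 0.5, 50), simulate(0.5, 1.0, 50)]
forces = segments[0] + segments[1]
print(len(forces), round(max(forces), 1))
```

Because each time segment depends only on its own interval, composing the per-segment results reproduces the single-machine computation.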
The meteorological high-performance computing resource is the support platform for running weather forecast and climate prediction numerical models. A scientific and objective method to evaluate the use of meteorological high-performance computing resources can not only provide a reference for optimizing active resources, but also provide a quantitative basis for future resource construction and planning. In this paper, the concepts of the utility value B and the index compliance rate E of a meteorological high-performance computing system are presented, and the evaluation process, evaluation indices, and calculation method for the application benefits of high-performance computing resources are introduced.
On the basis of a brief introduction to CAI and some of its distinctive features, this paper sets out the principles for applying CAI in high schools and analyses the implications of its application. This research aims to provide further help for English teachers in high schools, so that students can effectively improve their level of English language learning.
With the booming development of fifth-generation network technology and the Internet of Things, the number of end-user devices (EDs) and diverse applications is surging, resulting in massive data generated at the edge of networks. To process these data efficiently, the innovative mobile edge computing (MEC) framework has emerged to guarantee low latency and enable efficient computing close to the user traffic. Recently, federated learning (FL) has demonstrated empirical success in edge computing due to its privacy-preserving advantages. It has thus become a promising solution for analyzing and processing distributed data on EDs in various machine learning tasks, which are the major workloads in MEC. Unfortunately, EDs are typically powered by batteries with limited capacity, which brings challenges when performing energy-intensive FL tasks. To address these challenges, many strategies have been proposed to save energy in FL. Given the absence of a survey that thoroughly summarizes and classifies these strategies, in this paper we provide a comprehensive survey of recent advances in energy-efficient strategies for FL in MEC. Specifically, we first introduce the system model and the energy consumption models in FL, in terms of computation and communication. Then we analyze the challenges of improving energy efficiency and summarize energy-efficient strategies from three perspectives: learning-based, resource allocation, and client selection. We analyze these strategies in detail, comparing their advantages and disadvantages, and visually illustrate their impact on FL performance with experimental results. Finally, several potential future research directions for energy-efficient FL are discussed.
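The computation and communication energy models mentioned above are commonly written, in the FL-energy literature, as a CMOS-style CPU term plus a Shannon-rate transmission term. A sketch with illustrative parameter values (not taken from any particular surveyed work):

```python
import math

def round_energy(cycles_per_sample, n_samples, freq_hz, kappa,
                 tx_power_w, model_bits, bandwidth_hz, channel_gain, noise_w):
    """Per-round energy of one device in FL: local computation plus uplink
    communication, using a commonly assumed CMOS/Shannon-style model."""
    # Computation: kappa * (total CPU cycles) * f^2
    e_comp = kappa * cycles_per_sample * n_samples * freq_hz ** 2
    # Communication: transmit power * transmit time, rate from Shannon's formula
    rate = bandwidth_hz * math.log2(1 + tx_power_w * channel_gain / noise_w)
    e_comm = tx_power_w * model_bits / rate
    return e_comp, e_comm

e_cmp, e_com = round_energy(
    cycles_per_sample=1e4, n_samples=500, freq_hz=1e9, kappa=1e-28,
    tx_power_w=0.2, model_bits=1e6, bandwidth_hz=1e6,
    channel_gain=1e-7, noise_w=1e-9)
print(e_cmp, e_com)   # lowering freq_hz cuts compute energy quadratically
```

Strategies surveyed in such papers (CPU frequency scaling, client selection, model compression) each act on one of these terms: frequency scaling on the f^2 factor, compression on model_bits, and so on.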
Objective To investigate the image quality, radiation dose, and diagnostic value of low-tube-voltage high-pitch dual-source computed tomography (DSCT) with sinogram-affirmed iterative reconstruction (SAFIRE) for non-enhanced abdominal and pelvic scans. Methods This institutional-review-board-approved prospective study included 64 patients who gave written informed consent for an additional abdominal and pelvic scan with DSCT in the period from November to December 2012. The patients underwent standard non-enhanced CT scans (protocol 1) [tube voltage of 120 kVp / pitch of 0.9 / filtered back-projection (FBP) reconstruction] followed by high-pitch non-enhanced CT scans (protocol 2) (100 kVp / 3.0 / SAFIRE). The total scan time, mean CT number, signal-to-noise ratio (SNR), image quality, lesion detectability, and radiation dose were compared between the two protocols. Results The total scan time of protocol 2 was significantly shorter than that of protocol 1 (1.4±0.1 seconds vs. 7.6±0.6 seconds, P<0.001). There was no significant difference between protocol 1 and protocol 2 in mean CT number for any organ (liver, 55.4±6.3 HU vs. 56.1±6.8 HU, P=0.214; pancreas, 43.6±5.9 HU vs. 43.7±5.8 HU, P=0.785; spleen, 47.9±3.9 HU vs. 49.4±4.3 HU, P=0.128; kidney, 32.2±2.3 HU vs. 33.1±2.3 HU, P=0.367; abdominal aorta, 44.8±5.6 HU vs. 45.0±5.5 HU, P=0.499; psoas muscle, 50.7±4.1 HU vs. 50.3±4.5 HU, P=0.279). SNR on images of protocol 2 was higher than that of protocol 1 (liver, 5.0±1.2 vs. 4.5±1.1, P<0.001; pancreas, 4.0±1.0 vs. 3.6±0.8, P<0.001; spleen, 4.7±1.0 vs. 4.1±0.9, P<0.001; kidney, 3.1±0.6 vs. 2.8±0.6, P<0.001; abdominal aorta, 4.1±1.0 vs. 3.8±1.0, P<0.001; psoas muscle, 4.5±1.1 vs. 4.3±1.2, P=0.012). The overall image noise of protocol 2 was lower than that of protocol 1 (9.8±3.1 HU vs. 11.1±3.0 HU, P<0.001). Image quality of protocol 2 was good but lower than that of protocol 1 (4.1±0.7 vs. 4.6±0.5, P<0.001). Protocol 2 detected 229 of the 234 lesions (97.9%) found with protocol 1 in the abdomen and pelvis. The radiation dose of protocol 2 was lower than that of protocol 1 (4.4±0.4 mSv vs. 7.3±2.4 mSv, P<0.001), a mean dose reduction of 41.4%. Conclusion High-pitch DSCT with SAFIRE can shorten scan time and reduce radiation dose while preserving image quality in non-enhanced abdominal and pelvic scans.
AIM To determine the sensitivity and specificity of high-resolution computed tomography (HRCT) in the diagnosis of otosclerosis. METHODS A systematic literature review was undertaken to include Level I-III studies (Oxford Centre for Evidence-Based Medicine) that utilised HRCT to detect histologically confirmed otosclerosis. Quantitative synthesis was then performed. RESULTS Based on the available Level III literature, HRCT has a relatively low sensitivity of 58% (95%CI: 49.4-66.9), a high specificity of 95% (95%CI: 89.9-98.0), and a positive predictive value of 92% (95%CI: 84.1-95.8). HRCT is better at diagnosing the more prevalent fenestral form of otosclerosis but remains vulnerable to inframillimetre, retrofenestral, and dense sclerotic lesions, despite the advent of more advanced CT scanners with improved collimation. CONCLUSION Whilst the diagnosis of otosclerosis remains largely clinical, HRCT remains the gold-standard imaging of choice for the middle ear and serves as a useful adjunct to the clinician, helping to delineate the extent of disease and exclude other causes.
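The headline figures can be checked against the standard 2x2 confusion-matrix definitions. The counts below are illustrative choices that happen to reproduce the reported rates; they are not the review's pooled data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value (PPV)
    from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all healthy
    ppv = tp / (tp + fp)           # diseased among all positive scans
    return sensitivity, specificity, ppv

# Illustrative counts only (not the pooled data from the review):
sens, spec, ppv = diagnostic_metrics(tp=58, fp=5, fn=42, tn=95)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} ppv={ppv:.2f}")
# sensitivity=0.58 specificity=0.95 ppv=0.92
```

Note that PPV, unlike sensitivity and specificity, depends on disease prevalence in the studied population, which is why pooled Level III data must be interpreted with care.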
The present paper reviews the recent developments of the high-order spectral (HOS) method and its combination with computational fluid dynamics (CFD) methods for wave-structure interactions. As numerical simulations of wave-structure interaction require efficiency and accuracy, as well as the ability to calculate in open sea states, the HOS method has its strength in both generating extreme waves in open seas and fast convergence in simulations, while the CFD method has its advantages in simulating violent wave-structure interactions. This paper provides new thoughts for fast and accurate simulations, as well as future work on innovations in the fine fluid field of numerical simulations.
This paper describes the model speed and model input/output (I/O) efficiency of the high-resolution atmospheric general circulation model FAMIL (Finite-volume Atmospheric Model of IAP/LASG) on the Tianhe-1A supercomputer platform at the National Supercomputer Center in Tianjin, China. A series of three-model-day simulations were carried out with the standard Aqua Planet Experiment (APE) configured within FAMIL to obtain the time stamps for calculating model speed, simulation cost, and model I/O efficiency. The results demonstrate that FAMIL has remarkable scalability below 3456 and 6144 cores, and the lowest simulation costs are at 1536 and 3456 cores for the 12.5 km and 6.25 km resolutions, respectively. Furthermore, FAMIL has excellent I/O scalability, with an efficiency of more than 80% on 6 I/Os and more than 99% on 1536 I/Os.
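Parallel (and I/O) efficiency in such scaling studies is conventionally computed as speedup divided by the core-count ratio relative to a baseline run. A sketch with hypothetical wall-clock times (the abstract's own timings are not reproduced here):

```python
def parallel_efficiency(t_base, cores_base, t_n, cores_n):
    """Scaling efficiency relative to a baseline: speedup / core-count ratio.
    1.0 means perfect (linear) scaling."""
    speedup = t_base / t_n
    return speedup / (cores_n / cores_base)

# Hypothetical wall-clock times for a fixed three-model-day run:
e = parallel_efficiency(t_base=100.0, cores_base=1536, t_n=30.0, cores_n=6144)
print(round(e, 2))  # 0.83 -> 83% efficiency at 4x the cores
```

The "lowest simulation cost" core counts reported above are where cost (core-hours per simulated day) bottoms out, i.e. where falling efficiency starts to outweigh added parallelism.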
The core filling process of a cast high-speed steel roll was simulated, with ductile iron used as the core material. The influences of filling parameters, such as core filling time and core filling temperature, on the filling process were investigated, and optimal core filling parameters were determined from the simulated results. The predicted temperature fields show that the temperature at the roll shoulder is the lowest during the core filling process, which usually causes binding defects there; a method for solving this problem is presented.
This paper proposes an algorithm for increasing virtual machine security in cloud computing. An imbalance between load and energy has been one of the disadvantages of older methods of providing servers and hosting: if two virtual servers are active on a host and the energy load on that host grows, it allocates the energy of other (virtual) hosts to itself to stay steady, which usually leads to hardware overflow errors and user dissatisfaction. Cloud-based methods reduce this problem but do not remove it entirely; we therefore provide an algorithm that not only implements a suitable security background but also suitably divides energy consumption and load balancing among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. Comparisons show that the proposed method offers high-performance computing, efficiency, and lower energy consumption in the network.
Funding: supported by the Postdoctoral Science Foundation of China (No. 2021M702441) and the National Natural Science Foundation of China (No. 61871283).
Funding: supported in part by the Talent Fund of Beijing Jiaotong University (2023XKRC017) and in part by the Research and Development Project of China State Railway Group Co., Ltd. (P2022Z003).
Funding: supported in part by the National Natural Science Foundation of China under Grant 61971084 and Grant 62001073, in part by the Natural Science Foundation of Chongqing under Grant cstc2019jcyj-msxmX0208, and in part by the open research fund of the National Mobile Communications Research Laboratory, Southeast University, under Grant 2020D05.
文摘Mobile Edge Computing(MEC)is promising to alleviate the computation and storage burdens for terminals in wireless networks.The huge energy consumption of MEC servers challenges the establishment of smart cities and their service time powered by rechargeable batteries.In addition,Orthogonal Multiple Access(OMA)technique cannot utilize limited spectrum resources fully and efficiently.Therefore,Non-Orthogonal Multiple Access(NOMA)-based energy-efficient task scheduling among MEC servers for delay-constraint mobile applications is important,especially in highly-dynamic vehicular edge computing networks.The various movement patterns of vehicles lead to unbalanced offloading requirements and different load pressure for MEC servers.Self-Imitation Learning(SIL)-based Deep Reinforcement Learning(DRL)has emerged as a promising machine learning technique to break through obstacles in various research fields,especially in time-varying networks.In this paper,we first introduce related MEC technologies in vehicular networks.Then,we propose an energy-efficient approach for task scheduling in vehicular edge computing networks based on DRL,with the purpose of both guaranteeing the task latency requirement for multiple users and minimizing total energy consumption of MEC servers.Numerical results demonstrate that the proposed algorithm outperforms other methods.
文摘Computational optical imaging is an interdisciplinary subject integrating optics, mathematics, and information technology. It introduces information processing into optical imaging and combines it with intelligent computing, subverting the imaging mechanism of traditional optical imaging which only relies on orderly information transmission. To meet the high-precision requirements of traditional optical imaging for optical processing and adjustment, as well as to solve its problems of being sensitive to gravity and temperature in use, we establish an optical imaging system model from the perspective of computational optical imaging and studies how to design and solve the imaging consistency problem of optical system under the influence of gravity, thermal effect, stress, and other external environment to build a high robustness optical system. The results show that the high robustness interval of the optical system exists and can effectively reduce the sensitivity of the optical system to the disturbance of each link, thus realizing the high robustness of optical imaging.
Abstract: The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This novel technique has the aptitude to tackle high-performance computation and manages the usage of all computing platforms at once. Federated learning is a collaborative machine learning approach that works without centralized training data. The proposed system effectively detects intrusion attacks without human intervention and subsequently detects anomalous deviations in device communication behavior, potentially caused by malicious adversaries; it can also cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect its pattern. Federated learning greatly helps each machine study the type of attack seen at each device, and this technique paves the way to complete dominion over all malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%.
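The aggregation step this abstract relies on (clients train locally, a central server merges model updates) is commonly implemented as size-weighted federated averaging. This minimal sketch assumes models are plain weight vectors and is not the paper's implementation:

```python
def federated_average(client_weights, client_sizes):
    """Weight-vector FedAvg: average client models, weighted by local data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In the jungle-computing setting, each device would send only its updated weights (never its raw traffic data) to the central server for this averaging step.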
Funding: supported by the National Natural Science Foundation of China (No. 52073030).
Abstract: Integrated computational materials engineering (ICME) integrates multi-scale computational simulations at the macroscopic, mesoscopic, and microscopic levels with key experimental methods across the whole process of Al alloy design and development. This upgrades Al alloy design and development from the traditional empirical approach to an integrated composition-process-structure-mechanical property approach, greatly accelerating development and reducing its cost. This study combines calculation of phase diagrams (CALPHAD), finite element calculations, first-principles calculations, and microstructure characterization methods to predict and regulate the formation and structure of composite precipitates, starting from the design of high-modulus Al alloy compositions, and optimizes the casting process parameters to inhibit the formation of micropore defects during casting. The final tensile strength of the Al alloys reaches 420 MPa and Young's modulus exceeds 88 GPa, which achieves the design goal of high-strength, high-modulus Al alloys and establishes a new mode for the design and development of such alloys.
Abstract: As an important branch of information technology, high-performance computing has expanded its application fields and its influence continues to grow. High-performance computing has always been a key application area in meteorology. We used field research and literature review methods to study the application of high-performance computing in China's meteorological department, and obtained the following results: 1) The China Meteorological Department has been building high-performance computer systems since establishing its first one in 1978; these high-performance computing services support operational numerical weather prediction models. 2) The Chinese meteorological department has consistently used relatively advanced high-performance computing technology, and its operational system capability has been continuously improved; computing power has become an important symbol of the level of meteorological modernization. 3) High-performance computing technology and meteorological numerical forecasting applications are increasingly integrated, and they continue to innovate and develop together. 4) In the future, high-performance computing resource management will gradually transition from the current local pre-allocation mode to unified local and remote scheduling and shared use. In summary, we conclude that the high-performance computing business of the meteorological department will usher in a better tomorrow.
Abstract: This work started with an in-depth feasibility study and limitation analysis of current disease-spread estimation and countermeasure evaluation models, from which we identified that population variability is a crucial factor that has long been ignored or under-emphasized. Taking HIV/AIDS as the application and validation background, we propose a novel algorithmic model system, the EEA model system: a new way to estimate the spread situation, evaluate different countermeasures, and analyze the development of ARV-resistant disease strains. The model is a series of solvable ordinary differential equation (ODE) models for estimating the spread of HIV/AIDS infections, which not only require just one year's data to deduce the situation in any year, but also apply the piecewise-constant method to employ multi-year information at the same time. We simulate the effects of therapy and vaccination, evaluate the difference between them, and derive the smallest proportion of the population that must be vaccinated to defeat HIV/AIDS, highlighting in particular the advantage of vaccination and the deficiency of using therapy alone. We then analyze the development of ARV-resistant disease strains with the piecewise-constant method. Last but not least, a high-performance computing (HPC) platform is applied to simulate the situation over variable large-scale areas divided into grids, achieving an acceleration rate of around 4 to 5.5.
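The piecewise-constant idea, holding each parameter fixed within a year while integrating the ODEs forward, can be sketched with a toy susceptible-infected model. The EEA system itself is more elaborate; the simple SI dynamics and the name `beta_per_year` here are illustrative assumptions:

```python
def simulate_si_piecewise(s0, i0, beta_per_year, dt=0.01):
    """Forward-Euler integration of dS/dt = -beta*S*I, dI/dt = +beta*S*I,
    with the transmission rate beta held constant within each year."""
    s, i = s0, i0
    steps_per_year = int(round(1.0 / dt))
    for beta in beta_per_year:
        for _ in range(steps_per_year):
            new_infections = beta * s * i * dt
            s -= new_infections
            i += new_infections
    return s, i
```

Passing a different `beta` for each year is what lets a single solvable model absorb multi-year information, as the abstract describes.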
Abstract: For complex and large targets like naval vessels, RCS computation usually uses a high-frequency approach. Presenting the geometry modeling and the computation principles for a naval vessel's RCS, this paper puts the emphasis on the key techniques of computing the naval vessel's RCS with a high-frequency approach, taking the analysis of the mast's effect on the total RCS as an example.
Abstract: Milling process simulation is one of the important research areas in manufacturing science. To improve the precision of simulation and extend its usability, numerical algorithms are increasingly used in milling modeling, but simulation efficiency decreases as complexity increases; as a result, application of the method is limited. Aimed at this question, a high-efficiency algorithm for milling process simulation is studied, which is important for the application of milling process simulation. Parallel computing is widely used to solve large-scale computation problems. Its advantages include system flexibility, robustness, high-efficiency computing capability, and a high performance-to-price ratio. With the development of computer networks, the computing resources on the Internet can be assembled from microcomputers into a virtual computing environment with powerful computing capability, reducing the difficulty of building the hardware environment needed to support parallel computing. This paper investigates how to use network technology and parallel algorithms to improve simulation efficiency for milling force simulation. To predict milling forces, a simplified local milling force model is used: the end milling cutter is assumed to be divided into r differential elements along the axial direction of the cutter, and for a given time the total cutting force is obtained by summing the resultant cutting forces produced by each differential cutter disc. The whole simulation time is divided into segments; these program segments are sent to microcomputers on the Internet, and the results of all segments are composed into the final result. To implement the algorithm, a distributed parallel computing framework is designed in the paper. In the framework, a web server plays the role of controller. Using Java RMI (Remote Method Invocation), the computing processes on the computing servers are called by the web server, and control processes in the web server manage the computing servers. The code of the simulation algorithm can be dynamically sent to the computing servers, and milling forces at different times are computed using each local computer's resources. The results calculated by each computing server are sent back to the web server and composed into the final result. The framework can be used by different simulation algorithms. Compared with the algorithm running on a single machine, the provided algorithm is more efficient.
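The decomposition described above (sum per-disc forces at each time instant; split the time axis into segments, compute the segments on separate workers, then recompose the final time series) can be sketched as follows. The toy per-element force model, the constant `N_ELEMENTS`, and all function names are illustrative assumptions; the paper's framework uses Java RMI rather than Python:

```python
import math
from concurrent.futures import ThreadPoolExecutor

N_ELEMENTS = 8  # differential cutter discs along the tool axis (illustrative)

def force_at_time(t):
    """Toy total milling force at time t: sum the (phase-shifted) resultant
    force of each differential cutter disc, keeping only engaged elements."""
    return sum(max(0.0, math.sin(t + 0.1 * k)) for k in range(N_ELEMENTS))

def forces_for_segment(times):
    """Compute forces for one time segment (one worker's share)."""
    return [force_at_time(t) for t in times]

def simulate_parallel(all_times, n_workers=4):
    """Split the simulation time among workers, farm the segments out,
    and recompose the per-segment results into the final time series."""
    chunks = [all_times[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(forces_for_segment, chunks))
    out = [0.0] * len(all_times)
    for w, part in enumerate(parts):
        for j, value in enumerate(part):
            out[w + j * n_workers] = value
    return out
```

Because the force at each time instant is independent of every other instant, the segments can run with no inter-worker communication, which is what makes the Internet-of-microcomputers deployment feasible.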
Abstract: Meteorological high-performance computing resources are the support platform for running numerical models for weather forecasting and climate prediction. A scientific and objective method for evaluating the application of meteorological high-performance computing resources can not only provide a reference for optimizing active resources, but also provide a quantitative basis for future resource construction and planning. In this paper, the concepts of the utility value B and the index compliance rate E of a meteorological high-performance computing system are presented, and the evaluation process, evaluation indices, and calculation method for the application benefits of high-performance computing resources are introduced.
Abstract: On the basis of a brief introduction to CAI and some of its distinctive features, this paper details the principles of applying CAI in high schools and analyzes the implications of its application. This research aims to provide further help for English teachers in high schools, so that students can effectively improve their English language learning.
Funding: supported by the National Natural Science Foundation of China (Nos. 62002377, 62072303, 62072424, 61872178, and 62272223), the Hong Kong Scholars Program (No. 2021-101), and the High-Level Talent Fund (No. 22-TDRCJH-02-013).
Abstract: With the booming development of fifth-generation network technology and the Internet of Things, the number of end-user devices (EDs) and diverse applications is surging, resulting in massive data generated at the edge of networks. To process these data efficiently, the innovative mobile edge computing (MEC) framework has emerged to guarantee low latency and enable efficient computing close to the user traffic. Recently, federated learning (FL) has demonstrated empirical success in edge computing due to its privacy-preserving advantages. Thus, it has become a promising solution for analyzing and processing distributed data on EDs in various machine learning tasks, which are the major workloads in MEC. Unfortunately, EDs are typically powered by batteries with limited capacity, which brings challenges when performing energy-intensive FL tasks. To address these challenges, many strategies have been proposed to save energy in FL. Considering the absence of a survey that thoroughly summarizes and classifies these strategies, in this paper we provide a comprehensive survey of recent advances in energy-efficient strategies for FL in MEC. Specifically, we first introduce the system model and the energy consumption models in FL, in terms of computation and communication. Then we analyze the challenges of improving energy efficiency and summarize the energy-efficient strategies from three perspectives: learning-based, resource allocation, and client selection. We conduct a detailed analysis of these strategies, comparing their advantages and disadvantages. Additionally, we visually illustrate the impact of these strategies on FL performance by showcasing experimental results. Finally, several potential future research directions for energy-efficient FL are discussed.
Abstract: Objective To investigate the image quality, radiation dose, and diagnostic value of low-tube-voltage high-pitch dual-source computed tomography (DSCT) with sinogram-affirmed iterative reconstruction (SAFIRE) for non-enhanced abdominal and pelvic scans. Methods This institutional review board-approved prospective study included 64 patients who gave written informed consent for an additional abdominal and pelvic scan with DSCT in the period from November to December 2012. The patients underwent standard non-enhanced CT scans (protocol 1) [tube voltage of 120 kVp / pitch of 0.9 / filtered back-projection (FBP) reconstruction] followed by high-pitch non-enhanced CT scans (protocol 2) (100 kVp / 3.0 / SAFIRE). The total scan time, mean CT number, signal-to-noise ratio (SNR), image quality, lesion detectability, and radiation dose were compared between the two protocols. Results The total scan time of protocol 2 was significantly shorter than that of protocol 1 (1.4±0.1 seconds vs. 7.6±0.6 seconds, P<0.001). There was no significant difference between protocol 1 and protocol 2 in mean CT number of any organ (liver, 55.4±6.3 HU vs. 56.1±6.8 HU, P=0.214; pancreas, 43.6±5.9 HU vs. 43.7±5.8 HU, P=0.785; spleen, 47.9±3.9 HU vs. 49.4±4.3 HU, P=0.128; kidney, 32.2±2.3 HU vs. 33.1±2.3 HU, P=0.367; abdominal aorta, 44.8±5.6 HU vs. 45.0±5.5 HU, P=0.499; psoas muscle, 50.7±4.1 HU vs. 50.3±4.5 HU, P=0.279). SNR on images of protocol 2 was higher than that of protocol 1 (liver, 5.0±1.2 vs. 4.5±1.1, P<0.001; pancreas, 4.0±1.0 vs. 3.6±0.8, P<0.001; spleen, 4.7±1.0 vs. 4.1±0.9, P<0.001; kidney, 3.1±0.6 vs. 2.8±0.6, P<0.001; abdominal aorta, 4.1±1.0 vs. 3.8±1.0, P<0.001; psoas muscle, 4.5±1.1 vs. 4.3±1.2, P=0.012). The overall image noise of protocol 2 was lower than that of protocol 1 (9.8±3.1 HU vs. 11.1±3.0 HU, P<0.001). Image quality of protocol 2 was good but lower than that of protocol 1 (4.1±0.7 vs. 4.6±0.5, P<0.001).
Protocol 2 detected 229 of the 234 lesions (97.9%) that were detected with protocol 1 in the abdomen and pelvis. The radiation dose of protocol 2 was lower than that of protocol 1 (4.4±0.4 mSv vs. 7.3±2.4 mSv, P<0.001), a mean dose reduction of 41.4%. Conclusion High-pitch DSCT with SAFIRE can shorten scan time and reduce radiation dose while preserving image quality in non-enhanced abdominal and pelvic scans.
Abstract: AIM To determine the sensitivity and specificity of high-resolution computed tomography (HRCT) in the diagnosis of otosclerosis. METHODS A systematic literature review was undertaken to include Level I-III studies (Oxford Centre for Evidence-Based Medicine) that utilised HRCT to detect histologically confirmed otosclerosis. Quantitative synthesis was then performed. RESULTS Based on the available Level III literature, HRCT has a relatively low sensitivity of 58% (95%CI: 49.4-66.9), a high specificity of 95% (95%CI: 89.9-98.0), and a positive predictive value of 92% (95%CI: 84.1-95.8). HRCT is better at diagnosing the more prevalent fenestral form of otosclerosis but remains vulnerable to inframillimetre, retrofenestral, and dense sclerotic lesions, despite the advent of more advanced CT scanners with improved collimation. CONCLUSION Whilst the diagnosis of otosclerosis remains largely clinical, HRCT remains the gold-standard imaging of choice for the middle ear and serves as a useful adjunct to the clinician, helping to delineate the extent of disease and exclude other causes.
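The sensitivity, specificity, and positive predictive value reported above are standard confusion-matrix ratios. The helper below is a generic sketch (function and variable names assumed), not tied to the review's pooled data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP); PPV = TP/(TP+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv
```

Note that PPV, unlike sensitivity and specificity, depends on disease prevalence in the study population, which is why a meta-analysis must pool these quantities separately.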
Funding: National Natural Science Foundation of China (Grant No. 51879159); the National Key Research and Development Program of China (Grant Nos. 2019YFB1704200 and 2019YFC0312400); the Chang Jiang Scholars Program (Grant No. T2014099); the Shanghai Excellent Academic Leaders Program (Grant No. 17XD1402300); and the Innovative Special Project of Numerical Tank of the Ministry of Industry and Information Technology of China (Grant No. 2016-23/09).
Abstract: This paper reviews recent developments of the high-order spectral (HOS) method and its combination with the computational fluid dynamics (CFD) method for wave-structure interactions. Since numerical simulations of wave-structure interaction require efficiency and accuracy, as well as the ability to calculate in open sea states, the HOS method has strengths both in generating extreme waves in open seas and in fast convergence, while the CFD method has advantages in simulating violent wave-structure interactions. This paper offers new ideas for fast and accurate simulations, as well as future work on innovations in the fine fluid fields of numerical simulation.
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA05110303); the National Basic Research Program of China (973 Program, Grant Nos. 2012CB417203 and 2010CB950404); the National High Technology Research and Development Program of China (863 Program, Grant No. 2010AA012305); and the National Natural Science Foundation of China (Grant No. 41023002).
Abstract: This paper describes the model speed and model In/Out (I/O) efficiency of the high-resolution atmospheric general circulation model FAMIL (Finite-volume Atmospheric Model of IAP/LASG) on the Tianhe-1A supercomputer platform at the National Supercomputer Center in Tianjin, China. A series of three-model-day simulations was carried out with the standard Aqua Planet Experiment (APE) designed within FAMIL to obtain the time stamps for calculating model speed, simulation cost, and model I/O efficiency. The results demonstrate that FAMIL has remarkable scalability below 3456 and 6144 cores, and the lowest simulation costs are at 1536 and 3456 cores for the 12.5 km and 6.25 km resolutions, respectively. Furthermore, FAMIL has excellent I/O scalability, with an efficiency of more than 80% on 6 I/Os and more than 99% on 1536 I/Os.
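Scalability and "lowest simulation cost" claims like those above rest on the usual strong-scaling efficiency ratio: measured speedup divided by the ideal linear speedup implied by the extra cores. The helper below is a generic sketch of that calculation with illustrative names and numbers, not FAMIL's actual figures:

```python
def strong_scaling_efficiency(t_base, cores_base, t_n, cores_n):
    """Speedup relative to the baseline run, divided by the ideal
    (linear) speedup implied by the added cores."""
    speedup = t_base / t_n
    ideal = cores_n / cores_base
    return speedup / ideal
```

Simulation cost (core-hours per model day) is minimized where this efficiency, times the core count, stops paying for itself, which is how a "sweet spot" core count such as 1536 or 3456 is identified.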
Abstract: The core filling process of a cast high-speed steel roll was simulated, with ductile iron as the core material. The influences of filling parameters, such as core filling time and core filling temperature, on the filling process were investigated. Based on the simulated results, optimal core filling parameters were determined. The predicted temperature fields show that the temperature at the roll shoulder is the lowest during the core filling process, which usually causes binding defects there; a method for solving this problem is presented.
Abstract: This paper proposes an algorithm for an improved virtual machine security strategy in cloud computing. Imbalance between load and energy has been one of the disadvantages of older server-hosting methods: when two virtual servers are active on a host and the energy load on that host grows, the host allocates the energy of other (virtual) hosts to itself to stay steady, which usually leads to hardware overflow errors and user dissatisfaction. Cloud-based methods reduce this problem but do not eliminate it; we therefore provide an algorithm that not only implements a suitable security foundation but also divides energy consumption and load appropriately among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. Comparisons show that the proposed method offers high-performance computing, efficiency, and lower energy consumption in the network.
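One common way to divide load among virtual servers, consistent with the balancing goal described here (the abstract does not specify the paper's own placement rule, so this is an illustrative sketch with assumed names), is greedy least-loaded placement:

```python
def place_on_least_loaded(host_loads, vm_demand):
    """Assign a new VM to the currently least-loaded host and update
    that host's load in place; returns the chosen host index."""
    idx = min(range(len(host_loads)), key=lambda i: host_loads[i])
    host_loads[idx] += vm_demand
    return idx
```

Because no host is ever pushed far above its peers, the energy draw stays spread across hosts instead of one host pulling capacity from the others, the failure mode the abstract describes.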