Deuterium (D₂) is one of the important fuel sources that power nuclear fusion reactors. The existing D₂/H₂ separation technologies that obtain high-purity D₂ are cost-intensive. Recent research has shown that metal-organic frameworks (MOFs) have good potential for D₂/H₂ separation. In this work, a high-throughput computational screening of 12,020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation. Meanwhile, the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D₂/H₂ adsorption selectivity calculated from Henry coefficients is strongly correlated with the 1/ΔAD feature descriptor, that is, the inverse of the adsorbability difference of the two adsorbates. Meanwhile, the machine learning (ML) results show that the prediction accuracy of all four ML methods is significantly improved after the addition of this feature descriptor. In addition, the ML results based on the extreme gradient boosting model also reveal that the 1/ΔAD descriptor has the highest relative importance compared with other commonly used descriptors. To further explore hydrogen isotope separation in binary mixtures, 1548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated at equimolar conditions. The structure-performance relationship shows that MOFs with high adsorption selectivity generally have smaller pore sizes (0.3-0.5 nm) and lower surface areas. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies. Finally, three MOFs with high D₂/H₂ selectivity and good D₂ uptake are identified as the best candidates, all of which have one-dimensional channel pores. The findings obtained in this work may be helpful for the identification of potentially promising candidates for hydrogen isotope separation.
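A minimal sketch of the two quantities named above, the ideal Henry-regime selectivity and a 1/ΔAD-style descriptor. The exact "adsorbability" used in the paper is not specified here, so the descriptor below simply takes a generic per-adsorbate scalar (the Henry coefficient itself in the example); function names and the numerical values are illustrative only.

```python
# Illustrative sketch (not the paper's code): ideal D2/H2 selectivity from Henry
# coefficients, plus a 1/ΔAD-style descriptor. The "adsorbability" AD is assumed
# here to be a per-adsorbate scalar; treat it as a placeholder.

def ideal_selectivity(k_h_d2: float, k_h_h2: float) -> float:
    """Ideal adsorption selectivity S = K_H(D2) / K_H(H2) in the Henry regime."""
    return k_h_d2 / k_h_h2

def inverse_ad_difference(ad_d2: float, ad_h2: float, eps: float = 1e-12) -> float:
    """1/ΔAD descriptor: inverse of the adsorbability difference of the two adsorbates."""
    return 1.0 / (abs(ad_d2 - ad_h2) + eps)

# Example with made-up Henry coefficients (mol·kg^-1·Pa^-1) for one MOF:
k_d2, k_h2 = 3.2e-6, 2.1e-6
print(ideal_selectivity(k_d2, k_h2))      # ~1.52
print(inverse_ad_difference(k_d2, k_h2))  # descriptor value fed to the ML models
```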
The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
A numerical technique of the target-region locating (TRL) solver in conjunction with the wave-front method is presented for the application of the finite element method (FEM) to 3-D electromagnetic computation. First, the principle of the TRL technique is described. Then, the availability of the TRL solver for nonlinear applications is discussed in particular, demonstrating that this solver can be easily used while still retaining great efficiency. The implementation of this technique in FEM based on the magnetic vector potential (MVP) is also introduced. Finally, a numerical example of 3-D magnetostatic modeling using the TRL solver and FEMLAB is given. It shows that considerable computing resources can be saved by employing the new solver.
Underground mine pillars provide natural stability to the mine area, allowing safe operations for workers and machinery. Extensive prior research has been conducted to understand pillar failure mechanics and design safe pillar layouts. However, limited studies (mostly based on empirical field observation and small-scale laboratory tests) have considered pillar-support interactions under monotonic loading conditions for the design of pillar-support systems. This study used a series of large-scale laboratory compression tests on porous limestone blocks to analyze rock and support behavior at a sufficiently large scale (specimens with edge length of 0.5 m) for incorporation of actual support elements, with consideration of different width-to-height (w/h) ratios. Both unsupported and supported (grouted rebar rockbolt and wire mesh) tests were conducted, and the surface deformations of the specimens were monitored using three-dimensional (3D) digital image correlation (DIC). Rockbolts instrumented with distributed fiber optic strain sensors were used to study rockbolt strain distribution, load mobilization, and localized deformation at different w/h ratios. Both axial and bending strains were observed in the rockbolts, which became more prominent in the post-peak region of the stress-strain curve.
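For readers unfamiliar with how axial and bending components are separated from distributed fiber-optic readings, the sketch below shows the standard two-fiber decomposition (common-mode and differential parts of strain profiles measured on opposite sides of the bolt). This is a generic illustration under that assumption, not the authors' sensor layout or processing chain, and all numbers are synthetic.

```python
import numpy as np

# Illustrative sketch: separating axial and bending strain along a rockbolt from two
# distributed fiber-optic strain profiles on diametrically opposed sides of the bolt.
# Standard two-fiber decomposition, assumed here for illustration only.

def axial_bending(strain_side_a: np.ndarray, strain_side_b: np.ndarray):
    """Return (axial, bending) strain profiles from two opposed fibers."""
    axial = 0.5 * (strain_side_a + strain_side_b)    # common-mode component
    bending = 0.5 * (strain_side_a - strain_side_b)  # differential component
    return axial, bending

# Example with synthetic microstrain profiles sampled along a 0.5 m bolt:
z = np.linspace(0.0, 0.5, 51)                        # position along the bolt [m]
side_a = 800 * np.exp(-((z - 0.25) ** 2) / 0.01) + 120 * np.sin(2 * np.pi * z)
side_b = 800 * np.exp(-((z - 0.25) ** 2) / 0.01) - 120 * np.sin(2 * np.pi * z)
axial, bending = axial_bending(side_a, side_b)
print(axial.max(), np.abs(bending).max())            # peak axial and bending strain [µε]
```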
Disordered ferromagnets with a domain structure that exhibit a hysteresis loop when driven by the external magnetic field are essential materials for modern technological applications. Therefore, the understanding and potential for controlling the hysteresis phenomenon in these materials, especially concerning the disorder-induced critical behavior on the hysteresis loop, have attracted significant experimental, theoretical, and numerical research efforts. We review the challenges of the numerical modeling of physical phenomena behind the hysteresis loop critical behavior in disordered ferromagnetic systems related to the non-equilibrium stochastic dynamics of domain walls driven by external fields. Specifically, using the extended Random Field Ising Model, we present different simulation approaches and advanced numerical techniques that adequately describe the hysteresis loop shapes and the collective nature of the magnetization fluctuations associated with the criticality of the hysteresis loop for different sample shapes and varied parameters of disorder and rate of change of the external field, as well as the influence of thermal fluctuations and demagnetizing fields. The studied examples demonstrate how these numerical approaches reveal new physical insights, providing quantitative measures of pertinent variables extracted from the systems' simulated or experimentally measured Barkhausen noise signals. The described computational techniques using inherent scale-invariance can be applied to the analysis of various complex systems, both quantum and classical, exhibiting a non-equilibrium dynamical critical point or self-organized criticality.
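To make the model class concrete, here is a textbook zero-temperature random-field Ising model hysteresis sweep on a small 2D lattice with adiabatic driving and synchronous avalanche relaxation. It is a minimal illustration of the dynamics discussed above, not the extended model, sample geometries, or optimized avalanche algorithms of the review; lattice size, disorder strength, and field ramp are arbitrary choices.

```python
import numpy as np

# Minimal zero-temperature RFIM hysteresis sketch (rising branch) on a 2D lattice,
# assuming adiabatic driving. Illustration only; not the paper's extended model.

rng = np.random.default_rng(0)
L, J, R = 32, 1.0, 1.5                      # lattice size, coupling, disorder strength
h_rand = rng.normal(0.0, R, size=(L, L))    # quenched random fields
s = -np.ones((L, L))                        # start fully magnetized down

def neighbor_sum(s):
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

branch = []                                  # (H, magnetization) along the rising branch
for H in np.linspace(-4.0, 4.0, 400):        # slowly ramp the external field
    while True:                              # relax avalanches at fixed H
        unstable = (s < 0) & (J * neighbor_sum(s) + h_rand + H > 0)
        if not unstable.any():
            break
        s[unstable] = 1.0
    branch.append((H, s.mean()))

print(branch[0], branch[-1])                 # magnetization goes from about -1 to +1
```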
In this paper, based on the parallel environment of the ELXSI computer, a parallel solving process for the substructure method in static and dynamic analyses of large-scale and complex structures is put forward, and the corresponding parallel computational program has been developed.
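The core operation that makes the substructure method parallelizable is the static condensation of each substructure's internal degrees of freedom onto its boundary, which can be done independently per substructure before assembly. The sketch below shows that condensation step on a toy stiffness matrix; the matrices and DOF partition are invented, and the paper's actual program and ELXSI-specific parallel layer are not reproduced.

```python
import numpy as np

# Sketch of the substructure method's core step: statically condensing internal DOFs
# of one substructure onto its boundary. Each substructure can be condensed on a
# separate processor and the condensed matrices assembled afterwards. Toy values.

def condense(K: np.ndarray, f: np.ndarray, boundary: list, internal: list):
    """Return (K_c, f_c): boundary stiffness and load after eliminating internal DOFs."""
    Kbb = K[np.ix_(boundary, boundary)]
    Kbi = K[np.ix_(boundary, internal)]
    Kib = K[np.ix_(internal, boundary)]
    Kii = K[np.ix_(internal, internal)]
    K_c = Kbb - Kbi @ np.linalg.solve(Kii, Kib)
    f_c = f[boundary] - Kbi @ np.linalg.solve(Kii, f[internal])
    return K_c, f_c

# Toy 4-DOF substructure: DOFs 0,1 on the interface, DOFs 2,3 internal.
K = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -2.],
              [ 0., -1., -2.,  4.]])
f = np.array([1.0, 0.0, 0.5, 0.0])
K_c, f_c = condense(K, f, boundary=[0, 1], internal=[2, 3])
print(K_c, f_c)
```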
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs) where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
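The sketch below conveys the frequent-itemset idea in its simplest form: among the masks of better-performing particles, count which sets of non-zero positions co-occur often, and seed a new mask from a frequent combination. Function names, the support threshold, and the synthetic masks are invented for illustration; TELSO's actual operators are more involved.

```python
from itertools import combinations
from collections import Counter
import numpy as np

# Illustration of mining frequent non-zero-position itemsets from elite particle masks.
# Thresholds and data are made up; not the paper's operator.

def frequent_position_sets(masks: np.ndarray, min_support: int, k: int = 2):
    """Count size-k sets of decision-variable indices that are jointly non-zero."""
    counts = Counter()
    for mask in masks:
        nz = np.flatnonzero(mask)
        counts.update(combinations(nz.tolist(), k))
    return {items: c for items, c in counts.items() if c >= min_support}

rng = np.random.default_rng(1)
elite_masks = (rng.random((20, 50)) < 0.08).astype(int)   # masks of 20 "good" particles
elite_masks[:, [3, 17]] = 1                               # positions 3 and 17 co-occur often
frequent = frequent_position_sets(elite_masks, min_support=15)
new_mask = np.zeros(50, dtype=int)
for items in frequent:                                    # seed a mask from frequent pairs
    new_mask[list(items)] = 1
print(frequent, new_mask.nonzero()[0])
```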
Assessment of past-climate simulations of regional climate models (RCMs) is important for understanding the reliability of RCMs when used to project future regional climate. Here, we assess the performance and discuss possible causes of biases in a WRF-based RCM with a grid spacing of 50 km, named WRFG, from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet-season precipitation over the Central United States for a period when observational data are available. The RCM reproduces key features of the precipitation distribution characteristics during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partially due to the model's lack of skill in simulating nocturnal precipitation, related to the lack of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing large-scale circulation and environmental conditions is another contributing factor. The simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico is too weak, resulting in weaker southerly winds in between and leading to a reduction of warm moist air transport from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are also less favorable for upward motion, and hence for the development of moist convection, than in the NARR. Therefore, a careful examination of an RCM's deficiencies and the identification of the source of errors are important when using the RCM to project precipitation changes in future climate scenarios.
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by means of the modeling of an artificial society; 2) Interpretative module: selecting a factorial experimental design solution to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
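As a generic illustration of the interpretative and predictive modules, the sketch below runs a stand-in simulator over a full-factorial design of influencing factors and fits a simple linear meta-model to the macro response. The simulator, factor names, and levels are invented; the paper's artificial-society model and its actual meta-modeling technique are not reproduced.

```python
import itertools
import numpy as np

# Illustrative sketch: factorial design over influencing factors + a linear meta-model
# fitted to simulated macro responses. The "simulator" is a stand-in, not the paper's
# artificial society.

def toy_simulator(pay_rate, penalty, rider_density, rng):
    """Stand-in for an agent-based run; returns a macro response (e.g., mean delivery delay)."""
    return 10 - 3 * pay_rate + 2 * penalty - 1.5 * rider_density + rng.normal(0, 0.2)

rng = np.random.default_rng(0)
levels = {"pay_rate": [0.0, 0.5, 1.0], "penalty": [0.0, 1.0], "rider_density": [0.0, 1.0]}
design = list(itertools.product(*levels.values()))          # 3 x 2 x 2 factorial design

X = np.array([(1.0,) + row for row in design])              # intercept + factor columns
y = np.array([toy_simulator(*row, rng) for row in design])  # simulated responses
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                # meta-model: y ≈ X @ coef
print(dict(zip(["intercept", *levels], coef.round(2))))     # recovered factor effects
```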
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large-scale property means high dimensionality of the decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, owing to sparsity, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This substantial enhancement dramatically improves the efficacy of frequent pattern mining. In this way, selecting candidate sets is no longer based on the quantity of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the intricate relationships between non-zero variables. This novel methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
The amount of seismological data is rapidly increasing with accumulating observational time and an increasing number of stations, requiring modern techniques to provide adequate computing power. In the present study, we propose a framework to calculate large-scale noise cross-correlation functions (NCFs) using the public cloud service from ALIYUN. The entire computation is factorized into small pieces which are performed in parallel on a specified number of virtual servers provided by the cloud. Using data from most seismic stations in China, five NCF databases are built. The results show that, compared to the time cost using a single server, the entire time can be reduced by over two orders of magnitude depending on the number of invoked virtual servers. This could reduce computation time from months to less than 12 hours. Based on the obtained massive NCFs, global body waves are retrieved through array interferometry and agree well with those from earthquakes. This leads to a solution to process massive seismic datasets within an affordable time and is applicable to other large-scale computing in seismological research.
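The per-station-pair work that gets distributed across cloud workers is essentially a segmented, frequency-domain cross-correlation followed by stacking. The sketch below shows only that core operation on synthetic traces; real NCF workflows add detrending, bandpass filtering, and temporal/spectral whitening, and the paper's parallel orchestration on ALIYUN is not shown.

```python
import numpy as np

# Minimal noise cross-correlation function (NCF) sketch: segment, demean, FFT
# cross-correlate, stack. Synthetic data; core operation only.

def ncf(trace_a: np.ndarray, trace_b: np.ndarray, seg_len: int, max_lag: int) -> np.ndarray:
    n_seg = min(len(trace_a), len(trace_b)) // seg_len
    nfft = 2 * seg_len
    stack = np.zeros(nfft)
    for i in range(n_seg):
        a = trace_a[i * seg_len:(i + 1) * seg_len]
        b = trace_b[i * seg_len:(i + 1) * seg_len]
        A = np.fft.rfft(a - a.mean(), nfft)
        B = np.fft.rfft(b - b.mean(), nfft)
        stack += np.fft.irfft(A * np.conj(B), nfft)     # correlation via frequency domain
    cc = np.fft.fftshift(stack / n_seg)                 # zero lag at the center
    mid = nfft // 2
    return cc[mid - max_lag: mid + max_lag + 1]         # keep lags of interest

rng = np.random.default_rng(0)
common = rng.normal(size=20000)
sta1 = common + 0.5 * rng.normal(size=20000)
sta2 = np.roll(common, 25) + 0.5 * rng.normal(size=20000)  # station 2 lags by 25 samples
cc = ncf(sta1, sta2, seg_len=2000, max_lag=100)
print(int(np.argmax(cc)) - 100)                          # recovered lag in samples (≈ -25)
```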
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. Although the graph structure is a typical tool used to formulate such correlations, it is incapable of modeling high-order correlations among different objects in systems; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with the aforementioned two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
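To ground the hyperedge idea, here is the basic machinery in matrix form: an incidence matrix H (vertices by hyperedges) and one step of the commonly used hypergraph smoothing/convolution operator D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X. This is the textbook formulation frequently used in hypergraph neural networks, shown only as an illustration; it is not the DHG library's API, and the incidence matrix and features are made up.

```python
import numpy as np

# Basic hypergraph computation sketch: incidence matrix + one smoothing step with the
# standard normalized operator. Illustration only; not the DHG library.

n_vertices, n_edges, n_feat = 5, 3, 2
H = np.array([[1, 0, 1],       # vertex 0 belongs to hyperedges 0 and 2
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
W = np.eye(n_edges)                               # hyperedge weights
X = np.arange(n_vertices * n_feat, dtype=float).reshape(n_vertices, n_feat)

d_v = H @ np.diag(W)                              # vertex degrees
d_e = H.sum(axis=0)                               # hyperedge degrees
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
De_inv = np.diag(1.0 / d_e)

theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
X_smoothed = theta @ X                            # features mixed along high-order relations
print(X_smoothed.round(3))
```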
The turbulent fluidized bed possesses a distinct advantage over the bubbling fluidized bed in high solids contact efficiency and thus holds great potential in applications to many industrial processes. Simulation of the fluidization of fluid catalytic cracking (FCC) particles and the catalytic reaction of ozone decomposition in a turbulent fluidized bed is conducted using the Eulerian–Eulerian approach, where the recently developed two-equation turbulent (TET) model is introduced to describe the turbulent mass diffusion. The energy minimization multi-scale (EMMS) drag model and the kinetic theory of granular flow (KTGF) are adopted to describe gas–particle interaction and particle–particle interaction, respectively. The TET model features a rigorous closure for the turbulent mass transfer equations and thus enables more reliable simulation. With this model, distributions of ozone concentration, gas–particle two-phase velocity, and volume fraction are obtained and compared against experimental data. The average absolute relative deviation for the simulated ozone concentration is 9.67%, which confirms the validity of the proposed model. Moreover, it is found that the transition velocity from bubbling fluidization to turbulent fluidization for FCC particles is about 0.5 m·s⁻¹, which is consistent with experimental observation.
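The validation metric quoted above is the average absolute relative deviation (AARD). The small sketch below computes it for a set of simulated versus measured concentrations; the sample numbers are invented, only the formula AARD = (1/N) Σ |sim - exp| / |exp| is meant.

```python
import numpy as np

# Average absolute relative deviation (AARD) between simulated and measured values.
# Sample data below are made up for illustration.

def aard(simulated: np.ndarray, measured: np.ndarray) -> float:
    return float(np.mean(np.abs(simulated - measured) / np.abs(measured))) * 100.0

c_exp = np.array([0.92, 0.75, 0.58, 0.41, 0.30])   # normalized ozone concentration (measured)
c_sim = np.array([0.85, 0.79, 0.55, 0.44, 0.27])   # CFD prediction at the same locations
print(f"AARD = {aard(c_sim, c_exp):.2f}%")
```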
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm to timely process the data from IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay in the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading is significantly important considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled in determining service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, the DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by benchmarked algorithms is reduced to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantee in MEC.
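As a rough illustration of the "delay-greedy" idea only, the sketch below estimates, for one new task, the local execution delay and the offloading delay to each reachable edge server (transmission plus a broadcast queueing estimate plus execution) and picks the smallest. All names, the delay model, and the numbers are assumptions for illustration; the actual DGCO and RLPS algorithms in the paper are considerably richer.

```python
from dataclasses import dataclass

# Illustrative delay-greedy offloading decision for a single task. Toy delay model;
# not the paper's DGCO/RLPS.

@dataclass
class Task:
    cycles: float      # required CPU cycles
    data_bits: float   # input size to transmit

def local_delay(task: Task, local_freq_hz: float) -> float:
    return task.cycles / local_freq_hz

def offload_delay(task: Task, uplink_bps: float, queued_delay_s: float, edge_freq_hz: float) -> float:
    return task.data_bits / uplink_bps + queued_delay_s + task.cycles / edge_freq_hz

def delay_greedy_decision(task, local_freq_hz, edges):
    """edges: list of (name, uplink_bps, broadcast_queue_delay_s, edge_freq_hz)."""
    options = [("local", local_delay(task, local_freq_hz))]
    options += [(name, offload_delay(task, bw, q, f)) for name, bw, q, f in edges]
    return min(options, key=lambda kv: kv[1])

task = Task(cycles=8e8, data_bits=2e6)
edges = [("edge-A", 20e6, 0.05, 4e9), ("edge-B", 5e6, 0.01, 8e9)]
print(delay_greedy_decision(task, local_freq_hz=1e9, edges=edges))   # ('edge-A', 0.35)
```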
A bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface. Conventional slope protection methods for bedding slopes, such as retaining walls, stabilizing piles, and anchors, are time-consuming and labor- and energy-intensive. This study proposes an innovative polymer grout method to improve the bearing capacity and reduce the displacement of bedding slopes. A series of large-scale model tests were carried out to verify the effectiveness of polymer grout in protecting bedding slopes. Specifically, load-displacement relationships and failure patterns were analyzed for different testing slopes with various dosages of polymer. Results show the great potential of polymer grout in improving bearing capacity, reducing settlement, and protecting slopes from being crushed under shearing. The polymer-treated slopes remained structurally intact, while the untreated slope exhibited considerable damage when subjected to loads surpassing the bearing capacity. It is also found that polymer-cemented soils concentrate around the injection pipe, forming a fan-shaped sheet-like structure. This study demonstrates the improvement offered by polymer grouting for bedding slope treatment and will contribute to the development of a fast method to protect bedding slopes from landslides.
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
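To illustrate the kind of operation being benchmarked above (encrypt, homomorphically combine, decrypt, and time it), here is a toy additively homomorphic Paillier-style scheme. It uses tiny fixed primes, so it is cryptographically insecure and exists purely to show the homomorphic-addition property; it is not the authors' algorithm, and real systems use vetted libraries with large keys.

```python
import math, random, time

# Toy Paillier-style additively homomorphic encryption. Demo only: tiny primes,
# no padding, not secure, not the paper's implementation.

def keygen(p=101, q=103):                              # tiny fixed primes (insecure)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)     # mu = L(g^lam mod n^2)^-1 mod n
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = keygen()
t0 = time.perf_counter()
c1, c2 = encrypt(pk, 1234), encrypt(pk, 4321)
c_sum = (c1 * c2) % (pk[0] ** 2)                       # ciphertext product = plaintext sum
elapsed_ms = 1e3 * (time.perf_counter() - t0)
print(decrypt(sk, c_sum), f"({elapsed_ms:.2f} ms)")    # 5555, plus the measured time
```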
This paper presents a comprehensive exploration into the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computing on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5–6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
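The standard building block behind secure two-party matrix multiplication is additive secret sharing over a ring combined with Beaver multiplication triples; the sketch below shows that construction on CPU with NumPy. It assumes a trusted dealer for the triple and is only a reference illustration of the primitive; EG-STC's actual protocol, communication pattern, and GPU kernels are not reproduced.

```python
import numpy as np

# Additive secret sharing over the 2^64 ring + a Beaver triple for secure matrix
# multiplication. Trusted-dealer setting, illustration only; not EG-STC itself.

rng = np.random.default_rng(0)

def share(x):
    r = rng.integers(0, 2**64, size=x.shape, dtype=np.uint64)
    return r, (x - r)            # two additive shares: r + (x - r) = x (mod 2^64)

def reveal(s0, s1):
    return s0 + s1

def beaver_matmul(x_shares, y_shares):
    # Dealer: random A, B and C = A @ B, all additively shared.
    A = rng.integers(0, 2**64, size=x_shares[0].shape, dtype=np.uint64)
    B = rng.integers(0, 2**64, size=y_shares[0].shape, dtype=np.uint64)
    A_sh, B_sh, C_sh = share(A), share(B), share(A @ B)
    # Parties open E = X - A and F = Y - B (these reveal nothing about X, Y).
    E = reveal(x_shares[0] - A_sh[0], x_shares[1] - A_sh[1])
    F = reveal(y_shares[0] - B_sh[0], y_shares[1] - B_sh[1])
    # Party i outputs its additive share of Z = X @ Y.
    z0 = E @ F + E @ B_sh[0] + A_sh[0] @ F + C_sh[0]
    z1 = E @ B_sh[1] + A_sh[1] @ F + C_sh[1]
    return z0, z1

X = np.arange(6, dtype=np.uint64).reshape(2, 3)
Y = np.arange(12, dtype=np.uint64).reshape(3, 4)
z0, z1 = beaver_matmul(share(X), share(Y))
print(np.array_equal(reveal(z0, z1), X @ Y))   # True: shares recombine to X @ Y
```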
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the user and base station transmit powers. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
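The Dinkelbach step mentioned above handles a fractional objective N(x)/D(x) by iterating: solve the subtractive subproblem max N(x) − λ_k D(x), then update λ_{k+1} = N(x_k)/D(x_k) until the subproblem's optimum is close to zero. The sketch below shows that loop on a made-up one-variable rate/energy ratio with a grid-search inner step; the paper's multi-variable SCE problem and its SCA-based inner solver are not reproduced.

```python
import numpy as np

# Generic Dinkelbach iteration for a fractional objective N(x)/D(x).
# Toy one-dimensional example; the inner maximization is a simple grid search.

def dinkelbach(N, D, candidates, tol=1e-9, max_iter=50):
    lam = 0.0
    for _ in range(max_iter):
        vals = N(candidates) - lam * D(candidates)       # subtractive subproblem
        x_star = candidates[np.argmax(vals)]
        f_star = N(x_star) - lam * D(x_star)
        lam = N(x_star) / D(x_star)                      # update the ratio
        if abs(f_star) < tol:
            break
    return x_star, lam

power = np.linspace(1e-3, 1.0, 10000)                    # transmit power candidates [W]
rate = lambda p: np.log2(1.0 + 20.0 * p)                 # toy achievable rate [bit/s/Hz]
energy = lambda p: 0.1 + p                               # toy circuit + transmit energy [J]
p_opt, efficiency = dinkelbach(rate, energy, power)
print(p_opt, efficiency)                                 # power maximizing rate/energy
```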
This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. The online identification method is a computer-based approach for data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioner increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results of this paper demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that only requires a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed in the matching stage, for which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce computing expenses with a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
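For orientation, the particle-filter stage can be pictured as the generic skeleton below: propagate 2-D position particles with the heading and speed implied by the compass and altimeter-scaled motion, weight them by a map-matching likelihood, and resample. The matching score here is a stand-in for the building-vector (CFBVM/GMM) likelihood, and the improved resampling and credibility indicator of the paper are not reproduced; all values are synthetic.

```python
import numpy as np

# Generic particle-filter skeleton for 2-D geo-localization. The matching likelihood
# is a placeholder for a map-matching score; illustration only.

rng = np.random.default_rng(0)

def matching_score(positions, true_pos, sigma=40.0):
    """Stand-in likelihood: higher when a particle's predicted view matches the map."""
    d2 = np.sum((positions - true_pos) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma**2)

def systematic_resample(particles, weights):
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)]

true_pos = np.array([500.0, -300.0])                      # unknown aircraft position [m]
particles = rng.uniform(-2000, 2000, size=(5000, 2))      # spread over the search region
for step in range(10):
    velocity = np.array([30.0, -15.0])                    # from compass heading and speed
    true_pos = true_pos + velocity
    particles = particles + velocity + rng.normal(0, 10, particles.shape)  # predict
    w = matching_score(particles, true_pos)               # update with matching likelihood
    w /= w.sum()
    particles = systematic_resample(particles, w)         # resample
print(particles.mean(axis=0), true_pos)                   # estimate vs. ground truth
```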