Funding: the National Natural Science Foundation of China (Grant 42177164) and the Distinguished Youth Science Foundation of Hunan Province of China (2022JJ10073).
Abstract: As massive underground projects have become common in dense urban areas, a question has arisen: which model best predicts Tunnel Boring Machine (TBM) performance in these tunneling projects? Estimating the performance of TBMs in complex geological conditions remains a great challenge for practitioners and researchers, yet a reliable and accurate prediction of TBM performance is essential for planning a workable tunnel construction schedule. TBM performance is difficult to estimate because of the many geotechnical and geological factors and machine specifications involved, and the intelligent techniques previously proposed in this field are mostly single or base models with a low level of accuracy. Hence, this study introduces a hybrid random forest (RF) technique optimized by global harmony search with generalized opposition-based learning (GOGHS) for forecasting TBM advance rate (AR). The main objective of the GOGHS-RF model is to optimize the RF hyper-parameters, e.g., the number of trees and the maximum tree depth. A comprehensive database containing the parameters most influential on TBM performance, together with TBM AR, was used as the input and output variables, respectively. To examine the capability of the GOGHS-RF model, three further hybrid models (particle swarm optimization-RF, genetic algorithm-RF, and artificial bee colony-RF) were also constructed to forecast TBM AR. The developed models were evaluated with several performance indices, including the coefficient of determination (R2), root-mean-square error (RMSE), and mean absolute percentage error (MAPE). The results show that GOGHS-RF estimates TBM AR more accurately than the other applied models: the GOGHS-RF model achieved R2 = 0.9937 and 0.9844 for the training and test stages, respectively, which is higher than a previously developed RF. The importance of the input parameters was also interpreted through the SHapley Additive exPlanations (SHAP) method, which identified thrust force per cutter as the most important variable for TBM AR. The GOGHS-RF model can be used in mechanized tunnel projects for performance prediction and checking.
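As an illustration of the hyper-parameter tuning step described above, the sketch below tunes an RF regressor's tree number and maximum depth with a plain random search on synthetic data; the data, score function, and search loop are stand-ins for the GOGHS optimizer, not a reproduction of it.

```python
# Minimal sketch: tuning RF tree number and max depth for advance-rate regression.
# Synthetic data and a plain random search stand in for the GOGHS optimizer.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=6, noise=0.1, random_state=0)

rng = np.random.default_rng(0)
best = (None, -np.inf)
for _ in range(20):                              # candidate hyper-parameter sets
    params = {"n_estimators": int(rng.integers(50, 400)),
              "max_depth": int(rng.integers(3, 20))}
    model = RandomForestRegressor(random_state=0, **params)
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    if r2 > best[1]:
        best = (params, r2)

print("best hyper-parameters:", best[0], "CV R2:", round(best[1], 4))
```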
Funding: funded by the National Natural Science Foundation of China (Grant Nos. 42272333 and 42277147).
Abstract: Refined 3D modeling of mine slopes is pivotal for the precise prediction of geological hazards. To address the inability of existing single modeling methods to comprehensively represent both the overall and the localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data-alignment algorithm based on control points. First, the mini-batch K-Medoids algorithm is used to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest-neighbor point algorithm is employed to match these feature points with those obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through distance-weighted averaging to construct a 3D model. Finally, in an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Compared with a modeling method using only the K-Medoids clustering algorithm, the new method significantly improves computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
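A minimal sketch of the feature-point matching and distance-weighted adjustment steps, assuming two small synthetic point sets; the KD-tree nearest-neighbor query and the inverse-distance blending weights are illustrative choices, not the paper's exact implementation.

```python
# Sketch: match laser-scan feature points to UAV photogrammetry points with a
# nearest-neighbor query, then blend coordinates by an inverse-distance weight.
import numpy as np
from scipy.spatial import cKDTree

laser_pts = np.random.default_rng(1).uniform(0, 100, size=(200, 3))   # synthetic
uav_pts = laser_pts + np.random.default_rng(2).normal(0, 0.5, size=(200, 3))

tree = cKDTree(uav_pts)
dist, idx = tree.query(laser_pts, k=1)          # nearest UAV point per laser point

alpha = 1.0 / (1.0 + dist)                      # closer match -> pull toward UAV coordinate
fused_pts = alpha[:, None] * uav_pts[idx] + (1.0 - alpha)[:, None] * laser_pts

print("mean match distance:", round(dist.mean(), 4), "fused shape:", fused_pts.shape)
```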
Funding: the National Science and Technology Council, Taiwan (Grant No. NSTC 113-2221-E-018-011), and the Ministry of Education's Teaching Practice Research Program, Taiwan (PSK1120797 and PSK1134099).
Abstract: This paper explores the application of Model Predictive Control (MPC) to enhance safety and efficiency in autonomous vehicle (AV) navigation through optimized path planning. AV technology has progressed rapidly, moving from basic driver-assistance systems (Level 1) to fully autonomous capabilities (Level 5). Central to this advancement are two key functionalities: Lane-Change Maneuvers (LCM) and Adaptive Cruise Control (ACC). In this study, a detailed simulation environment is created to replicate the road network between Nantun and Wuri on National Freeway No. 1 in Taiwan. The MPC controller is deployed to optimize vehicle trajectories, ensuring safe and efficient navigation. Simulated onboard sensors, including vehicle cameras and millimeter-wave radar, detect and respond to dynamic changes in the surrounding environment, enabling real-time decision-making for LCM and ACC. The simulation results highlight the superiority of the MPC-based approach in maintaining safe distances, executing controlled lane changes, and optimizing fuel efficiency. Specifically, the MPC controller effectively manages collision avoidance, reduces travel time, and contributes to smoother traffic flow compared with traditional path-planning methods. These findings underscore the potential of MPC to enhance the reliability and safety of autonomous driving in complex traffic scenarios. Future research will focus on validating these results through real-world testing, addressing the computational challenges of real-time implementation, and exploring the adaptability of MPC under various environmental conditions. This study is a significant step toward safer and more efficient autonomous vehicle navigation, paving the way for broader adoption of MPC in AV systems.
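To make the receding-horizon idea concrete, the sketch below solves one step of a toy car-following (ACC) problem: accelerations over a short horizon are chosen to track a desired gap to the lead vehicle. The dynamics, weights, and bounds are invented for illustration and are not the controller used in the paper.

```python
# Toy ACC-style MPC step: pick accelerations over a horizon that keep the gap
# to the lead vehicle near a desired value while penalizing control effort.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.2, 15                 # step [s], horizon length
gap0, rel_v0 = 12.0, -2.0       # current gap [m] and relative speed [m/s]
desired_gap = 20.0

def rollout(acc):
    """Cost of a candidate ego-acceleration sequence (lead car at constant speed)."""
    gap, rel_v, cost = gap0, rel_v0, 0.0
    for a in acc:
        rel_v -= a * dt                         # rel_v = v_lead - v_ego
        gap += rel_v * dt
        cost += (gap - desired_gap) ** 2 + 0.1 * a ** 2
    return cost

res = minimize(rollout, x0=np.zeros(N),
               bounds=[(-3.0, 2.0)] * N)        # comfort/actuator limits
print("first planned acceleration [m/s^2]:", round(res.x[0], 3))
```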
Abstract: Workplace safety accidents are often caused by the interaction of multiple organizations and the coupling of multiple factors, so their causes involve several organizations. To prevent and curb multi-organization production safety accidents, a method for multi-organization accident analysis is constructed based on the Systems-Theory Accident Modeling and Process (STAMP) model and the 24Model, and the Qingdao oil explosion accident is analyzed as a case study. The results show that the STAMP-24Model can analyze the causes of accidents involving multiple organizations effectively, comprehensively, and in detail, organization by organization and level by level, and can explore the interactions among the organizations. A dynamic evolution analysis of the accident yields the coupling relationships among the organizations' unsafe actions, the resulting accident failure chain, and the control failure paths, thereby providing ideas and a reference for preventing multi-organization accidents.
Funding: supported by the National Social Science Foundation (21BJL052, 20BJY020, 20BJL127, 19BJY090), the 2018 Fujian Social Science Planning Project (FJ2018B067), and the 2019 Planning Fund Project of Humanities and Social Sciences Research of the Ministry of Education (19YJA790102). The grant was received by Aoqi Xu.
Abstract: In many problems, a model of the system is identified in order to analyze the process or metabolism behavior. The main gap is the weakness of current methods in noisy environments, and the primary objective of this study is to present a method that is more robust against uncertainties. This paper proposes a new deep learning scheme for modeling and identification applications. The suggested approach is based on non-singleton type-3 fuzzy logic systems (NT3-FLSs), which can accommodate measurement errors and high-level uncertainties. Besides rule optimization, the antecedent parameters and the level of the secondary memberships are also adjusted by the suggested square-root cubature Kalman filter (SCKF). In the learning algorithm, the presented NT3-FLSs are deeply learned, and their nonlinear structure is preserved. The designed scheme is applied to modeling the carbon capture and sequestration problem using real-world data sets. Various analyses and comparisons verify the better efficiency of the proposed fuzzy modeling scheme. The main advantages of the suggested approach are better resistance against uncertainties, deep learning, and good convergence.
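For readers unfamiliar with the cubature step behind the SCKF, the sketch below generates the standard 2n cubature points from a state mean and covariance square root and propagates them through a nonlinear function to obtain a predicted mean and covariance; it illustrates only the generic cubature rule, not the paper's fuzzy-parameter update, and the toy dynamics are invented.

```python
# Generic cubature-Kalman prediction step: 2n points at +/- sqrt(n) * columns of
# the covariance square root, pushed through a nonlinear model f.
import numpy as np

def cubature_predict(x, P, f):
    n = x.size
    S = np.linalg.cholesky(P)                   # P = S @ S.T
    scaled = np.sqrt(n) * S
    points = np.hstack([x[:, None] + scaled, x[:, None] - scaled])      # (n, 2n)
    propagated = np.stack([f(p) for p in points.T], axis=1)             # (n, 2n)
    x_pred = propagated.mean(axis=1)            # equal weights 1/(2n)
    diff = propagated - x_pred[:, None]
    P_pred = diff @ diff.T / (2 * n)            # process noise omitted here
    return x_pred, P_pred

f = lambda s: np.array([s[0] + 0.1 * np.sin(s[1]), 0.9 * s[1]])         # toy dynamics
x_pred, P_pred = cubature_predict(np.array([1.0, 0.5]), np.eye(2) * 0.04, f)
print("predicted mean:", x_pred, "predicted variances:", np.diag(P_pred))
```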
Funding: supported by the Natural Science Foundation of China (No. 12305190), the Lingchuang Research Project of China National Nuclear Corporation (CNNC), and the Science and Technology on Reactor System Design Technology Laboratory.
Abstract: We proposed and compared three methods (filter burnup, single-energy burnup, and burnup extremum analysis) to build a high-resolution neutronics model for 238Pu production in high-flux reactors. The filter burnup and single-energy burnup methods involve no theoretical approximation and can achieve a spectrum resolution of up to ~1 eV, thereby constructing the importance curve and yield curve over the full energy range. The burnup extremum analysis method combines the importance and yield curves to account for the influence of irradiation time on production efficiency, thereby constructing extremum curves. The three curves, which quantify the transmutation rate of the nuclei in each energy region, are physically meaningful because they have similar distributions. A high-resolution neutronics model for 238Pu production was established based on these three curves, and its universality and feasibility were demonstrated. The neutronics model can guide neutron-spectrum optimization and improve the yield of 238Pu by up to 18.81%. The model reveals the law of nuclide transmutation in all energy regions with high spectrum resolution, thus providing theoretical support for high-flux reactor design and the irradiation production of 238Pu.
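As a highly simplified illustration of the transmutation bookkeeping such a model rests on, the sketch below integrates one-group depletion equations for a 237Np target converting to 238Pu under a constant flux; the cross sections and flux are placeholder values, the short-lived 238Np intermediate is collapsed, and the actual analysis in the paper is energy-resolved rather than one-group.

```python
# Toy one-group depletion: 237Np captures a neutron and (via short-lived 238Np,
# collapsed here) becomes 238Pu, which is itself burned by further absorption.
# Cross sections and flux are placeholder one-group values for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

barn = 1e-24                               # cm^2
sig_np, sig_pu = 170 * barn, 27 * barn     # illustrative one-group values
phi = 3e15                                 # neutron flux [n/cm^2/s], high-flux scale

def depletion(t, N):
    n_np, n_pu = N
    return [-sig_np * phi * n_np,
            sig_np * phi * n_np - sig_pu * phi * n_pu]

t_end = 200 * 24 * 3600                    # 200 days of irradiation
sol = solve_ivp(depletion, (0, t_end), [1.0, 0.0])
print("fraction of initial 237Np present as 238Pu:", round(sol.y[1, -1], 3))
```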
Funding: the National Natural Science Foundation of China (Grant Number 81970631 to W.L.).
Abstract: Background: Diabetic nephropathy (DN) is the most common complication of type 2 diabetes mellitus and the main cause of end-stage renal disease worldwide. Diagnostic biomarkers may allow early diagnosis and treatment of DN, reducing its prevalence and delaying its progression. Kidney biopsy is the gold standard for diagnosing DN; however, its invasive character is its primary limitation. Machine learning provides a non-invasive and specific criterion for diagnosing DN, although traditional machine learning algorithms need to be improved to enhance diagnostic performance. Methods: We applied high-throughput RNA sequencing to obtain the genes related to DN tubular tissues and normal tubular tissues of mice. Machine learning algorithms (random forest, LASSO logistic regression, and principal component analysis) were then used to identify key genes (CES1G, CYP4A14, NDUFA4, ABCC4, ACE). A genetic algorithm-optimized backpropagation neural network (GA-BPNN) was then used to improve the DN diagnostic model. Results: The AUC of the GA-BPNN model was 0.83 in the training dataset and 0.81 in the validation dataset, whereas the AUC of the SVM model was 0.756 in the training dataset and 0.650 in the external validation dataset. Thus, the GA-BPNN outperformed the traditional SVM model. This diagnostic model may support personalized diagnosis and treatment of patients with DN. Immunohistochemical staining further confirmed that the tissue and cell expression of NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 (NDUFA4L2) in tubular tissue was decreased in DN mice. Conclusion: The GA-BPNN model is more accurate than the traditional SVM model and may provide an effective tool for diagnosing DN.
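A compact sketch of the idea of letting an evolutionary loop pick neural-network hyper-parameters: a tiny mutate-and-select search over the hidden-layer width of an sklearn MLP, scored by cross-validated AUC on synthetic data. The encoding, operators, and data are stand-ins, not the study's GA-BPNN.

```python
# Sketch: evolutionary (mutate-and-select) search over hidden-layer width of an
# MLP classifier, scored by cross-validated AUC on synthetic expression data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
rng = np.random.default_rng(0)

def fitness(width):
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

population = [int(w) for w in rng.integers(4, 64, size=6)]
for _ in range(5):                                   # generations
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:3]                             # keep the fittest
    children = [max(2, p + int(rng.integers(-8, 9))) for p in parents]  # mutate
    population = parents + children

best = max(population, key=fitness)
print("selected hidden width:", best, "AUC:", round(fitness(best), 3))
```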
Funding: supported by the Chinese–Norwegian Collaboration Projects within Climate Systems jointly funded by the National Key Research and Development Program of China (Grant No. 2022YFE0106800); the Research Council of Norway funded projects MAPARC (Grant No. 328943) and COMBINED (Grant No. 328935); the National Natural Science Foundation of China (Grant No. 42075030); and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX23_1314).
Abstract: Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared with the decline of the entire Arctic multiyear sea ice, changes in newly formed sea ice carry more thermodynamic and dynamic information about Arctic atmosphere–ocean–ice interaction and northern mid–high latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project (CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach, which gives equal weight to each model, essentially assumes that all models are independent and equally plausible; this contradicts the large interdependencies within the ensemble and the discrepancies in the models' ability to reproduce observations. Therefore, instead of using the arithmetic mean of well-performing models or of all available models for projections, as in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble according to their performance and independence to provide more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice; we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme. Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario. Thereafter, it may decrease (or remain stable) if the Arctic warming crosses a threshold (or is extensively constrained).
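The sketch below shows one common form of such performance-plus-independence weighting, in which weights shrink with distance to observations and with similarity to other ensemble members; the distance matrices and shape parameters are illustrative, and the study's exact scheme may differ.

```python
# One common performance-plus-independence weighting: models close to observations
# get larger weights, models similar to many other models get down-weighted.
import numpy as np

rng = np.random.default_rng(0)
n_models = 10
D = rng.uniform(0.2, 1.5, size=n_models)           # model-to-observation distances
S = rng.uniform(0.2, 2.0, size=(n_models, n_models))
S = (S + S.T) / 2                                   # model-to-model distances
np.fill_diagonal(S, 0.0)

sigma_d, sigma_s = 0.6, 0.5                         # shape parameters (illustrative)
performance = np.exp(-(D / sigma_d) ** 2)
similarity = np.exp(-(S / sigma_s) ** 2)
independence = similarity.sum(axis=1)               # = 1 + sum over other models (self term is exp(0) = 1)
weights = performance / independence
weights /= weights.sum()
print("normalized model weights:", np.round(weights, 3))
```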
Funding: funded by the National Natural Science Foundation of China (Grant Nos. U22A20166 and 12172230) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515012654).
Abstract: Understanding the anisotropic creep behavior of shale under direct shearing is a challenging issue. In this context, we conducted shear-creep and steady-creep tests, for the first time, on shale with five bedding orientations (0°, 30°, 45°, 60°, and 90°) under multiple levels of direct shear stress. The results show that the anisotropic creep of shale is significantly stress-dependent. Under low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress, the creep compliance is smallest at a bedding orientation of 0°, and the steady-creep rate increases significantly as the bedding orientation increases through 30°, 45°, 60°, and 90°. The stress-strain values at the onset of the accelerated creep stage first increase and then decrease with bedding orientation. A semilogarithmic model is proposed that reflects the stress dependence of the steady-creep rate while accounting for hardening and damage processes. The model minimizes the deviation of the calculated steady-state creep rate from the observed values and reveals how the bedding orientation influences the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated: the logarithmic model explains the experimental creep strain and creep rate well and accurately predicts long-term shear-creep deformation. Based on an improved logarithmic model, the variations of the creep parameters with shear stress and bedding orientation are discussed. With these findings, a mathematical method for constructing an anisotropic shear-creep model of shale is proposed, which characterizes the nonlinear dependence of the anisotropic shear-creep behavior of shale on the bedding orientation.
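To illustrate the kind of semilogarithmic relation the abstract refers to, the sketch below fits a creep-compliance curve of the form J(t) = a + b·log10(t) to synthetic data with scipy; the functional form and data are illustrative, not the paper's calibrated model.

```python
# Fit a simple semilogarithmic creep-compliance law J(t) = a + b*log10(t)
# to synthetic creep data (illustrative of the stress-dependent fitting step).
import numpy as np
from scipy.optimize import curve_fit

t = np.logspace(0, 5, 40)                        # time [s]
J_true = 2.0 + 0.35 * np.log10(t)                # synthetic "measured" compliance
J_obs = J_true + np.random.default_rng(0).normal(0, 0.02, t.size)

def log_model(t, a, b):
    return a + b * np.log10(t)

(a, b), _ = curve_fit(log_model, t, J_obs)
print(f"fitted a = {a:.3f}, b = {b:.3f}")        # b is the semilog creep slope
```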
Funding: We acknowledge funding from NSFC Grant 62306283.
Abstract: Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLM) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly and introduces abilities, such as in-context learning, that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLM) to Large Multimodal Models (LMM). It first discusses the contributions and technological advances of LLMs in natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and opening new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Funding: Supported by the Project of NINGBO Leading Medical Health Discipline (No. 2022-B11), the Ningbo Natural Science Foundation (No. 202003N4206), and the Public Welfare Foundation of Ningbo (No. 2021S108).
Abstract: BACKGROUND: Colorectal cancer (CRC) is a serious threat worldwide. Although early screening is suggested to be the most effective way to prevent and control CRC, the current state of early CRC screening is still not optimistic. In China, the incidence of CRC in the Yangtze River Delta region is increasing dramatically, but few studies have been conducted; it is therefore necessary to develop a simple and efficient early-screening model for CRC. AIM: To develop and validate an early-screening nomogram model to identify individuals at high risk of CRC. METHODS: Data from 64448 participants obtained from Ningbo Hospital, China between 2014 and 2017 were retrospectively analyzed. Of the 64448 individuals, 530 were excluded due to missing or incorrect data. Of the remaining 63918, 7607 (11.9%) were considered to be at high risk for CRC and 56311 (88.1%) were not. The participants were randomly allocated to a training set (44743) or a validation set (19175). The discriminatory ability, predictive accuracy, and clinical utility of the model were evaluated by constructing and analyzing receiver operating characteristic (ROC) curves and calibration curves and by decision curve analysis. Finally, the model was validated internally using a bootstrap resampling technique. RESULTS: Seven variables, covering demographic, lifestyle, and family history information, were examined. Multifactorial logistic regression analysis revealed that age [odds ratio (OR): 1.03, 95% confidence interval (CI): 1.02-1.03, P<0.001], body mass index (BMI) (OR: 1.07, 95%CI: 1.06-1.08, P<0.001), waist circumference (WC) (OR: 1.03, 95%CI: 1.02-1.03, P<0.001), lifestyle (OR: 0.45, 95%CI: 0.42-0.48, P<0.001), and family history (OR: 4.28, 95%CI: 4.04-4.54, P<0.001) were the most significant predictors of high CRC risk. A healthy lifestyle was a protective factor, whereas family history was the most significant risk factor. The area under the curve was 0.734 (95%CI: 0.723-0.745) for the final validation set ROC curve and 0.735 (95%CI: 0.728-0.742) for the training set ROC curve. The calibration curve demonstrated a high correlation between the high-risk population predicted by the nomogram model and the actual high-risk population. CONCLUSION: The early-screening nomogram model for predicting high-risk CRC populations developed in this study, based on age, BMI, WC, lifestyle, and family history, exhibited high accuracy.
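A minimal sketch of the underlying modeling step: fitting a logistic regression on the five named predictors and reading off odds ratios and ROC AUC. The records, coefficient values, and coding of lifestyle/family history below are synthetic, invented only to make the example runnable.

```python
# Sketch: logistic-regression risk model on age, BMI, waist circumference,
# lifestyle (1 = healthy), and family history (1 = positive), with OR and AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(55, 12, n),        # age
    rng.normal(24, 3, n),         # BMI
    rng.normal(85, 10, n),        # waist circumference
    rng.integers(0, 2, n),        # healthy lifestyle
    rng.integers(0, 2, n),        # family history
])
logit = (-8 + 0.03 * X[:, 0] + 0.07 * X[:, 1] + 0.03 * X[:, 2]
         - 0.8 * X[:, 3] + 1.45 * X[:, 4])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # synthetic high-risk labels

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("odds ratios:", np.round(odds_ratios, 2), "AUC:", round(auc, 3))
```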
Abstract: Flow unit (FU) rock typing is a common technique for characterizing reservoir flow behavior, producing reliable porosity and permeability estimates even in complex geological settings. However, extrapolating FUs laterally away from the wells into the whole reservoir grid is usually difficult, and using seismic data as a constraint is rarely studied. This paper proposes a workflow to generate numerous possible 3D volumes of flow units, porosity, and permeability below the seismic resolution limit while honoring the available seismic data at larger scales. The methodology is applied to the Mero Field, a Brazilian pre-salt carbonate reservoir in the Santos Basin, which presents a complex and heterogeneous geological setting with different sedimentological processes and diagenetic histories. We generated metric flow units from conventional core analysis and transposed them to the well log data. Then, using a Markov chain Monte Carlo algorithm, the seismic data, and the well log statistics, we simulated acoustic impedance, decametric flow units (DFU), metric flow units (MFU), porosity, and permeability volumes at the metric scale. The aim is to estimate the minimum number of MFUs able to produce realistic porosity and permeability scenarios without losing the lateral control provided by the seismic data. In other words, every simulated porosity and permeability volume produces a synthetic seismic response that matches the real seismic data of the area, even at the metric scale. The resulting 3D volumes represent a high-resolution fluid-flow reservoir model that preserves the lateral seismic control throughout the process and can be directly incorporated into a dynamic characterization workflow.
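The acceptance logic behind such a seismic-constrained Monte Carlo simulation can be sketched generically: below, a Metropolis sampler draws porosity values whose forward-modeled impedance must stay close to an observed impedance. The forward model, noise level, and prior are invented for illustration and are not the paper's geostatistical workflow.

```python
# Generic Metropolis sketch: sample porosity values consistent with an observed
# acoustic impedance through a (made-up) linear forward model and Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
obs_ip = 9500.0                                   # "observed" impedance
fwd = lambda phi: 12000.0 - 15000.0 * phi         # toy porosity -> impedance model
sigma = 150.0                                     # assumed data noise

def log_post(phi):
    if not 0.0 < phi < 0.35:                      # uniform prior on porosity
        return -np.inf
    return -0.5 * ((fwd(phi) - obs_ip) / sigma) ** 2

phi, samples = 0.15, []
for _ in range(5000):
    prop = phi + rng.normal(0, 0.01)              # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(phi):
        phi = prop
    samples.append(phi)

print("posterior porosity mean:", round(np.mean(samples[1000:]), 3))
```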
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42141019 and 42261144687) and STEP (Grant No. 2019QZKK0102), and by the Korea Environmental Industry & Technology Institute (KEITI) through the "Project for developing an observation-based GHG emissions geospatial information map", funded by the Korea Ministry of Environment (MOE) (Grant No. RS-2023-00232066).
Abstract: Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective is that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing the controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding the integration of AI with numerical models, we argue that offline AI parameterization schemes may fall short of global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model that balances AI and physics is an achievable goal.
Abstract: BACKGROUND: A cure for Helicobacter pylori (H. pylori) remains a problem of global concern. The prevalence of antimicrobial resistance is rising widely and becoming a challenging issue worldwide. Optimizing sequential therapy appears to be one of the most attractive strategies in terms of efficacy, tolerability, and cost. The most common sequential therapy consists of a dual therapy [proton-pump inhibitor (PPI) and amoxicillin] for the first period (5 to 7 d), followed by a triple therapy for the second period (PPI, clarithromycin, and metronidazole). PPIs play a key role in maintaining a gastric pH at a level that allows optimal antibiotic efficacy, hence the idea of using new-generation molecules. This open-label prospective study randomized 328 patients with confirmed H. pylori infection into three groups (1:1:1): the first group received quadruple therapy consisting of twice-daily (bid) omeprazole 20 mg, amoxicillin 1 g, clarithromycin 500 mg, and metronidazole 500 mg for 10 d (QT-10); the second group received a 14-d quadruple therapy following the same regimen (QT-14); and the third group received an optimized sequential therapy consisting of bid rabeprazole 20 mg plus amoxicillin 1 g for 7 d, followed by bid rabeprazole 20 mg, clarithromycin 500 mg, and metronidazole 500 mg for the next 7 d (OST-14). Adverse events (AEs) were recorded throughout the study, and the H. pylori eradication rate was determined 4 to 6 wk after the end of treatment using the 13C urea breath test. RESULTS: In the intention-to-treat and per-protocol analyses, the eradication rate was higher in the OST-14 group than in the QT-10 group: (93.5% vs 85.5%, P=0.04) and (96.2% vs 89.5%, P=0.03), respectively. However, there was no statistically significant difference in eradication rates between the OST-14 and QT-14 groups: (93.5% vs 91.8%, P=0.34) and (96.2% vs 94.4%, P=0.35), respectively. The overall incidence of AEs was significantly lower in the OST-14 group (P=0.01). Furthermore, OST-14 was the most cost-effective of the three regimens. CONCLUSION: The optimized 14-d sequential therapy is a safe and effective alternative. Its eradication rate is comparable to that of the 14-d concomitant therapy while causing fewer AEs and offering a cost saving.
Funding: supported by the NSF grant AGS-1928883, the NASA grants 80NSSC20K1670 and 80MSFC20C0019, and support from NASA GSFC IRAD, HIF, and ISFM funds.
Abstract: The Lunar Environment heliospheric X-ray Imager (LEXI) and the Solar wind−Magnetosphere−Ionosphere Link Explorer (SMILE) will observe the magnetosheath and its boundary motion in soft X-rays to understand magnetopause reconnection modes under various solar wind conditions after their respective launches in 2024 and 2025. Magnetosheath conditions, namely plasma density, velocity, and temperature, are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions. We developed a user-friendly magnetosheath model that parameterizes number density, velocity, temperature, and magnetic field by utilizing a global magnetohydrodynamics (MHD) model as well as pre-existing gas-dynamic and analytic models. Using this parameterized magnetosheath model, scientists can easily reconstruct expected soft X-ray images and use them in the analysis of observed LEXI and SMILE images without running complicated global magnetosphere models. First, we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations under 7 solar wind densities (1, 5, 10, 15, 20, 25, and 30 cm^(-3)) and 2 interplanetary magnetic field Bz components (±4 nT), and then parameterizing the results for new magnetosheath conditions. We compared the magnetosheath model results with THEMIS statistical data and found good agreement, with a weighted Pearson correlation coefficient greater than 0.77, especially for plasma density and plasma velocity. Second, we compiled a suite of magnetosheath models incorporating the previous magnetosheath models (gas-dynamic and analytic) and performed two case studies to test their performance. The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters. Third, we constructed a tool to calculate a soft X-ray image from any given vantage point, which can support the planning and data analysis of the aforementioned LEXI and SMILE missions. A release of the code has been uploaded to a GitHub repository.
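As a small aid to the validation step mentioned above, the sketch below computes a weighted Pearson correlation coefficient between modeled and observed values using the standard weighted-covariance definition; the sample values and weights are placeholders, not THEMIS data.

```python
# Weighted Pearson correlation between modeled and observed quantities,
# using the weighted-covariance definition (placeholder data and weights).
import numpy as np

def weighted_pearson(x, y, w):
    w = np.asarray(w, dtype=float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

rng = np.random.default_rng(0)
obs = rng.uniform(5, 30, 50)                     # e.g. observed densities [cm^-3]
model = obs * 1.05 + rng.normal(0, 1.5, 50)      # modeled values with some scatter
weights = rng.uniform(0.5, 1.0, 50)              # e.g. samples per statistical bin
print("weighted Pearson r:", round(weighted_pearson(obs, model, weights), 3))
```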