Maintenance is an important technical measure to maintain and restore the performance of equipment and ensure the safety of the production process in industrial production, and it is an indispensable part of prediction and health management. However, most existing remaining useful life (RUL) prediction methods assume that there is no maintenance, or only perfect maintenance, during the whole life cycle; thus, the predicted RUL of the system is obviously lower than its actual operating value. The complex environment of the system further increases the difficulty of maintenance, and its maintenance nodes and maintenance degree are limited by the construction period and working conditions, which increases the difficulty of RUL prediction. An RUL prediction method for a multi-component system based on the Wiener process considering maintenance is proposed. The performance degradation model of the components is established by a dynamic Bayesian network as the initial model, which addresses the uncertainty caused by insufficient data. Based on expert experience, the degree of degradation is partitioned, random failures are simulated with a Poisson process, and different maintenance strategies are used to estimate a variety of condition-based maintenance factors. An example of a subsea tree system is given to verify the effectiveness of the proposed method.
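As an illustration of the core idea only, the sketch below simulates a Wiener degradation path with periodic imperfect maintenance and estimates RUL as the Monte Carlo first-passage time to a failure threshold. The drift, diffusion, threshold, maintenance interval, and maintenance factor are hypothetical values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): drift, diffusion, failure threshold
mu, sigma, threshold = 0.8, 0.3, 20.0
dt = 0.1
maint_interval = 10.0      # hypothetical fixed maintenance interval
maint_factor = 0.4         # imperfect maintenance removes 40% of accumulated degradation

def simulate_rul(x0=0.0, t_max=200.0):
    """Monte Carlo first-passage time of a Wiener degradation path with imperfect maintenance."""
    x, t = x0, 0.0
    next_maint = maint_interval
    while t < t_max:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t
        if t >= next_maint:                 # imperfect maintenance: partial restoration
            x *= (1.0 - maint_factor)
            next_maint += maint_interval
    return t_max

ruls = np.array([simulate_rul() for _ in range(2000)])
print(f"mean RUL with maintenance: {ruls.mean():.1f}, 10th percentile: {np.percentile(ruls, 10):.1f}")
```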
Rock mass quality serves as a vital index for predicting the stability and safety status of rock tunnel faces. In tunneling practice, the rock mass quality is often assessed via a combination of qualitative and quantitative parameters. However, due to the harsh on-site construction conditions, it is rather difficult to obtain some of the evaluation parameters that are essential for rock mass quality prediction. In this study, a novel improved Swin Transformer is proposed to detect, segment, and quantify rock mass characteristic parameters such as water leakage, fractures, and weak interlayers. The site experiment results demonstrate that the improved Swin Transformer achieves optimal segmentation results, with accuracies of 92%, 81%, and 86% for water leakage, fractures, and weak interlayers, respectively. A multisource rock tunnel face characteristic (RTFC) dataset including 11 parameters for predicting rock mass quality is established. Considering that incomplete evaluation parameters in this dataset limit predictive performance, a novel tree-augmented naive Bayesian network (BN) is proposed to address the challenge of the incomplete dataset, achieving a prediction accuracy of 88%. In comparison with other commonly used machine learning models, the proposed BN-based approach showed improved performance in predicting rock mass quality with the incomplete dataset. Using the established BN, a further sensitivity analysis is conducted to quantitatively evaluate the importance of the various parameters; the results indicate that the rock strength and fracture parameters exert the most significant influence on rock mass quality.
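To illustrate how a naive Bayesian classifier can tolerate missing evaluation parameters (the tree-augmented structure and the RTFC dataset are not reproduced here), the sketch below marginalizes out NaN features at prediction time; the two features, class labels, and data are hypothetical.

```python
import numpy as np

class NaiveBayesWithMissing:
    """Gaussian naive Bayes that skips NaN features at prediction time.

    A simplified stand-in for a tree-augmented naive BN handling incomplete inputs.
    """
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_, self.means_, self.vars_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            self.means_[c] = np.nanmean(Xc, axis=0)
            self.vars_[c] = np.nanvar(Xc, axis=0) + 1e-6
        return self

    def predict(self, X):
        preds = []
        for x in X:
            obs = ~np.isnan(x)                      # marginalize out missing parameters
            scores = {}
            for c in self.classes_:
                ll = -0.5 * np.sum(np.log(2 * np.pi * self.vars_[c][obs])
                                   + (x[obs] - self.means_[c][obs]) ** 2 / self.vars_[c][obs])
                scores[c] = np.log(self.priors_[c]) + ll
            preds.append(max(scores, key=scores.get))
        return np.array(preds)

# Toy usage with two hypothetical parameters (rock strength, fracture density)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([60, 1], 5, (50, 2)), rng.normal([30, 4], 5, (50, 2))])
y = np.array([1] * 50 + [0] * 50)          # 1 = good rock mass, 0 = poor
model = NaiveBayesWithMissing().fit(X, y)
print(model.predict(np.array([[55.0, np.nan], [np.nan, 3.5]])))
```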
Objective This study aimed to investigate the potential relationship between the urinary metals copper (Cu), arsenic (As), strontium (Sr), barium (Ba), iron (Fe), lead (Pb), and manganese (Mn) and grip strength. Methods We used linear regression models, quantile g-computation, and Bayesian kernel machine regression (BKMR) to assess the relationship between metals and grip strength. Results In the multi-metal linear regression, Cu (β = −2.119), As (β = −1.318), Sr (β = −2.480), Ba (β = 0.781), Fe (β = 1.130), and Mn (β = −0.404) were significantly correlated with grip strength (P < 0.05). The results of the quantile g-computation showed that the estimated change in grip strength was −1.007 (95% confidence interval: −1.362, −0.652; P < 0.001) when each quartile of the mixture of the seven metals was increased. BKMR analysis showed that mixtures of the seven metals had a negative overall effect on grip strength, with Cu, As, and Sr being negatively associated with grip strength levels. In the total population, potential interactions were observed between As and Mn and between Cu and Mn (P for interaction of 0.003 and 0.018, respectively). Conclusion In summary, this study suggests that combined exposure to metal mixtures is negatively associated with grip strength. Cu, Sr, and As were negatively correlated with grip strength levels, and there were potential interactions between As and Mn and between Cu and Mn.
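As a minimal analogue of the multi-metal regression step (not the BKMR or quantile g-computation analyses, and not the study data), the sketch below fits an ordinary least-squares model of grip strength on several log-transformed, standardized urinary metal concentrations using synthetic data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data: seven urinary metals and grip strength (not the study data)
rng = np.random.default_rng(2)
n = 500
metals = ["Cu", "As", "Sr", "Ba", "Fe", "Pb", "Mn"]
X = rng.lognormal(mean=0.0, sigma=0.5, size=(n, len(metals)))
true_beta = np.array([-2.0, -1.3, -2.5, 0.8, 1.1, 0.0, -0.4])
y = 35 + np.log(X) @ true_beta + rng.normal(0, 3, n)    # grip strength, kg

# Multi-metal linear model on log-transformed, standardized exposures
Z = (np.log(X) - np.log(X).mean(0)) / np.log(X).std(0)
fit = sm.OLS(y, sm.add_constant(Z)).fit()
for name, beta, p in zip(metals, fit.params[1:], fit.pvalues[1:]):
    print(f"{name}: beta = {beta:+.3f}, P = {p:.3g}")
```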
For high-reliability systems in the military, aerospace, and railway fields, the challenges of reliability analysis lie in dealing with unclear failure mechanisms, complex fault relationships, lack of fault data, and uncertainty of fault states. To overcome these problems, this paper proposes a reliability analysis method based on T-S fault tree analysis (T-S FTA) and a hyper-ellipsoidal Bayesian network (HE-BN). The method describes the connections between the various system fault events by T-S fuzzy gates and translates them into a Bayesian network (BN) model. Combining the advantages of T-S fault tree modeling with the advantages of Bayesian network computation, a reliability modeling method is proposed that can fully reflect the fault characteristics of complex systems. Experts describe the degree of failure of an event in the form of interval numbers. The knowledge and experience of the experts are fused with D-S evidence theory to obtain the initial failure probability interval of each BN root node. Then, the hyper-ellipsoidal model (HM) constrains the initial failure probability interval, and a HE-BN is constructed for the system. The resulting reliability analysis method addresses the problem of insufficient failure data and uncertainty in the degree of failure. The failure probability of the system is further calculated, and the key components that affect the system's reliability are identified. The proposed method accounts for the uncertainty and incompleteness of the failure data in complex multi-state systems and establishes an easily computable reliability model that fully reflects the characteristics of complex faults and accurately identifies system weaknesses. The feasibility and accuracy of the method are further verified by case studies.
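The fusion of expert opinions is the step most easily shown in isolation. The sketch below applies Dempster's rule of combination to two hypothetical expert mass functions over a binary frame (fails / does not fail), with the ignorance mass playing the role of interval-type uncertainty; it is not the paper's HE-BN construction.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical expert opinions on a root event: F = fails, N = does not fail
F, N = frozenset({"F"}), frozenset({"N"})
FN = F | N                                  # ignorance (interval-type uncertainty)
expert1 = {F: 0.55, N: 0.25, FN: 0.20}
expert2 = {F: 0.60, N: 0.30, FN: 0.10}
fused = dempster_combine(expert1, expert2)
print({"".join(sorted(k)): round(v, 3) for k, v in fused.items()})
```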
With the development of edge devices and cloud computing, the question of how to accomplish machine learning and optimization tasks in a privacy-preserving and secure way has attracted increased attention over the past decade. As a privacy-preserving distributed machine learning method, federated learning (FL) has become popular in the last few years. However, the data privacy issue also occurs when solving optimization problems, which has received little attention so far. This survey paper is concerned with privacy-preserving optimization, with a focus on privacy-preserving data-driven evolutionary optimization. It aims to provide a roadmap from secure privacy-preserving learning to secure privacy-preserving optimization by summarizing security mechanisms and privacy-preserving approaches that can be employed in machine learning and optimization. We provide a formal definition of security and privacy in learning, followed by a comprehensive review of FL schemes and cryptographic privacy-preserving techniques. Then, we present ideas on the emerging area of privacy-preserving optimization, ranging from privacy-preserving distributed optimization to privacy-preserving evolutionary optimization and privacy-preserving Bayesian optimization (BO). We further provide a thorough security analysis of BO and evolutionary optimization methods from the perspective of inference attacks and active attacks. On this basis, an in-depth discussion is given to analyze which FL and distributed optimization strategies can be used for the design of federated optimization and what additional requirements are needed for achieving these strategies. Finally, we conclude the survey by outlining open questions and remaining challenges in federated data-driven optimization. We hope this survey can provide insights into the relationship between FL and federated optimization and will promote research interest in secure federated optimization.
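For readers unfamiliar with the federated setting the survey builds on, the following toy sketch shows the basic federated averaging pattern on synthetic linear-regression data: each client trains locally on private data and only model parameters are aggregated by the server. It is an illustration of the general pattern, not any specific scheme reviewed in the survey.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """One client's local update on its private data (never shared with the server)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients with private linear-regression data (synthetic, for illustration only)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 100)))

w_global = np.zeros(2)
for rnd in range(20):                       # FedAvg: only model parameters leave the clients
    local_ws = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_ws, axis=0, weights=sizes)
print("federated estimate:", np.round(w_global, 3))
```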
We apply stochastic seismic inversion and Bayesian facies classification for porosity modeling and igneous rock identification in the presalt interval of the Santos Basin. This integration of seismic and well-derived information enhances reservoir characterization. Stochastic inversion and Bayesian classification are powerful tools because they permit addressing the uncertainties in the model. We used the ES-MDA algorithm to obtain realizations equivalent to the P10, P50, and P90 percentiles of acoustic impedance, a novel method for acoustic inversion in the presalt. The facies were divided into five classes: reservoir 1, reservoir 2, tight carbonates, clayey rocks, and igneous rocks. To deal with the overlaps in the acoustic impedance values of the facies, we included geological information using a priori probability, indicating that structural highs are reservoir-dominated. To illustrate our approach, we conducted porosity modeling using facies-related rock-physics models for rock-physics inversion in an area with a well drilled in a coquina bank, and we evaluated the thickness and extension of an igneous intrusion near the carbonate-salt interface. The modeled porosity and the classified seismic facies are in good agreement with those observed in the wells. Notably, the coquina bank presents an improvement in porosity towards the top. The a priori probability model was crucial for limiting the clayey rocks to the structural lows. In Well B, the hit rate of the igneous rock in the three scenarios is higher than 60%, showing an excellent thickness-prediction capability.
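A stripped-down version of the Bayesian facies classification step is sketched below: class-conditional Gaussians over acoustic impedance are combined with a structure-dependent prior (reservoir-dominated on highs, clay-prone in lows) via Bayes' rule. The impedance statistics and prior values are illustrative, not the Santos Basin calibration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical impedance statistics per facies (arbitrary units); overlaps are intentional
facies = ["reservoir", "tight carbonate", "clayey rock", "igneous"]
means = np.array([9.0, 12.0, 8.5, 13.5])
stds = np.array([1.0, 1.2, 0.8, 1.0])

# A priori probabilities conditioned on structure (illustrative values only)
prior_high = np.array([0.55, 0.25, 0.05, 0.15])   # structural high: reservoir-dominated
prior_low = np.array([0.15, 0.30, 0.40, 0.15])    # structural low: clayey rocks allowed

def classify(impedance, on_structural_high):
    """Posterior facies probabilities for one impedance sample, per Bayes' rule."""
    prior = prior_high if on_structural_high else prior_low
    likelihood = norm.pdf(impedance, means, stds)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return dict(zip(facies, np.round(posterior, 3)))

print(classify(8.8, on_structural_high=True))
print(classify(8.8, on_structural_high=False))
```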
We evaluate an adaptive optimisation methodology, Bayesian optimisation (BO), for designing a minimum weight explosive reactive armour (ERA) for protection against a surrogate medium calibre kinetic energy (KE) long rod projectile and a surrogate shaped charge (SC) warhead. We perform the optimisation using a conventional BO methodology and compare it with a conventional trial-and-error approach from a human expert. A third approach, utilising a novel human-machine teaming framework for BO, is also evaluated. Data for the optimisation are generated using numerical simulations that are demonstrated to provide reasonable qualitative agreement with reference experiments. The human-machine teaming methodology is shown to identify the optimum ERA design in the fewest number of evaluations, outperforming both the stand-alone human and stand-alone BO methodologies. From a design space of almost 1800 configurations, the human-machine teaming approach identifies the minimum weight ERA design in 10 samples.
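The sketch below shows a generic BO loop over a discrete design grid of 1800 points (echoing the size of the design space mentioned above): a Gaussian-process surrogate plus an expected-improvement acquisition, with a cheap stand-in function replacing the terminal-ballistics simulations. The two design variables, their ranges, and the objective are hypothetical and unrelated to the actual ERA study.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(4)

# Discrete design space (illustrative): plate thickness [mm] and obliquity [deg] pairs
grid = np.array([[t, a] for t in np.linspace(1, 10, 45) for a in np.linspace(30, 70, 40)])

def objective(x):
    """Stand-in for an expensive simulation: penalized areal weight (lower is better)."""
    t, a = x
    weight = t * 2.7
    penetration_penalty = 50.0 / (1.0 + 0.15 * t * np.sin(np.radians(a)))
    return weight + penetration_penalty

# Bayesian optimisation loop: GP surrogate + expected improvement (minimization)
X_obs = grid[rng.choice(len(grid), 5, replace=False)]
y_obs = np.array([objective(x) for x in X_obs])
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_obs, y_obs)
    mu, sd = gp.predict(grid, return_std=True)
    best = y_obs.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next))
print("best design found:", X_obs[np.argmin(y_obs)], "score:", y_obs.min().round(2))
```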
Recently, the application of Bayesian updating to predict excavation-induced deformation has proven successful and has improved prediction accuracy significantly. However, updating the ground settlement profile, which is crucial for determining potential damage to nearby infrastructure, has received limited attention. To address this, this paper proposes a physics-guided simplified model combined with a Bayesian updating framework to accurately predict the ground settlement profile. The advantage of this model is that it eliminates the need for complex finite element modeling and makes the updating framework user-friendly. Furthermore, the model is physically interpretable, which can provide valuable references for construction adjustments. The effectiveness of the proposed method is demonstrated through two field case studies, showing that it can yield satisfactory predictions of the settlement profile.
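To make the updating idea concrete, the sketch below fits a two-parameter Gaussian-type settlement trough to a few early monitoring points by grid-based Bayesian updating and then predicts the full profile. The trough form, noise level, and monitoring data are assumed for illustration and are not the paper's physics-guided model.

```python
import numpy as np

# Simplified settlement trough: s(x) = s_max * exp(-x^2 / (2 i^2))  (Gaussian-type profile)
def trough(x, s_max, i):
    return s_max * np.exp(-x**2 / (2 * i**2))

# Hypothetical early monitoring data (distance from the wall [m], settlement [mm])
x_obs = np.array([2.0, 5.0, 10.0])
s_obs = np.array([18.0, 14.0, 6.5])
sigma_noise = 1.5                                     # assumed measurement noise, mm

# Grid-based Bayesian updating of (s_max, i) with uniform priors
s_max_grid, i_grid = np.meshgrid(np.linspace(5, 40, 200), np.linspace(2, 20, 200))
pred = trough(x_obs[:, None, None], s_max_grid, i_grid)
log_like = -0.5 * np.sum((s_obs[:, None, None] - pred) ** 2, axis=0) / sigma_noise**2
post = np.exp(log_like - log_like.max())
post /= post.sum()

# Posterior-mean parameters and the updated profile prediction
s_max_hat = np.sum(post * s_max_grid)
i_hat = np.sum(post * i_grid)
x_plot = np.linspace(0, 30, 7)
print(f"updated s_max = {s_max_hat:.1f} mm, i = {i_hat:.1f} m")
print("predicted profile [mm]:", np.round(trough(x_plot, s_max_hat, i_hat), 1))
```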
A novel inverted generalized gamma (IGG) distribution, proposed for modelling data with an upside-down bathtub hazard rate, is considered. In many real-world practical situations, when a researcher wants to conduct a comparative study of the life testing of items based on the cost and duration of testing, censoring strategies are frequently used. From this point of view, in the presence of censored data compiled from the well-known progressive Type-II censoring technique, this study examines the parameters of the IGG distribution. From a classical point of view, the likelihood and product-of-spacings estimation methods are considered. The observed Fisher information and the delta method are used to obtain approximate confidence intervals for any unknown parametric function of the suggested model. In the Bayesian paradigm, the same traditional inferential approaches are used to estimate all unknown quantities. Markov chain Monte Carlo (MCMC) steps are used to approximate all Bayes findings. Extensive numerical comparisons are presented to examine the performance of the proposed methodologies using various criteria of accuracy. Further, using several optimality criteria, the optimum progressive censoring design is suggested. To highlight how the proposed estimators can be used in practice and to verify the flexibility of the proposed model, we analyze the failure times of twenty mechanical components of a diesel engine.
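The censored-likelihood idea can be illustrated for the simplest special case, ordinary Type-II censoring (progressive Type-II with no intermediate withdrawals). The sketch below defines an inverted generalized gamma through SciPy's gengamma (X = 1/Y) and maximizes the censored log-likelihood; the parameterization, true values, and sample sizes are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy import optimize, stats

def igg_logpdf(x, a, c, scale):
    """log-pdf of X = 1/Y where Y ~ gengamma(a, c, scale); a stand-in for the paper's IGG."""
    return stats.gengamma.logpdf(1.0 / x, a, c, scale=scale) - 2.0 * np.log(x)

def igg_logsf(x, a, c, scale):
    """log P(X > x) = log P(Y < 1/x)."""
    return stats.gengamma.logcdf(1.0 / x, a, c, scale=scale)

# Simulate a Type-II censored life test: n items, stop at the m-th failure
n, m = 30, 20
a_true, c_true, scale_true = 2.0, 1.5, 0.5
x_all = np.sort(1.0 / stats.gengamma.rvs(a_true, c_true, scale=scale_true, size=n, random_state=5))
x_fail, x_stop = x_all[:m], x_all[m - 1]

def neg_loglik(theta):
    a, c, scale = np.exp(theta)                       # log-parameterization keeps params positive
    return -(igg_logpdf(x_fail, a, c, scale).sum() + (n - m) * igg_logsf(x_stop, a, c, scale))

res = optimize.minimize(neg_loglik, np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
print("MLE (a, c, scale):", np.round(np.exp(res.x), 3))
```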
In recent years, network attacks have been characterized by diversification and scale, which indicates a requirement for defense strategies to sacrifice generalizability for higher security. As the latest theoretical achievement in active defense, mimic defense demonstrates high robustness against complex attacks. This study proposes a Function-aware, Bayesian adjudication, and Adaptive updating Mimic Defense (FBAMD) theory to address the problems of existing work, including a limited ability to resist unknown threats, imprecise heterogeneity metrics, and over-reliance on the relatively-correct axiom. FBAMD incorporates three critical steps. Firstly, the common features of executors' vulnerabilities are obtained from the perspective of the functional implementation (i.e., input-output relationship extraction). Secondly, a new adjudication mechanism based on Bayes' theory is proposed by leveraging the advantages of both current results and historical confidence. Furthermore, the posterior confidence can be updated regularly with prior adjudication information, which gives the mimic system adaptability. The experimental analysis shows that FBAMD exhibits the best performance in the face of different types of attacks compared to state-of-the-art methods on real-world datasets. This study presents a promising step toward the theoretical innovation of mimic defense.
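As a loose, toy analogue of combining current outputs with historical confidence (not the FBAMD mechanism itself), the sketch below keeps a Beta-Bernoulli confidence score per executor, weights each executor's current output by its posterior-mean confidence when adjudicating, and then updates the counts with the result; the executor names, counts, and outputs are invented.

```python
# Beta-Bernoulli confidence for each executor; the counts are hypothetical history
executors = {"exec_A": [48, 2], "exec_B": [40, 10], "exec_C": [25, 25]}  # [agreed, disagreed]

def adjudicate(outputs):
    """Weight each executor's current output by its posterior-mean confidence."""
    scores = {}
    for name, out in outputs.items():
        alpha, beta = executors[name]
        confidence = (alpha + 1) / (alpha + beta + 2)          # posterior mean of Beta(alpha+1, beta+1)
        scores[out] = scores.get(out, 0.0) + confidence
    winner = max(scores, key=scores.get)
    # Update the historical confidence with the adjudication result (adaptive step)
    for name, out in outputs.items():
        executors[name][0 if out == winner else 1] += 1
    return winner, scores

print(adjudicate({"exec_A": "resp_1", "exec_B": "resp_1", "exec_C": "resp_2"}))
```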
Magnesium (Mg), being the lightest structural metal, holds immense potential for widespread applications in various fields. The development of high-performance and cost-effective Mg alloys is crucial to further advancing their commercial utilization. With the rapid advancement of machine learning (ML) technology in recent years, the "data-driven" approach for alloy design has provided new perspectives and opportunities for enhancing the performance of Mg alloys. This paper introduces a novel regression-based Bayesian optimization active learning model (RBOALM) for the development of high-performance Mg-Mn-based wrought alloys. RBOALM employs active learning to automatically explore optimal alloy compositions and process parameters within predefined ranges, facilitating the discovery of superior alloy combinations. This model further integrates pre-established regression models as surrogate functions in Bayesian optimization, significantly enhancing the precision of the design process. Leveraging RBOALM, several new high-performance alloys have been successfully designed and prepared. Notably, after mechanical property testing of the designed alloys, the Mg-2.1Zn-2.0Mn-0.5Sn-0.1Ca alloy demonstrates exceptional mechanical properties, including an ultimate tensile strength of 406 MPa, a yield strength of 287 MPa, and a 23% fracture elongation. Furthermore, the Mg-2.7Mn-0.5Al-0.1Ca alloy exhibits an ultimate tensile strength of 211 MPa, coupled with a remarkable 41% fracture elongation.
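In the spirit of surrogate-assisted active learning over a composition space (not the RBOALM implementation), the sketch below uses a random-forest regression surrogate and an upper-confidence acquisition to pick the next candidate composition from a discrete grid; the composition ranges, the toy "strength" function, and all numbers are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)

# Hypothetical composition grid (wt.% Zn, Mn, Sn, Ca) and a toy strength response
grid = rng.uniform([0, 1, 0, 0], [3, 3, 1, 0.3], size=(2000, 4))
def strength(c):                                   # stand-in for testing/simulating an alloy
    zn, mn, sn, ca = c.T
    return 220 + 60 * zn + 35 * mn + 25 * sn - 40 * (zn - 2.1) ** 2 + rng.normal(0, 5, len(c))

# Active learning loop: regression surrogate + upper-confidence acquisition
idx = rng.choice(len(grid), 8, replace=False)
X_obs, y_obs = grid[idx], strength(grid[idx])
for _ in range(12):
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_obs, y_obs)
    preds = np.stack([t.predict(grid) for t in rf.estimators_])
    mu, sd = preds.mean(0), preds.std(0)
    x_next = grid[np.argmax(mu + 1.5 * sd)]        # favour high predicted strength + uncertainty
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, strength(x_next[None, :]))
print("best composition found (Zn, Mn, Sn, Ca):", np.round(X_obs[np.argmax(y_obs)], 2))
```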
The accurate estimation of parameters is the premise for establishing a high-fidelity simulation model of a valve-controlled cylinder system. Bench test data are easily obtained, but it is challenging to emulate actual loads in research on parameter estimation of valve-controlled cylinder systems. Although the operating data of the control valve contain actual load information, their acquisition remains challenging. This paper proposes a method that fuses bench test and operating data for parameter estimation to address the aforementioned problems. The proposed method is based on Bayesian theory, and its core is a pooled fusion of prior information from bench test and operating data. Firstly, a system model is established, and the parameters in the model are analysed. Secondly, the bench and operating data of the system are collected. Then, the model parameters and weight coefficients are estimated using the data fusion method. Finally, the estimation results of the data fusion method, the Bayesian method, and the particle swarm optimisation (PSO) algorithm for the system model parameters are compared. The research shows that the weight coefficient represents the contribution of different prior information to the parameter estimation result. Parameter estimation based on the data fusion method performs better than the Bayesian method and the PSO algorithm. Increasing load complexity leads to a decrease in model accuracy, highlighting the crucial role of the data fusion method in parameter estimation studies.
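The prior-pooling idea can be shown for a single parameter: the sketch below combines a bench-test prior and an operating-data prior with a weight coefficient via a log-linear (precision-weighted) pool, then performs a conjugate Gaussian update with synthetic measurements. All means, standard deviations, and the weight are assumed for illustration and are not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two sources of prior information about one model parameter (values are illustrative)
mu_bench, sd_bench = 1.20, 0.10      # prior from bench tests
mu_oper, sd_oper = 1.05, 0.20        # prior from operating data
w = 0.7                              # weight coefficient given to the bench-test prior

# Log-linear pool of the two Gaussian priors: precisions combine with the weights
prec_pool = w / sd_bench**2 + (1 - w) / sd_oper**2
mu_pool = (w * mu_bench / sd_bench**2 + (1 - w) * mu_oper / sd_oper**2) / prec_pool
sd_pool = prec_pool**-0.5

# Conjugate update of the pooled prior with noisy measurements of the parameter
y = rng.normal(1.12, 0.05, size=25)          # synthetic observations, noise sd assumed known
prec_post = prec_pool + len(y) / 0.05**2
mu_post = (mu_pool * prec_pool + y.sum() / 0.05**2) / prec_post
print(f"pooled prior: {mu_pool:.3f} +/- {sd_pool:.3f}")
print(f"posterior:    {mu_post:.3f} +/- {prec_post**-0.5:.3f}")
```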
Stable water isotopes are natural tracers quantifying the contribution of moisture recycling to local precipitation, i.e., the moisture recycling ratio, but various isotope-based models usually lead to different results, which affects the accuracy of local moisture recycling estimates. In this study, a total of 18 stations from four typical areas in China were selected to compare the performance of isotope-based linear and Bayesian mixing models and to determine the local moisture recycling ratio. Among the three vapor sources, namely advection, transpiration, and surface evaporation, the advection vapor usually played a dominant role, and the contribution of surface evaporation was less than that of transpiration. When abnormal values were ignored, the arithmetic averages of the differences between the isotope-based linear and Bayesian mixing models were 0.9% for transpiration, 0.2% for surface evaporation, and –1.1% for advection, and the medians were 0.5%, 0.2%, and –0.8%, respectively. The importance of transpiration was slightly lower in most cases when the Bayesian mixing model was applied, and the contribution of advection was relatively larger. The Bayesian mixing model was found to perform better in determining an efficient solution, since the linear model sometimes resulted in negative contribution ratios. A sensitivity test with two isotope scenarios indicated that the Bayesian model had a relatively low sensitivity to changes in the isotope input, and it was important to accurately estimate the isotopes in precipitation vapor. Generally, the Bayesian mixing model should be recommended instead of a linear model. The findings are useful for understanding the performance of isotope-based linear and Bayesian mixing models under various climate backgrounds.
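The contrast between the two model types can be sketched for a three-source, two-tracer case: the linear model solves the tracer and mass balances exactly (and can, in principle, return negative fractions), while the Bayesian mixing model constrains the fractions to the simplex via a Dirichlet prior and importance sampling. All end-member signatures, observations, and uncertainties below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical end-member isotope signatures (d18O, d2H, per mil) for three vapor sources
sources = {"advection": (-18.0, -130.0), "transpiration": (-8.0, -60.0), "evaporation": (-14.0, -105.0)}
S = np.array(list(sources.values())).T        # 2 x 3 matrix of source signatures
obs = np.array([-15.5, -113.0])               # precipitation vapor isotopes (assumed)
sigma = np.array([0.5, 4.0])                  # assumed observation uncertainty

# Linear mixing model: two tracer balances plus the mass balance, solved exactly
A = np.vstack([S, np.ones(3)])
f_linear = np.linalg.solve(A, np.append(obs, 1.0))

# Bayesian mixing model: flat Dirichlet prior on the fractions + importance sampling
f_samp = rng.dirichlet(np.ones(3), size=200_000)
resid = f_samp @ S.T - obs
logw = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
w = np.exp(logw - logw.max())
w /= w.sum()
f_bayes = w @ f_samp

for i, name in enumerate(sources):
    print(f"{name:13s} linear: {f_linear[i]:+.3f}  Bayesian: {f_bayes[i]:.3f}")
```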
Indoor localization systems are crucial for addressing the limitations of the traditional global positioning system (GPS) in indoor environments due to signal attenuation issues. As complex indoor spaces become more sophisticated, indoor localization systems become essential for improving user experience, safety, and operational efficiency. Indoor localization methods based on Wi-Fi fingerprints require a high-density location fingerprint database, but this can increase the computational burden in the online phase. Bayesian networks, which integrate prior knowledge or domain expertise, are an effective solution for accurately determining indoor user locations. These networks use probabilistic reasoning to model the relationships among various localization parameters in indoor environments that are challenging to navigate. This article proposes an adaptive Bayesian model for multi-floor environments based on fingerprinting techniques to minimize errors in estimating user location. The proposed system is an off-the-shelf solution that uses existing Wi-Fi infrastructure to estimate the user's location, and it operates in an offline phase and an online phase. In the offline phase, a mobile device with Wi-Fi capability collects radio signals; in the online phase, samples are generated by Gibbs sampling, based on the proposed Bayesian model and the radio map, to predict the user's location. Experimental results showcase the superior performance of the proposed model when compared to other existing models and methods. The proposed model achieved a lower average localization error, surpassing the accuracy of competing approaches. Notably, this was attained with minimal reliance on reference points, underscoring the efficiency and efficacy of the proposed model in accurately estimating user locations in indoor environments.
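To illustrate the fingerprint-matching idea on a single floor (the paper's multi-floor model and Gibbs sampler are not reproduced), the sketch below builds a hypothetical radio map from a log-distance path-loss model, then computes a posterior over grid cells from one noisy RSSI observation with a uniform prior and a Gaussian likelihood.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical radio map: mean RSSI (dBm) of 4 APs over a 10 m x 10 m grid, 1 m spacing
xs, ys = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
aps = np.array([[0, 0], [0, 9], [9, 0], [9, 9]])
d = np.sqrt((xs[..., None] - aps[:, 0]) ** 2 + (ys[..., None] - aps[:, 1]) ** 2)
radio_map = -40 - 20 * np.log10(np.maximum(d, 1.0))          # log-distance path-loss model

# Online phase: observed RSSI vector with noise, true position at cell (3, 7)
true_cell = (3, 7)
obs = radio_map[true_cell] + rng.normal(0, 2.0, size=4)

# Bayesian fingerprint matching: uniform prior over cells, Gaussian RSSI likelihood
sigma = 3.0
log_post = -0.5 * np.sum((radio_map - obs) ** 2, axis=-1) / sigma**2
post = np.exp(log_post - log_post.max())
post /= post.sum()
map_cell = np.unravel_index(np.argmax(post), post.shape)
mean_x, mean_y = np.sum(post * xs), np.sum(post * ys)
print("MAP cell:", map_cell, " posterior-mean position: (%.2f, %.2f)" % (mean_x, mean_y))
```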
When learning the structure of a Bayesian network, the search space expands significantly as the network size and the number of nodes increase, leading to a noticeable decrease in algorithm efficiency. Traditional constraint-based methods typically rely on the results of conditional independence tests. However, excessive reliance on these test results can lead to a series of problems, including increased computational complexity and inaccurate results, especially when dealing with large-scale networks, where performance bottlenecks are particularly evident. To overcome these challenges, we propose a Markov blanket discovery algorithm based on constrained local neighborhoods for constructing undirected independence graphs. This method uses the Markov blanket discovery algorithm to refine the constraints in the initial search space and sets an appropriate constraint radius, thereby reducing the initial computational cost of the algorithm and effectively narrowing the initial solution range. Specifically, the method first determines the local neighborhood space to limit the search range, thereby reducing the number of possible graph structures that need to be considered. This process not only improves the accuracy of the search space constraints but also significantly reduces the number of conditional independence tests. By performing conditional independence tests within the local neighborhood of each node, the method avoids comprehensive tests across the entire network, greatly reducing computational complexity. At the same time, the setting of the constraint radius further improves computational efficiency while ensuring accuracy. Compared to other algorithms, this method can quickly and efficiently construct undirected independence graphs while maintaining high accuracy. Experimental simulation results show that this method has significant advantages in obtaining the structure of undirected independence graphs, maintaining an accuracy of over 96% while reducing the number of conditional independence tests by at least 50%. This significant performance improvement is due to the effective constraint on the search space and the fine control of computational costs.
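A small sketch of the local-neighborhood idea, not the proposed algorithm itself: pairs whose marginal correlation falls outside an assumed "constraint radius" are skipped, and partial-correlation (Fisher-z) conditional independence tests are run only against candidate neighbors, yielding an undirected independence graph for synthetic chain-structured data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(10)

# Synthetic data from a chain X0 -> X1 -> X2 -> X3 plus an isolated X4
n = 2000
X = np.zeros((n, 5))
X[:, 0] = rng.normal(size=n)
X[:, 1] = 0.8 * X[:, 0] + rng.normal(size=n)
X[:, 2] = 0.8 * X[:, 1] + rng.normal(size=n)
X[:, 3] = 0.8 * X[:, 2] + rng.normal(size=n)
X[:, 4] = rng.normal(size=n)

def ci_test(i, j, cond, alpha=0.01):
    """Gaussian CI test of X_i independent of X_j given X_cond via partial correlation (Fisher z)."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(X[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) > alpha            # True -> treated as independent

# Constrain the search to a local neighborhood: only strongly correlated pairs are candidates
corr = np.corrcoef(X, rowvar=False)
radius = 0.3                                                   # assumed constraint radius
edges = set()
for i, j in combinations(range(5), 2):
    if abs(corr[i, j]) < radius:
        continue                                               # outside the constraint radius
    neighbors = [k for k in range(5) if k not in (i, j) and abs(corr[i, k]) >= radius]
    if not ci_test(i, j, []) and not any(ci_test(i, j, [k]) for k in neighbors):
        edges.add((i, j))
print("undirected independence graph edges:", sorted(edges))
```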