Funding: Supported by the National Natural Science Foundation of China (Grant No. 50909058), the "Chen Guang" Project of the Shanghai Municipal Education Commission and Shanghai Education Development Foundation Science & Technology (Grant No. 10CG51), and the Innovation Program of the Shanghai Municipal Education Commission (Grant No. 11YZ133).
Abstract: Available safety egress time under ship fire (SFAT) is critical to ship fire safety assessment, design, and emergency rescue. Although SFAT can be determined using fire models such as the two-zone fire model CFAST and the field model FDS, none of these models can address the uncertainties in the input parameters. To solve this problem, the current study presents a framework of uncertainty analysis for SFAT. First, a deterministic model estimating SFAT is built. The uncertain input parameters are treated as random variables with given probability distribution functions. The deterministic SFAT model is then coupled with a Monte Carlo sampling method to investigate the uncertainty of SFAT. Spearman's rank-order correlation coefficient (SRCC) is used to examine the sensitivity of SFAT to each uncertain input parameter. To illustrate the proposed approach in detail, a case study is performed. Based on the proposed approach, the probability density function and cumulative distribution function of SFAT are obtained, and a sensitivity analysis of SFAT is conducted. The results show a strong negative correlation between SFAT and the fire growth coefficient, whereas the effects of the other parameters are weak enough to be neglected.
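The propagation-and-sensitivity loop described in this abstract is straightforward to prototype. The sketch below is a minimal illustration rather than the paper's model: the deterministic SFAT calculation is replaced by the standard t-squared fire relation t = sqrt(q_crit / alpha), and both input distributions are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
N = 20_000

# Assumed input uncertainties (distributions and ranges are illustrative only)
alpha  = rng.uniform(0.01, 0.19, N)     # fire growth coefficient of a t^2 fire, kW/s^2
q_crit = rng.normal(1000.0, 100.0, N)   # critical heat release rate for untenability, kW

# Stand-in deterministic model: time for a t^2 fire (Q = alpha * t^2) to reach q_crit
sfat = np.sqrt(q_crit / alpha)

# Empirical distribution of SFAT and rank-order sensitivity (SRCC) of each input
print(f"5th-percentile SFAT = {np.percentile(sfat, 5):.0f} s")
for name, x in [("alpha", alpha), ("q_crit", q_crit)]:
    rho, _ = spearmanr(x, sfat)
    print(f"SRCC({name}, SFAT) = {rho:+.3f}")
```

In line with the abstract's finding, the fire growth coefficient dominates: its SRCC is strongly negative because a faster-growing fire shortens the available egress time.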
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 10674016 and 10875013, and the Specialized Research Foundation for the Doctoral Program of Higher Education under Grant No. 20080027005.
Abstract: We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distribution can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
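A minimal sketch of the sample-in-a-modified-potential-then-reject idea on a double well; the clipped-barrier modification below is an illustrative choice, not the paper's decomposition. Because U >= U_mod everywhere, accepting each chain state with probability exp(-beta (U - U_mod)) converts samples of exp(-beta U_mod) into unbiased samples of the target exp(-beta U).

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 10.0  # inverse temperature; the barrier of height 1 is rarely crossed directly

def U(x):       # original double-well potential with a barrier at x = 0
    return (x**2 - 1.0)**2

def U_mod(x):   # modified potential: barrier clipped to ease diffusion (illustrative)
    return np.minimum(U(x), 0.2)

# Metropolis chain in the *modified* potential, which crosses the barrier freely
n, x = 200_000, 1.0
chain = np.empty(n)
for i in range(n):
    y = x + rng.normal(0.0, 0.5)
    if rng.random() < np.exp(-beta * (U_mod(y) - U_mod(x))):
        x = y
    chain[i] = x

# Rejection step: since U >= U_mod, accepting with prob exp(-beta (U - U_mod))
# yields unbiased samples of exp(-beta U)
keep = rng.random(n) < np.exp(-beta * (U(chain) - U_mod(chain)))
samples = chain[keep]
print(f"kept {samples.size}/{n}, <x^2> = {(samples**2).mean():.3f}")  # expect ~1
```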
Funding: Sponsored by the National Natural Science Foundation of China (NSFC: 11971259).
Abstract: We propose Monte Carlo Nonlocal physics-informed neural networks (MC-Nonlocal-PINNs), which are a generalization of MC-fPINNs in L. Guo et al. (Comput. Methods Appl. Mech. Eng. 400 (2022), 115523) for solving general nonlocal models such as integral equations and nonlocal PDEs. Similar to MC-fPINNs, our MC-Nonlocal-PINNs handle nonlocal operators in a Monte Carlo way, resulting in a very stable approach for high-dimensional problems. We present a variety of test problems, including high-dimensional Volterra-type integral equations, hypersingular integral equations, and nonlocal PDEs, to demonstrate the effectiveness of our approach.
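The core trick, estimating the nonlocal operator by random sampling rather than a fixed quadrature grid, can be shown without any network. The sketch below uses closed-form stand-ins for the kernel K and the solution u to estimate a Volterra-type term of the kind an MC-Nonlocal-PINN residual must evaluate at each collocation point; in the actual method, u would be the neural network and the estimate would be differentiated through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the Volterra-type nonlocal term
#   (I u)(x) = \int_0^x K(x, t) u(t) dt.
# K and u are simple closed-form stand-ins so the estimate can be checked exactly.
K = lambda x, t: np.exp(-(x - t))
u = lambda t: t**2

def nonlocal_term_mc(x, n_samples=4096):
    t = rng.uniform(0.0, x, n_samples)     # random quadrature nodes on (0, x)
    return x * np.mean(K(x, t) * u(t))     # interval length times mean integrand

x = 1.5
est = nonlocal_term_mc(x)
# Exact value: \int_0^x e^{-(x-t)} t^2 dt = x^2 - 2x + 2 - 2 e^{-x}
exact = x**2 - 2*x + 2 - 2*np.exp(-x)
print(f"MC estimate = {est:.4f}, exact = {exact:.4f}")
```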
Funding: Funded by the National Science VIP specialized project of China (Grant No. 2011ZX05025-001-03) and the National Science Foundation of China (Grant No. 41274117).
Abstract: In tomographic statics seismic data processing, it is crucial to determine an optimum base for a near-surface model. In this paper, we consider near-surface model base determination as a global optimization problem. Given information from uphole shooting and the first-arrival times from a surface seismic survey, we present a near-surface velocity model construction method based on a Monte Carlo sampling scheme using a layered equivalent medium assumption. Compared with traditional least-squares first-arrival tomography, this scheme can delineate a clearer weathering-layer base, resulting in a better implementation of the datum correction. Examples using synthetic and field data demonstrate the effectiveness of the proposed scheme.
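A toy version of the Monte Carlo model search, with a single flat refractor standing in for the layered equivalent medium; the model bounds and noise level are assumptions. Random layered models are drawn and the one whose predicted first arrivals best fit the picks is kept.

```python
import numpy as np

rng = np.random.default_rng(7)

# First-arrival time for a single flat refractor (direct wave vs. head wave)
def first_arrival(x, v1, v2, h):
    direct = x / v1
    head = x / v2 + 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)
    return np.minimum(direct, head)

# Synthetic "observed" picks from a hidden true model, plus pick noise
offsets = np.linspace(50.0, 1000.0, 40)
t_obs = first_arrival(offsets, 800.0, 2000.0, 30.0) + rng.normal(0, 2e-3, offsets.size)

# Monte Carlo global search over the model space (bounds are assumptions)
best, best_misfit = None, np.inf
for _ in range(50_000):
    v1 = rng.uniform(300.0, 1500.0)            # weathering-layer velocity, m/s
    v2 = rng.uniform(v1 + 100.0, 3500.0)       # sub-weathering velocity, m/s
    h  = rng.uniform(5.0, 100.0)               # weathering-layer thickness, m
    misfit = np.sum((first_arrival(offsets, v1, v2, h) - t_obs)**2)
    if misfit < best_misfit:
        best, best_misfit = (v1, v2, h), misfit

print("best (v1, v2, h):", np.round(best, 1))
```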
Abstract: Electrical impedance tomography (EIT) aims to reconstruct the conductivity distribution using boundary measurements of the voltage potential. Traditional regularization-based methods suffer from error propagation during the iteration process. The statistical inverse problem method instead uses statistical inference to estimate the unknown parameters. In this article, we develop a nonlinear weighted anisotropic total variation (NWATV) prior density function based on the recently proposed NWATV regularization method. We compute the corresponding posterior density function, i.e., the solution of the EIT inverse problem in the statistical sense, via a modified Markov chain Monte Carlo (MCMC) sampling, and present numerical experiments to validate the proposed approach.
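The Bayesian pipeline can be sketched with a toy linear forward map in place of the nonlinear EIT operator and a plain isotropic TV prior in place of NWATV; both substitutions are simplifications, and the chain below is short and untuned.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear inverse problem standing in for the (nonlinear) EIT forward map:
# d = A @ sigma + noise, with a total-variation-type prior on sigma.
n, m, noise = 32, 20, 0.05
A = rng.normal(size=(m, n)) / np.sqrt(n)
sigma_true = np.ones(n); sigma_true[10:20] = 2.0   # piecewise-constant conductivity
d = A @ sigma_true + rng.normal(0, noise, m)

def log_post(s, lam=20.0):
    misfit = -0.5 * np.sum((A @ s - d)**2) / noise**2
    tv = -lam * np.sum(np.abs(np.diff(s)))          # isotropic TV stand-in for NWATV
    return misfit + tv

# Random-walk Metropolis (short, untuned chain, for illustration only)
s, lp = np.ones(n), log_post(np.ones(n))
samples = []
for it in range(30_000):
    prop = s + rng.normal(0, 0.01, n)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        s, lp = prop, lp_prop
    if it >= 15_000:
        samples.append(s.copy())

post_mean = np.mean(samples, axis=0)
print("error of posterior mean:", np.linalg.norm(post_mean - sigma_true))
print("error of initial guess: ", np.linalg.norm(np.ones(n) - sigma_true))
```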
Abstract: The three-parameter Weibull distribution is one of the preferred models for describing product life. However, it is difficult to estimate its location parameter from a small sample. This paper presents a stochastic simulation method to estimate the Weibull location parameter from a small sample of product life observations together with a large amount of statistically simulated life data. A big-data technique is applied to find the relationship between the minimal observation in a product life sample of size n (n ≥ 3) and the Weibull location parameter. An example demonstrates the applicability and value of the big-data-based stochastic simulation method. Compared with other methods, the stochastic simulation method can be applied to very small samples, such as samples of size three, and it is easy to apply.
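The relationship the method exploits, between the sample minimum and the location parameter, can be illustrated in a few lines. The sketch below simplifies by assuming the shape and scale parameters are known, which the paper's method does not require.

```python
import numpy as np

rng = np.random.default_rng(5)

# Three-parameter Weibull: X = loc + scale * W with W ~ Weibull(shape)
shape, scale, loc = 2.0, 100.0, 50.0     # hidden "true" parameters (illustrative)
n = 3                                     # very small sample, as in the paper
observed = loc + scale * rng.weibull(shape, n)

# Stochastic simulation: distribution of the minimum of n draws when loc = 0.
# Its mean is the expected gap between the sample minimum and the location parameter.
sim_minima = (scale * rng.weibull(shape, (200_000, n))).min(axis=1)
offset = sim_minima.mean()

loc_hat = observed.min() - offset
print(f"observed minimum = {observed.min():.1f}, estimated location = {loc_hat:.1f}")
```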
Abstract: Regression random forest is becoming a widely used machine learning technique for spatial prediction, showing competitive performance in various geoscience fields. Like other popular machine learning methods for spatial prediction, regression random forest does not exactly honor the response variable's measured values at sampled locations, whereas competitor methods such as regression-kriging fit the observed values at sampled locations perfectly by construction. Exactly matching the measured values at sampled locations is often desirable in geoscience applications. This paper presents a new approach ensuring that regression random forest perfectly matches the response variable's observed values at sampled locations. The main idea is to use principal component analysis to create an orthogonal representation of the ensemble of regression tree predictors resulting from the traditional regression random forest, and then to reformulate the exact conditioning problem as a Bayes-linear-Gauss problem on the principal component scores. This problem has an analytical solution, making it easy to perform Monte Carlo sampling of new principal component scores and then reconstruct regression tree predictors that perfectly match the observed values at sampled locations; the average of the reconstructed tree predictors also matches the measured values by construction. The method's effectiveness is illustrated on a synthetic dataset, where the ground truth is available everywhere within the study region, and on a real dataset comprising southwest England's geochemical concentration data, in comparison with regression-kriging and the traditional regression random forest. The proposed method can perfectly fit the measured values at sampled locations while achieving out-of-sample predictive performance comparable to regression-kriging and the traditional regression random forest.
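A compressed sketch of the PCA step and the conditioning shift, on a toy one-dimensional dataset; a simple least-squares shift of the scores stands in for the paper's full Bayes-linear-Gauss update, and exactness holds whenever the ensemble-average misfit lies in the span of the principal components.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy 1-D "spatial" data standing in for the geochemical case study
X = rng.uniform(0, 10, (30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Per-tree predictions at the sampled locations (trees x locations)
P = np.stack([tree.predict(X) for tree in rf.estimators_])

# PCA across trees: an orthogonal representation of the tree-predictor ensemble
mean = P.mean(axis=0)
U, s, Vt = np.linalg.svd(P - mean, full_matrices=False)
scores = U * s                                    # one row of scores per tree

# Conditioning: shift every tree's scores by the same correction so that the
# ensemble average lands on the observations y (least-squares stand-in for the
# analytical Bayes-linear-Gauss update of the paper)
resid = y - mean                                  # ensemble-average misfit at samples
shift = np.linalg.lstsq(Vt.T, resid, rcond=None)[0]
P_cond = mean + (scores + shift) @ Vt

print("max |ensemble mean - y| before:", np.abs(P.mean(0) - y).max())
print("max |ensemble mean - y| after: ", np.abs(P_cond.mean(0) - y).max())
```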
Funding: Partially supported by the National Natural Science Foundation of China (Grant Nos. 41972277, 42277158, and U1934212) and the Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China (Grant No. 41827807).
Abstract: The three-dimensional (3D) roughness of a discontinuity affects the quality of the rock mass, but 3D roughness is hard to measure because the discontinuity is not fully visible in engineering practice. Two-dimensional (2D) roughness can be calculated from the visible traces, but it is difficult to obtain enough traces to derive 3D roughness directly during tunnel excavation. In this study, a new method using Bayesian theory is proposed to derive 3D roughness from a small number of 2D roughness samples. To calculate 3D roughness more accurately, a new regression formula for 2D roughness is first established based on wavelet analysis. A JRC3D prediction model based on Bayesian theory is then developed, and Markov chain Monte Carlo (MCMC) sampling is adopted to evaluate it. A discontinuity sample collected from the literature is used to verify the proposed method: twenty groups are randomly drawn for each sampling size of 2, 3, 4, and 5 from the JRC2D values of 170 profiles of the discontinuity. The results indicate that 100%, 90%, 85%, and 60% of the predicted JRC3D values for sampling sizes of 5, 4, 3, and 2, respectively, fall into the tolerance interval [JRC_true − 1, JRC_true + 1], validating that a sampling size of 5 is sufficient for predicting JRC3D. The sensitivity of the sampling results to the influencing factors, namely the correlation function, the prior distribution, and the prior information, is then analyzed. The discontinuity across the excavation face at ZK78+67.5 of the Daxiagu tunnel is taken as an engineering application, and the results further verify that the predicted JRC3D with a sampling size of 5 is generally in good agreement with the true JRC3D values.
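The Bayesian small-sample step can be sketched generically; the numbers, the Normal likelihood, and the priors below are illustrative, and the paper's wavelet-based regression formula is not reproduced. A random-walk Metropolis chain turns a sampling size of five into a posterior with a credible interval.

```python
import numpy as np

rng = np.random.default_rng(11)

# A small sample of 2D roughness values (illustrative numbers, not from the paper)
jrc2d = np.array([8.4, 9.1, 7.8, 8.9, 8.2])   # sampling size of 5, as recommended

# Toy model: roughness ~ Normal(mu, sigma^2), weakly-informative priors,
# MCMC over the parameters (mu, log_sigma)
def log_post(mu, log_sigma):
    sigma = np.exp(log_sigma)
    loglik = -0.5 * np.sum((jrc2d - mu)**2) / sigma**2 - jrc2d.size * log_sigma
    logprior = -0.5 * ((mu - 9.0) / 5.0)**2 - 0.5 * (log_sigma / 2.0)**2
    return loglik + logprior

theta = np.array([9.0, 0.0])
lp = log_post(*theta)
draws = []
for it in range(30_000):
    prop = theta + rng.normal(0, [0.3, 0.2])   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 10_000:
        draws.append(theta[0])

draws = np.array(draws)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"posterior mean = {draws.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```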
Funding: Project supported by the National Natural Science Foundation of China (No. 61201381) and the China Postdoctoral Science Foundation (No. 2016M592989).
Abstract: Passive source localization via a maximum likelihood (ML) estimator can achieve high accuracy but involves a high calculation burden, especially when based on time-of-arrival and frequency-of-arrival measurements, owing to its inherent nonlinearity and nonconvexity. In this paper, we use the Pincus theorem and Monte Carlo importance sampling (MCIS) to achieve an approximate global solution of the ML problem in a computationally efficient manner. The main contribution is that we construct a Gaussian probability density function (PDF), called the importance function, for efficient sampling, to approximate the ML estimate under complicated distributions. The improved performance of the proposed method is attributed to the optimal selection of the importance function and the guaranteed convergence to a global maximum. This process greatly reduces the amount of calculation, although an initial solution estimate, obtained from a Taylor series expansion, is required. The MCIS method is, however, robust to this prior knowledge through point sampling and correction of the importance weights. Simulation results show that the proposed method can achieve the Cramér-Rao lower bound at moderate Gaussian noise levels and outperforms existing methods.
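The Pincus/MCIS recipe is compact: by the Pincus theorem, the ML estimate is the limit as lambda -> infinity of E[theta exp(lambda L(theta))] / E[exp(lambda L(theta))], and MCIS estimates both expectations with draws from a Gaussian importance function. Below is a toy two-dimensional time-of-arrival (range) version with a crude centroid initial guess in place of the Taylor-series one.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Toy TOA setup: four sensors measure noisy ranges to an unknown source
sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
src, noise = np.array([3.0, 7.0]), 0.1
r_obs = np.linalg.norm(sensors - src, axis=1) + rng.normal(0, noise, 4)

def loglik(P):  # log-likelihood (up to constants) for an array of candidate points
    r = np.linalg.norm(sensors[None, :, :] - P[:, None, :], axis=2)
    return -0.5 * np.sum((r_obs - r)**2, axis=1) / noise**2

# Gaussian importance function centered on a rough initial guess
guess, spread = sensors.mean(axis=0), 4.0
q = multivariate_normal(mean=guess, cov=spread**2 * np.eye(2))
P = q.rvs(size=100_000, random_state=3)

# Self-normalized importance-sampling estimate of the Pincus ratio
# (lambda = 1 already suffices here: the likelihood is sharply peaked)
lam = 1.0
logw = lam * loglik(P) - q.logpdf(P)
w = np.exp(logw - logw.max())            # shift for numerical stability
p_hat = (w[:, None] * P).sum(axis=0) / w.sum()
print("estimate:", np.round(p_hat, 3), " true source:", src)
```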
Funding: Partially supported by the National Natural Science Foundation of China (Nos. 61333001 and 61473099).
Abstract: In an interception engagement, if the target movement information is not accurate enough for the mid-course guidance of the intercepting missiles, the interception mission may fail as a result of large handover errors. This paper proposes a novel cooperative mid-course guidance scheme for multiple missiles intercepting a target under large detection errors. Under this scheme, the launch and interception moments are staggered across missiles: the earlier-launched missiles obtain a relatively accurate detection of the target during their terminal guidance, which allows the later missiles to eliminate the handover error in their mid-course guidance. A significant merit of this scheme is that the available resources are fully exploited and fewer missiles are needed to achieve the interception mission. To this end, the design of the cooperative handover parameters is first formulated as an optimization problem; then an algorithm based on Monte Carlo sampling and stochastic approximation is proposed to solve it, and the convergence of the algorithm is proved. Finally, simulation experiments validate the effectiveness of the proposed cooperative scheme and algorithm.
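The Monte Carlo plus stochastic approximation pattern is, in its simplest form, a Kiefer-Wolfowitz iteration on a noisy expectation. The sketch below is entirely illustrative: a scalar design parameter and a toy miss model stand in for the paper's cooperative handover-parameter problem.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy miss model: squared handover miss as a function of a scalar design
# parameter theta (say, a staggered launch delay), with the detection error
# entering as the random variable xi. Minimized at theta = 2 by construction.
def miss_sq(theta, xi):
    return ((theta - 2.0) + 0.5 * xi)**2

# Kiefer-Wolfowitz stochastic approximation with Monte Carlo evaluations;
# common random numbers on both sides of the finite difference cut the variance.
theta = 0.0
for k in range(1, 2001):
    a, c = 0.5 / k, 0.1 / k**0.25          # standard SA gain sequences
    xi = rng.normal(0.0, 1.0, 64)          # fresh Monte Carlo batch each iteration
    grad = (miss_sq(theta + c, xi).mean() - miss_sq(theta - c, xi).mean()) / (2 * c)
    theta -= a * grad

print(f"optimized theta = {theta:.3f} (true optimum of the toy model: 2.0)")
```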
Funding: Supported in part by the Natural Science Foundation of Jiangsu Province, China (No. BK20171433), and in part by the Science and Technology Project of State Grid Jiangsu Electric Power Corporation, China (No. J2018066).
Abstract: Urban electricity and heat networks (UEHN) comprise the coupling and interactions between electric power systems and district heating systems, demonstrating the geographical and functional features of integrated energy systems. UEHN are expected to provide an effective way to accommodate intermittent and unpredictable renewable energy sources, so the application of stochastic optimization approaches to UEHN analysis is highly desired. In this paper, we propose a chance-constrained coordinated optimization approach for UEHN that considers the uncertainties in electricity loads, heat loads, and photovoltaic outputs, as well as the correlations between these uncertain sources. A solution strategy combining Latin Hypercube Sampling Monte Carlo Simulation (LHSMCS) with a heuristic algorithm is specifically designed for the proposed chance-constrained coordinated optimization. Finally, test results on a UEHN comprising a modified IEEE 33-bus system and a 32-node district heating system at Barry Island verify the feasibility and effectiveness of the proposed framework.
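Generating the correlated uncertainty scenarios that feed such a chance-constrained model is the LHS step. One common construction, a Gaussian copula applied to Latin Hypercube samples, is sketched below; the correlation matrix and the marginals are assumptions, and the paper's exact correlation treatment may differ.

```python
import numpy as np
from scipy.stats import norm, beta, qmc

n, d = 1000, 3  # scenarios; dimensions: electricity load, heat load, PV output

# Latin Hypercube samples on [0,1)^3, mapped to correlated standard normals
u = qmc.LatinHypercube(d=d, seed=4).random(n)
z = norm.ppf(u)
R = np.array([[ 1.0,  0.6, -0.3],       # assumed correlation between uncertainties
              [ 0.6,  1.0, -0.2],
              [-0.3, -0.2,  1.0]])
z = z @ np.linalg.cholesky(R).T          # induces correlation (slightly perturbs
                                         # the per-dimension stratification)

# Back through the copula to each uncertainty's marginal (all assumed)
elec_load = norm.ppf(norm.cdf(z[:, 0]), loc=5.0, scale=0.5)   # MW
heat_load = norm.ppf(norm.cdf(z[:, 1]), loc=3.0, scale=0.4)   # MW
pv_output = 2.0 * beta.ppf(norm.cdf(z[:, 2]), 2.0, 2.0)       # MW, bounded in [0, 2]

print("achieved correlations:\n", np.round(np.corrcoef(z, rowvar=False), 2))
```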