Blended-coal combustion involves complex interactions among the component coals. Treating the blend as a single coal and computing it with the single mixture-fraction/probability density function (PDF) method ignores the influence between coal types and produces large deviations. The dual mixture-fraction/PDF method, in contrast, defines the properties of each component coal separately and tracks each coal's combustion process, so it can capture the mutual influence of the coals' combustion characteristics. The single and dual mixture-fraction/PDF methods were both applied to the same 300 MW tangentially fired boiler, and the simulations were compared with field measurements; the results show that the dual mixture-fraction/PDF method better matches the actual in-furnace combustion of the blend. The dual mixture-fraction/PDF method was then used to simulate the combustion of a particular coal blend, yielding the flow, temperature, and flue-gas distribution characteristics of the pulverized-coal-fired boiler.
By combining fractal theory with D-S evidence theory, an algorithm based on the fusion of multi-fractal features is presented. Fractal features are extracted, and a basic probability assignment function is designed. Comparison and simulation are performed on the new algorithm, the previous algorithm based on a single feature, and an algorithm based on a neural network. The results of the comparison and simulation illustrate that the new algorithm is feasible and valid.
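The fusion step underlying this kind of algorithm is Dempster's rule of combination. As an illustrative sketch (not the authors' code; the function name and the frozenset data layout are mine), two basic probability assignments can be combined as follows:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with Dempster's rule.
    Focal elements are frozensets; mass assigned to the empty intersection
    (the conflict K) is redistributed by the normalization factor 1 - K."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: BPAs cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

For example, combining `{{'a'}: 0.6, {'a','b'}: 0.4}` with `{{'a'}: 0.5, {'b'}: 0.5}` yields a conflict of 0.3 and renormalized masses 0.5/0.7 for {'a'} and 0.2/0.7 for {'b'}.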
Sparsity constrained deconvolution can improve the resolution of band-limited seismic data compared to conventional deconvolution. However, such deconvolution methods result in nonunique solutions and suppress weak reflections. The Cauchy function, modified Cauchy function, and Huber function are commonly used constraint criteria in sparse deconvolution. We used numerical experiments to analyze the ability of sparsity constrained deconvolution to restore reflectivity sequences and protect weak reflections under different constraint criteria. The experimental results demonstrate that the performance of sparsity constrained deconvolution depends on the agreement between the constraint criterion and the probability distribution of the reflectivity sequences; furthermore, the modified Cauchy constraint protects weak reflections better than the other criteria. Based on the model experiments, the probability distribution of the reflectivity sequences of carbonate and clastic formations is statistically analyzed using well-logging data, and the modified Cauchy-constrained deconvolution is then applied to real seismic data, greatly improving the resolution.
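A common form of the Cauchy constraint criterion is the logarithmic penalty sketched below; the exact functional the authors minimize (and their modified variant) may differ, so treat this as a generic illustration with names of my choosing:

```python
import math

def cauchy_penalty(r, sigma):
    """Cauchy sparsity penalty on a reflectivity sequence r:
    sum_i ln(1 + r_i^2 / sigma^2).  It grows only logarithmically for
    large spikes, penalizing them far more gently than an L2 norm,
    which is what favors sparse, spiky reflectivity solutions."""
    return sum(math.log1p((ri / sigma) ** 2) for ri in r)
```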
The mixture-fraction probability density function (PDF) reflects the influence of turbulence on the mixing of fuel and oxidizer and plays a very important role in the theoretical study and numerical simulation of turbulent non-premixed combustion. This work studies the mixture-fraction PDF in a non-premixed flame using large eddy simulation (LES). The LES-predicted mean and root-mean-square profiles of velocity and temperature for Sandia Flame D agree well with the experimental results, and the instantaneous temperature field shows a physically reasonable turbulent flame structure. The mixture-fraction PDF is bell-shaped in the reaction zone, while on the lean and rich sides it is bell-shaped or monotonic, depending on the local flow state. A study of simplified presumed-PDF models shows that the β-function model predicts both bell-shaped and monotonic PDFs well; the clipped Gaussian model reproduces only the bell-shaped PDF satisfactorily; the multi-point δ-function model performs similarly to the clipped Gaussian model; and the two-δ-function model shows the largest deviations.
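The β-function presumed-PDF model takes the local mean and variance of the mixture fraction and returns a beta density. A minimal sketch (function names are mine, not from the paper):

```python
import math

def beta_pdf_params(z_mean, z_var):
    """Shape parameters (a, b) of the presumed beta-function PDF for
    mixture fraction, given its mean and variance.  Requires
    0 < z_var < z_mean * (1 - z_mean)."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * gamma, (1.0 - z_mean) * gamma

def beta_pdf(z, a, b):
    """Beta density B(z; a, b) on (0, 1), evaluated via log-gamma
    for numerical stability."""
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1.0) * math.log(z)
                    + (b - 1.0) * math.log(1.0 - z) - log_norm)
```

For a mean of 0.5 and variance of 0.05 this gives the symmetric Beta(2, 2) shape, a bell-shaped PDF; skewed means produce the monotonic shapes mentioned above.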
According to the cubic law and the law of mass conservation for incompressible fluids, the seepage character of the fracture surface was simulated using fractal theory and the random Brownian function, and the permeability coefficient of a single fracture was obtained. To test the stability of the method, 500 simulations were conducted for each fractal dimension. The simulated permeability coefficient was analyzed with probability density distribution and cumulative probability distribution statistics. The statistics show that the scatter of the permeability coefficient increases with the fractal dimension, and that the calculation is more stable when the fractal dimension is relatively small. Based on Bayes' theorem, the characteristic index of the permeability coefficient on fractal dimension, P(Dfi|Ri), is established. The index shows that when the simulated permeability coefficient is relatively large, it can clearly indicate the fractal dimension of the structure surface, with a probability of 82%. The calculated results of the characteristic index verify the feasibility of the method.
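For a smooth parallel-plate fracture, the cubic law reduces to a simple closed form for the hydraulic conductivity; the fractal rough-surface simulation in the paper generalizes this, but the baseline relation can be sketched as (names and default fluid properties are my assumptions, water at roughly 20 °C):

```python
def cubic_law_conductivity(aperture, rho=1000.0, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity K (m/s) of a smooth parallel-plate fracture
    from the cubic law: K = rho * g * b^2 / (12 * mu), where b is the
    aperture (m), rho the fluid density (kg/m^3), g gravity (m/s^2),
    and mu the dynamic viscosity (Pa*s)."""
    return rho * g * aperture ** 2 / (12.0 * mu)
```

A 0.1 mm aperture then gives a conductivity of about 8.2e-3 m/s; roughness from a fractal surface lowers and scatters this value, consistent with the dispersion reported above.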
Using IAP AGCM simulation results for the period 1961-2005, summer hot days in China were calculated and then compared with observations. Generally, the spatial pattern of hot days is reasonably reproduced, with more hot days found in northern China, the Yangtze and Huaihe River basin, the Chuan-Yu region, and southern Xinjiang. However, the model tends to overestimate the number of hot days in these regions, particularly in the Yangtze and Huaihe River basin, where the simulated summer-mean number of hot days is 13 days more than observed when averaged over the whole region, and the maximum overestimation can reach 23 days. Analysis of the probability distribution of daily maximum temperature (Tmax) suggests that the warm bias in the model-simulated Tmax contributes largely to the overestimation of hot days. Furthermore, the discrepancy in the simulated variance of the Tmax distribution also plays a non-negligible role: it can account for 22% of the total bias of simulated hot days in August in the Yangtze and Huaihe River basin. Quantifying the model bias from the mean value and the variability separately can provide more information for further model improvement.
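The decomposition of the hot-day bias into a mean part and a variance part can be illustrated with a Gaussian approximation of daily Tmax (this is my sketch of the mechanism, not the paper's method; 35 °C is the common operational hot-day threshold in China):

```python
import math

def expected_hot_days(n_days, mean, std, threshold=35.0):
    """Expected number of hot days if daily Tmax is approximated as
    Gaussian: n_days * P(Tmax >= threshold), computed with the
    complementary error function."""
    z = (threshold - mean) / (std * math.sqrt(2.0))
    return n_days * 0.5 * math.erfc(z)
```

With the threshold above the mean, either a warm mean bias or an inflated variance fattens the upper tail and raises the expected hot-day count, which is exactly the two-part bias the abstract quantifies.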
Non-equilibrium fission has been described by the diffusion model. To describe the diffusion process analytically, the analytical solution of the Smoluchowski equation in a harmonic oscillator potential is obtained. This solution describes the probability distribution and the diffusive current as functions of the variables x and t. The results indicate that the probability distribution and the diffusive current depend on the initial distribution shape, the initial position, and the nuclear temperature T; the time to reach the quasi-stationary state is proportional to the friction coefficient β but is independent of the initial distribution and the nuclear temperature T. The prerequisites for a negative diffusive current are justified. This method provides an approach for describing the diffusion process of fission in complicated potentials analytically.
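For a δ-function initial distribution, the Smoluchowski solution in a harmonic potential is the Gaussian of an Ornstein-Uhlenbeck process. The sketch below (my parameterization, with k_B = 1 and relaxation rate θ = k/(mβ)) reproduces the two qualitative findings above: the relaxation time grows with the friction β through θ, while the quasi-stationary variance T/k depends on temperature but not on the starting point:

```python
import math

def ou_mean_var(x0, t, theta, temp, k):
    """Mean and variance of the Smoluchowski solution in the harmonic
    potential V(x) = k*x^2/2, for a delta-function initial distribution
    at x0.  theta is the relaxation rate and temp the temperature in
    energy units; the distribution at time t is Gaussian with these
    moments."""
    mean = x0 * math.exp(-theta * t)
    var = (temp / k) * (1.0 - math.exp(-2.0 * theta * t))
    return mean, var
```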
To improve the prediction accuracy of strip thickness in hot rolling, a Dempster/Shafer (D-S) information reconstitution prediction method (DSIRPM) was presented. DSIRPM consists of three steps. First, iba Analyzer was employed to analyze the periodicity of hot rolling and identify three parameters sensitive to strip thickness, each of which was used for polynomial curve-fitting prediction based on least squares, giving preliminary prediction results. Then, D-S evidence theory was used to reconstruct the prediction results under the different parameters, in which the basic probability assignment (BPA) is the key; the proposed contribution rate, calculated using the grey relational degree, was taken as the BPA, which makes the BPA selection objective. Finally, the future strip thickness trend was inferred from this distribution. Experimental results clearly show improved prediction accuracy and stability compared with other prediction models, such as GM(1,1) and the weighted average prediction model.
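The grey relational degree used here to weight the BPA can be sketched as follows (a standard formulation with resolution coefficient ρ = 0.5; the function name and the degenerate-case guard are mine):

```python
def grey_relational_degree(ref, cmp_seq, rho=0.5):
    """Grey relational degree between a reference series and a comparison
    series: the mean of the grey relational coefficients
    (dmin + rho*dmax) / (delta_k + rho*dmax), with resolution rho."""
    deltas = [abs(a - b) for a, b in zip(ref, cmp_seq)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0.0:          # identical series: perfect relation
        return 1.0
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

A series closer to the reference scores nearer to 1, so normalizing the degrees of the three parameter-wise predictions yields an objective BPA-like weighting.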
The uncertainties of some key factors influencing coal crushing, such as rock strength, pore pressure, and the magnitudes and orientations of the three principal stresses, can lead to uncertainty in coal crushing and make it very difficult to predict under in-situ reservoir conditions. To account for this uncertainty, a deterministic prediction model of coal crushing under in-situ reservoir conditions was established based on the Hoek-Brown criterion. Through this model, the key influence factors on coal crushing were selected as random variables, and the corresponding probability density functions were determined by combining experimental data with the Latin hypercube method. Then, the first-order second-moment method and the presented model were combined to evaluate the failure probability involved in coal crushing analysis. Using the presented method, the failure probabilities of coal crushing were analyzed for the WS5-5 well in the Ningwu basin, China, and the relations between the failure probability and the influence factors were further discussed. The results show that the failure probabilities of the WS5-5 CBM well vary from 0.6 to 1.0; moreover, for the coal seam section at a depth of 784.3-785 m, the failure probabilities equal 1, which fits well with the experimental results. The failure probability of coal crushing grows nonlinearly with increasing principal stress difference and decreasing uniaxial compressive strength.
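The deterministic backbone of the model is the generalized Hoek-Brown failure criterion, which can be sketched as (function name and example parameter values are mine):

```python
def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a=0.5):
    """Major principal stress at failure from the generalized Hoek-Brown
    criterion: sigma1 = sigma3 + sigma_ci * (mb*sigma3/sigma_ci + s)**a,
    where sigma_ci is the uniaxial compressive strength of the intact
    rock and mb, s, a are rock-mass constants."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a
```

In a reliability analysis, sigma_ci, mb, s, and the in-situ stresses become random variables, and the limit-state function compares this strength against the acting major principal stress.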
A commonly used statistical procedure for describing observed statistical sets is to use their conventional moments or cumulants. When choosing an appropriate parametric distribution for a data set, the parameters are typically estimated by the method of moments: a system of equations is created in which the sample conventional moments are set equal to the corresponding moments of the theoretical distribution. However, the moment method of parameter estimation is not always convenient, especially for small samples. An alternative approach is based on the use of other characteristics, which the author calls L-moments. L-moments are analogous to conventional moments, but they are based on linear combinations of order statistics, i.e., L-statistics. Using L-moments is theoretically preferable to using conventional moments because L-moments characterize a wider range of distributions. When estimated from samples, L-moments are more robust to the presence of outliers in the data. Experience also shows that, compared to conventional moments, L-moments are less prone to estimation bias. Parameter estimates obtained using L-moments, especially in the case of small samples, are often even more accurate than estimates obtained by the maximum likelihood method. The use of L-moments for small data sets is known in the statistical literature primarily from meteorology. This paper deals with the use of L-moments for large data sets of income distribution (individual data) and wage distribution (data ordered into an interval frequency distribution with open extreme intervals). It also presents a comparison of the accuracy of the method of L-moments with the accuracy of other methods of point estimation of the parameters of parametric probability distributions, for large data sets of individual data and of data ordered into an interval frequency distribution.
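The first four sample L-moments can be computed from probability-weighted moments of the order statistics; a minimal sketch of Hosking's unbiased estimators (requires at least four observations; the function name is mine):

```python
def sample_l_moments(data):
    """First four sample L-moments (l1, l2, l3, l4) from the
    probability-weighted moments b0..b3 of the sorted sample."""
    x = sorted(data)
    n = len(x)
    b = [0.0, 0.0, 0.0, 0.0]
    for i, xi in enumerate(x, start=1):
        b[0] += xi
        b[1] += xi * (i - 1) / (n - 1)
        b[2] += xi * (i - 1) * (i - 2) / ((n - 1) * (n - 2))
        b[3] += xi * (i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3))
    b = [bi / n for bi in b]
    l1 = b[0]                                   # L-location (the mean)
    l2 = 2 * b[1] - b[0]                        # L-scale
    l3 = 6 * b[2] - 6 * b[1] + b[0]             # related to L-skewness l3/l2
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]  # related to L-kurtosis l4/l2
    return l1, l2, l3, l4
```

Because each L-moment is a linear combination of order statistics, a single outlier shifts it far less than it shifts a cubed or fourth-power conventional moment, which is the robustness property the abstract relies on.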
In this paper a new proposal of a straight line, the "modified Tukey's line", for fitting to a normal quantile-quantile plot, or normal Q-Q plot, is presented. This probability plot allows us to determine whether a set of sample observations is distributed according to a normal distribution. The sample quantiles are plotted against the quantiles of a theoretical probability model, which in this case is the normal distribution. If the data set follows this distribution, the plotted points will have a rectilinear configuration. To verify this, there are different alternatives for fitting a straight line to the plotted points. Among the straight lines that can be fitted to a Q-Q plot, besides the proposed line, the following are considered: the line that passes through the first and third quartiles, the line that passes through the 10th and 90th percentiles, the line fitted by the method of least squares, the line with slope s and with the mean of the data set as intercept, Theil's line, and Tukey's line. In addition, an example is developed in which the considered lines and the proposed line are drawn on a normal Q-Q plot obtained for the same set of observations, showing the existing differences among the straight lines.
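One of the candidate lines, the line through the first and third quartiles, is easy to make concrete (a sketch, not the paper's code; theoretical standard-normal quantiles on the x-axis, sample quantiles on the y-axis):

```python
from statistics import NormalDist, quantiles

def quartile_line(data):
    """Slope and intercept of the straight line through the first- and
    third-quartile points of a normal Q-Q plot."""
    q1, _, q3 = quantiles(data, n=4)
    z3 = NormalDist().inv_cdf(0.75)   # ~0.6745; z1 = -z3 by symmetry
    slope = (q3 - q1) / (2.0 * z3)
    intercept = (q1 + q3) / 2.0       # value of the line at z = 0
    return slope, intercept
```

For exactly normal data the slope approaches the sample standard deviation and the intercept the mean, which is why such lines serve as a visual normality check.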
Dempster-Shafer (DS) theory of evidence has been widely used in many data-fusion application systems. However, how to determine the basic probability assignment, which is the main and first step in evidence theory, is still an open issue. In this paper, a new method to obtain the basic probability assignment (BPA) is proposed based on the similarity measure between generalized fuzzy numbers. In the proposed method, a species model is constructed by determining the min, average, and max values of each feature to form a fuzzy number. Then, a new radius-of-gravity (ROG) method for the similarity measure between generalized fuzzy numbers is used to calculate the BPA functions of each instance. Finally, the efficiency of the proposed method is illustrated by the classification of the Iris data.
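The species-model step, building a fuzzy number from the (min, average, max) of a feature, can be sketched with a triangular membership function (this illustrates the modeling idea only; the paper's ROG similarity measure itself is not reproduced here):

```python
def triangular_membership(x, lo, mode, hi):
    """Membership of x in the triangular fuzzy number built from the
    (min, average, max) of a species' feature values: 0 outside
    [lo, hi], rising linearly to 1 at the mode and falling back to 0."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mode:
        return (x - lo) / (mode - lo)
    return (hi - x) / (hi - mode)
```

A test instance's feature value is then scored against each class's fuzzy number, and the (normalized) scores provide the BPA masses.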
Since in most blind source separation (BSS) algorithms the estimations of the probability density function (pdf) of the sources are fixed or can only switch between one super-Gaussian and one sub-Gaussian model, they may not be efficient at separating sources with different distributions. To solve the problem of pdf mismatch and the separation of hybrid mixtures in BSS, the generalized Gaussian model (GGM) is introduced to model the pdf of the sources, since it provides a general structure for univariate distributions. Its great advantage is that only one parameter needs to be determined when modeling the pdf of different sources, so it is less complex than the Gaussian mixture model. By using the maximum likelihood (ML) approach, the convergence of the proposed algorithm is improved. Computer simulations show that it is more efficient and valid than conventional methods with fixed pdf estimation.
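The generalized Gaussian density has the standard form below; varying the single shape parameter sweeps from super-Gaussian (peaky, heavy-tailed) to sub-Gaussian shapes, which is the flexibility the abstract exploits (function name is mine):

```python
import math

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta:
    p(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta).
    beta = 1 gives a Laplacian (super-Gaussian), beta = 2 a Gaussian,
    and beta -> inf tends to a uniform (sub-Gaussian) shape."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-((abs(x) / alpha) ** beta))
```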
It is desirable to obtain the joint probability distribution (JPD) over a set of random variables from local data, so as to avoid the hard work of collecting statistical data over all variables at once. Much work has been done for the case in which all variables form a known directed acyclic graph (DAG). However, steady directed cyclic graphs (DCGs) may arise when we simply combine modules containing local data, where a module is composed of a child variable and its parent variables. So far, the physical and statistical meaning of steady DCGs has remained unclear and unsolved. This paper illustrates the physical and statistical meaning of steady DCGs and presents a method to calculate the JPD from local data, given that all variables are in a known single-valued Dynamic Uncertain Causality Graph (S-DUCG), thus defining a new Bayesian network with steady DCGs. "Single-valued" means that only the causes of the true state of a variable are specified, while the false state is the complement of the true state.
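For the familiar DAG case that the abstract takes as its starting point, composing local modules into a JPD is just the chain-rule product; a minimal sketch for three binary variables A → B → C (all names and probabilities are illustrative, and this does not cover the cyclic S-DUCG case):

```python
def chain_jpd(p_a, p_b_given_a, p_c_given_b):
    """JPD over binary A -> B -> C built from local modules (each child
    with its parent): P(a, b, c) = P(a) * P(b|a) * P(c|b).
    p_b_given_a[a] is P(B=1 | A=a); p_c_given_b[b] is P(C=1 | B=b)."""
    jpd = {}
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                pa = p_a if a else 1.0 - p_a
                pb = p_b_given_a[a] if b else 1.0 - p_b_given_a[a]
                pc = p_c_given_b[b] if c else 1.0 - p_c_given_b[b]
                jpd[(a, b, c)] = pa * pb * pc
    return jpd
```

The difficulty the paper addresses is precisely that this clean factorization no longer applies once module composition creates directed cycles.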
The configurational properties of tail-like polymer chains with one end attached to a flat surface are studied using a dynamic Monte Carlo technique. We find that the probability distribution of the free end in the z direction, P(Rz), and the density profile, p(z), can be scaled approximately by a factor β into length-independent functions for both random-walking (RW) and self-avoiding-walking (SAW) tail-like chains, where the factor β is related to the mean square end-to-end distance ⟨RE⟩. The scaled P(Rz) of the SAW chain roughly overlaps that of the RW chain, but the scaled p(z) of the SAW chain is located at smaller βz than that of the RW chain.
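The RW reference behavior invoked above, a mean square end-to-end distance growing linearly with chain length, is easy to reproduce with a lattice Monte Carlo sketch (free chains on a cubic lattice, no grafting surface or self-avoidance; all names are mine):

```python
import random

def rw_end_to_end_sq(n_steps, n_chains, seed=0):
    """Mean square end-to-end distance of simple random walks on a
    cubic lattice; for an ideal (RW) chain <R^2> = n_steps."""
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    total = 0.0
    for _ in range(n_chains):
        x = y = z = 0
        for _ in range(n_steps):
            dx, dy, dz = rng.choice(moves)
            x += dx; y += dy; z += dz
        total += x * x + y * y + z * z
    return total / n_chains
```

SAW chains swell instead (⟨R²⟩ ~ N^(2ν) with ν ≈ 0.588 in 3D), which is why the SAW and RW profiles in the paper only collapse after the β rescaling.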
Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast data on wave height, wind speed, and current velocity in the Bohai Sea are sampled for a case study. Four distributions, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models, and platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those from univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters close to the actual sea state for ocean platform design.
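One of the four copulas named above, the bivariate Clayton copula, has a simple closed form, and the "OR" joint return period follows directly from it. A sketch (function names, the θ value, and the sampling interval are my illustrative choices):

```python
def clayton_cdf(u, v, theta):
    """Bivariate Clayton copula
    C(u, v) = (u**-theta + v**-theta - 1) ** (-1/theta), theta > 0;
    larger theta gives stronger lower-tail dependence."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_return_period_or(u, v, theta, interval=1.0):
    """'OR' joint return period (either variable exceeds its design
    value): T = interval / (1 - C(u, v)), with u, v the marginal
    non-exceedance probabilities and interval the sampling interval
    in years."""
    return interval / (1.0 - clayton_cdf(u, v, theta))
```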
The logistic growth model with correlated additive and multiplicative Gaussian white noise is used to analyze a tumor cell population. The effects of perfectly correlated and perfectly anti-correlated noise on the stationary properties of the tumor cell population are studied. Because in both cases the diffusion coefficient has a zero point on the real axis, some special features of the system arise. It is found that in both cases an increase of the multiplicative noise intensity can cause tumor cell extinction, and that in the perfectly anti-correlated case the stationary probability distribution as a function of tumor cell population exhibits two extrema.
The proper determination of the curve number (CN) in the SCS-CN method reduces errors in predicting runoff volume. In this paper the variability of CN was studied for 5 Slovak and 5 Polish Carpathian catchments. Empirical curve numbers were fitted with probability distributions, and theoretical characteristics of CN were then estimated. For 100-CN, the Generalized Extreme Value (GEV) distribution was identified as the best fit in most of the catchments. The differences between the characteristics estimated from the theoretical distributions and the tabulated CN values were assessed, and the antecedent runoff conditions (ARC) of Hawkins and of Hjelmfelt were compared. The analysis was carried out for various magnitudes of rainfall, with confidence intervals (CI) aiding the evaluation. The studies revealed discordances between the tabulated and estimated CNs. The tabulated CNs were usually lower than the estimated values; therefore, applying the median value and Hjelmfelt's probabilistic ARC for wet runoff conditions is advisable. For dry conditions, Hjelmfelt's ARC usually estimated CN better than Hawkins's did, but in several catchments neither sufficiently depicted the variability in CN.
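The sensitivity of predicted runoff to CN comes straight from the SCS-CN equations, which can be sketched as (metric form; λ = 0.2 is the conventional initial-abstraction ratio, and the function name is mine):

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """Direct runoff depth Q (mm) from the SCS-CN method:
    S = 25400/CN - 254 (mm) is the potential maximum retention,
    Ia = lam*S the initial abstraction, and
    Q = (P - Ia)**2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Because Q depends on CN through both S and Ia, modest errors in the tabulated CN propagate strongly into the predicted runoff volume, which is the motivation for the distribution-fitting analysis above.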
文摘混煤燃烧存在复杂的相互影响,将混煤当成单一煤种并采用单混合分数/概率密度函数(probability density function,PDF)方法计算,意味着忽略了煤种之间的影响,结果会产生很大偏差。而双混合分数/PDF方法可以分别定义各单煤性质并跟踪各单煤的燃烧过程,能够体现煤种之间燃烧特性的影响。利用单、双混合分数/PDF方法对同1台300 MW四角切圆锅炉进行模拟研究,并与实测数据进行对比,结果表明:双混合分数/PDF方法模拟的结果更符合混煤在炉内实际的燃烧情况。同时采用双混合分数/PDF方法模拟某一混煤燃烧过程,得到燃烧煤粉锅炉的流动,温度和烟气分布等特性。
文摘By combining fractal theory with D-S evidence theory, an algorithm based on the fusion of multi-fractal features is presented. Fractal features are extracted, and basic probability assignment function is designed. Comparison and simulation are performed on the new algorithm, the old algorithm based on single feature and the algorithm based on neural network. Results of the comparison and simulation illustrate that the new algorithm is feasible and valid.
基金supported by the Major Basic Research Development Program of China (973 Program)(No.2013CB228606)the National Science foundation of China (No.41174117)+1 种基金the National Major Science-Technology Project (No.2011ZX05031-001)Innovation Fund of PetroChina (No.2010D-5006-0301)
文摘Sparsity constrained deconvolution can improve the resolution of band-limited seismic data compared to conventional deconvolution. However, such deconvolution methods result in nonunique solutions and suppress weak reflections. The Cauchy function, modified Cauchy function, and Huber function are commonly used constraint criteria in sparse deconvolution. We used numerical experiments to analyze the ability of sparsity constrained deconvolution to restore reflectivity sequences and protect weak reflections under different constraint criteria. The experimental results demonstrate that the performance of sparsity constrained deconvolution depends on the agreement between the constraint criteria and the probability distribution of the reflectivity sequences; furthermore, the modified Cauchy- constrained criterion protects the weak reflections better than the other criteria. Based on the model experiments, the probability distribution of the reflectivity sequences of carbonate and clastic formations is statistically analyzed by using well-logging data and then the modified Cauchy-constrained deconvolution is applied to real seismic data much improving the resolution.
基金Project(50934006) supported by the National Natural Science Foundation of ChinaProject(CX2012B070) supported by Hunan Provincial Innovation Foundation for Postgraduate,ChinaProject(1343-76140000024) Supported by Academic New Artist Ministry of Education Doctoral Post Graduate in 2012,China
文摘According to Cubic law and incompressible fluid law of mass conservation, the seepage character of the fracture surface was simulated with the simulation method of fractal theory and random Brown function. Furthermore, the permeability coefficient of the single fracture was obtained. In order to test the stability of the method, 500 simulations were conducted on each different fractal dimension. The simulated permeability coefficient was analyzed in probability density distribution and probability cumulative distribution statistics. Statistics showed that the discrete degree of the permeability coefficient increases with the increase of the fractal dimension. And the calculation result has better stability when the fractal dimension value is relatively small. According to the Bayes theory, the characteristic index of the permeability coefficient on fractal dimension P(Dfi| Ri) is established. The index, P(Dfi| Ri), shows that when the simulated permeability coefficient is relatively large, it can clearly represent the fractal dimension of the structure surface, the probability is 82%. The calculated results of the characteristic index verify the feasibility of the method.
基金supported by the Special Scientific Research Fund of the Meteorological Public Welfare Profession of China[grant number GYHY01406021]National Key Research and Development Program[grant number 2016YFC0402702]the National Natural Science Foundation of China[grant numbers 41575095,41175073]
文摘Using lAP AGCM simulation results for the period 1961-2005, summer hot days in China were calculated and then compared with observations. Generally, the spatial pattern of hot days is reasonably reproduced, with more hot days found in northern China, the Yangtze and Huaihe River basin, the Chuan-Yu region, and southern Xinjiang. However, the model tends to overestimate the number of hot days in the above-mentioned regions, particularly in the Yangtze and Huaihe River basin where the simulated summer-mean hot days is 13 days more than observed when averaged over the whole region, and the maximum overestimation of hot days can reach 23 days in the region. Analysis of the probability distribution of daily maximum temperature (Trnax) suggests that the warm bias in the model-simulated Tmax contributes largely to the overestimation of hot days in the model. Furthermore, the discrepancy in the simulated variance of the Tmax distribution also plays a non- negligible role in the overestimation of hot days. Indeed, the latter can even account for 22% of the total bias of simulated hot days in August in the Yangtze and Huaihe River basin. The quantification of model bias from the mean value and variability can provide more information for further model improvement.
文摘Non-equilibrium fission has been described by diffusion model. In order to describe the diffusion process analytically, the analytical solution of Smoluchowski equation in harmonic oscillator potential is obtained. This analytical solution is able to describe the probability distribution and the diffusive current with the variable x and t. The results indicate that the probability distribution and the diffusive current are relevant to the initial distribution shape, initial position, and the nuclear temperature T; the time to reach the quasi-stationary state is proportional to friction coefficient beta, but is independent of the initial distribution status and the nuclear temperature T. The prerequisites of negative diffusive current are justified. This method provides an approach to describe the diffusion process for fissile process in complicated potentials analytically.
基金Projects(61174115,51104044)supported by the National Natural Science Foundation of ChinaProject(L2010153)supported by Scientific Research Project of Liaoning Provincial Education Department,China
文摘To improve prediction accuracy of strip thickness in hot rolling, a kind of Dempster/Shafer(D_S) information reconstitution prediction method(DSIRPM) was presented. DSIRPM basically consisted of three steps to implement the prediction of strip thickness. Firstly, iba Analyzer was employed to analyze the periodicity of hot rolling and find three sensitive parameters to strip thickness, which were used to undertake polynomial curve fitting prediction based on least square respectively, and preliminary prediction results were obtained. Then, D_S evidence theory was used to reconstruct the prediction results under different parameters, in which basic probability assignment(BPA) was the key and the proposed contribution rate calculated using grey relational degree was regarded as BPA, which realizes BPA selection objectively. Finally, from this distribution, future strip thickness trend was inferred. Experimental results clearly show the improved prediction accuracy and stability compared with other prediction models, such as GM(1,1) and the weighted average prediction model.
基金Project(51204201)supported by the National Natural Science Foundation of ChinaProjects(2011ZX05036-001,2011ZX05037-004)supported by the National Science and Technology Major Program of China+1 种基金Project(2010CB226706)supported by the National Basic Research Program of ChinaProject(11CX04050A)supported by the Fundamental Research Funds for the Central Universities of China
文摘The uncertainties of some key influence factors on coal crushing,such as rock strength,pore pressure and magnitude and orientation of three principal stresses,can lead to the uncertainty of coal crushing and make it very difficult to predict coal crushing under the condition of in-situ reservoir.To account for the uncertainty involved in coal crushing,a deterministic prediction model of coal crushing under the condition of in-situ reservoir was established based on Hoek-Brown criterion.Through this model,key influence factors on coal crushing were selected as random variables and the corresponding probability density functions were determined by combining experiment data and Latin Hypercube method.Then,to analyze the uncertainty of coal crushing,the firstorder second-moment method and the presented model were combined to address the failure probability involved in coal crushing analysis.Using the presented method,the failure probabilities of coal crushing were analyzed for WS5-5 well in Ningwu basin,China,and the relations between failure probability and the influence factors were furthermore discussed.The results show that the failure probabilities of WS5-5 CBM well vary from 0.6 to 1.0; moreover,for the coal seam section at depth of 784.3-785 m,the failure probabilities are equal to 1,which fit well with experiment results; the failure probability of coal crushing presents nonlinear growth relationships with the increase of principal stress difference and the decrease of uniaxial compressive strength.
文摘Commonly used statistical procedure to describe the observed statistical sets is to use their conventional moments or cumulants. When choosing an appropriate parametric distribution for the data set is typically that parameters of a parametric distribution are estimated using the moment method of creating a system of equations in which the sample conventional moments lay in the equality of the corresponding moments of the theoretical distribution. However, the moment method of parameter estimation is not always convenient, especially for small samples. An alternative approach is based on the use of other characteristics, which the author calls L-moments. L-moments are analogous to conventional moments, but they are based on linear combinations of order statistics, i.e., L-statistics. Using L-moments is theoretically preferable to the conventional moments and consists in the fact that L-moments characterize a wider range of distribution. When estimating from sample L-moments, L-moments are more robust to the presence of outliers in the data. Experience also shows that, compared to conventional moments, L-moments are less prone to bias of estimation. Parameter estimates obtained using L-moments are mainly in the case of small samples often even more accurate than estimates of parameters made by maximum likelihood method. Using the method of L-moments in the case of small data sets from the meteorology is primarily known in statistical literature. This paper deals with the use of L-moments in the case for large data sets of income distribution (individual data) and wage distribution (data are ordered to form of interval frequency distribution of extreme open intervals). This paper also presents a comparison of the accuracy of the method of L-moments with an accuracy of other methods of point estimation of parameters of parametric probability distribution in the case of large data sets of individual data and data ordered to form of interval frequency distribution.
Abstract: In this paper a new straight line, the "modified Tukey's line", is proposed for fitting to a normal quantile-quantile (Q-Q) plot. This probability plot allows us to determine whether a set of sample observations follows a normal distribution: the sample quantiles are plotted against the quantiles of a theoretical probability model, here the normal distribution, and if the data follow that distribution the plotted points show a rectilinear configuration. There are different alternatives for fitting a straight line to the plotted points. Besides the proposed line, this paper considers the following: the line through the first and third quartiles, the line through the 10th and 90th percentiles, the line fitted by the method of least squares, the line whose slope is the sample standard deviation s and whose intercept is the mean of the data set, Theil's line, and Tukey's line. An example is developed in which these lines and the proposed line are drawn on a normal Q-Q plot of the same set of observations, so that the differences among them can be observed.
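One of the classical alternatives listed above, the line through the first- and third-quartile points of the normal Q-Q plot, can be sketched in a few lines of Python. The quantile convention below (simple linear interpolation) is one of several in use and is an assumption of this sketch, not taken from the paper.

```python
from statistics import NormalDist

def quartile_line(sample):
    """Line through the first- and third-quartile points of a normal
    Q-Q plot: returns (slope, intercept) of the line joining
    (z_0.25, q_0.25) and (z_0.75, q_0.75), where z are standard normal
    quantiles and q are sample quantiles."""
    x = sorted(sample)
    n = len(x)

    def sample_quantile(p):
        # linear-interpolation quantile (one of several conventions)
        h = (n - 1) * p
        lo = int(h)
        return x[lo] + (h - lo) * (x[min(lo + 1, n - 1)] - x[lo])

    z1, z3 = NormalDist().inv_cdf(0.25), NormalDist().inv_cdf(0.75)
    q1, q3 = sample_quantile(0.25), sample_quantile(0.75)
    slope = (q3 - q1) / (z3 - z1)
    intercept = q1 - slope * z1
    return slope, intercept
```

For normally distributed data, the slope approximates the standard deviation and the intercept the mean; systematic departures of the plotted points from any such line signal non-normality.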
Funding: Supported by the National High Technology Project (863) (No. 2006AA02Z320), the National Natural Science Foundation of China (Nos. 30700154 and 60874105), the Zhejiang Natural Science Foundation (Nos. Y107458 and RY1080422), and the School Youth Fund of Shanghai Jiaotong University
Abstract: Dempster-Shafer (DS) theory of evidence has been widely used in many data fusion application systems. However, how to determine the basic probability assignment, which is the main and first step in evidence theory, is still an open issue. In this paper, a new method to obtain the Basic Probability Assignment (BPA) is proposed based on the similarity measure between generalized fuzzy numbers. In the proposed method, a model for each species is constructed as a fuzzy number determined by the minimum, average, and maximum values of the training data. Then, a new Radius Of Gravity (ROG) similarity measure between generalized fuzzy numbers is used to calculate the BPA functions of each instance. Finally, the efficiency of the proposed method is illustrated by the classification of the Iris data.
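The overall shape of such a BPA construction can be sketched as follows. Note the simplifications: a plain triangular membership function stands in for the paper's generalized fuzzy numbers, and crisp-value membership replaces the ROG similarity measure; the class models are hypothetical.

```python
def triangular_model(samples):
    """Triangular fuzzy number (min, mean, max) built from training samples."""
    return (min(samples), sum(samples) / len(samples), max(samples))

def membership(fuzzy, value):
    """Membership of a crisp value in a triangular fuzzy number (a, b, c).
    (A simplified stand-in for the ROG similarity between fuzzy numbers.)"""
    a, b, c = fuzzy
    if a < value <= b:
        return (value - a) / (b - a)
    if b < value < c:
        return (c - value) / (c - b)
    return 1.0 if value == b else 0.0

def bpa(models, value):
    """Basic probability assignment over classes: normalize the memberships
    so that the masses sum to 1 (uniform mass under total ignorance)."""
    m = {cls: membership(f, value) for cls, f in models.items()}
    total = sum(m.values())
    if total == 0:
        return {cls: 1.0 / len(m) for cls in m}
    return {cls: v / total for cls, v in m.items()}
```

In an Iris-style workflow, one triangular model per class and per feature would be built from training data, and the per-feature BPAs combined with Dempster's rule.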
Abstract: Since in most blind source separation (BSS) algorithms the estimated probability density functions (pdf) of the sources are fixed, or can only switch between one super-Gaussian and one sub-Gaussian model, they may not separate sources with different distributions efficiently. To solve the problems of pdf mismatch and the separation of hybrid mixtures in BSS, the generalized Gaussian model (GGM) is introduced to model the pdf of the sources, since it provides a general structure for univariate distributions. Its great advantage is that only one parameter needs to be determined when modeling the pdf of different sources, so it is less complex than a Gaussian mixture model. By using a maximum likelihood (ML) approach, the convergence of the proposed algorithm is improved. Computer simulations show that it is more efficient and valid than conventional methods with fixed pdf estimation.
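The generalized Gaussian density underlying the GGM can be written down directly; the sketch below uses the standard parameterization p(x) = β / (2αΓ(1/β)) · exp(−(|x|/α)^β), where the single shape parameter β interpolates between super- and sub-Gaussian behavior.

```python
import math

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta:
    beta = 2 gives a Gaussian shape, beta = 1 a Laplacian;
    beta < 2 is super-Gaussian (peaky), beta > 2 sub-Gaussian (flat)."""
    coeff = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coeff * math.exp(-(abs(x) / alpha) ** beta)
```

With alpha = sqrt(2) and beta = 2 this reduces to the standard normal density, which is a convenient sanity check; in a BSS algorithm the shape parameter would be estimated per source by maximum likelihood, as the abstract describes.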
Funding: Supported by the National Natural Science Foundation of China under Grant 71671103
Abstract: It is desirable to obtain the joint probability distribution (JPD) over a set of random variables from local data, so as to avoid the laborious collection of statistical data at the scale of all variables. Much work has been done for the case where all variables form a known directed acyclic graph (DAG). However, steady directed cyclic graphs (DCGs) may arise when we simply combine modules containing local data together, where a module is composed of a child variable and its parent variables. So far, the physical and statistical meaning of steady DCGs has remained unclear and unsolved. This paper illustrates the physical and statistical meaning of steady DCGs and presents a method to calculate the JPD from local data, given that all variables are in a known single-valued Dynamic Uncertain Causality Graph (S-DUCG), thus defining a new Bayesian network with steady DCGs. "Single-valued" means that only the causes of the true state of a variable are specified, while the false state is the complement of the true state.
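The familiar DAG baseline that the abstract contrasts with cyclic graphs can be made concrete: when modules form a DAG, the JPD is simply the product of the local conditional tables. The two-variable module set below is hypothetical and only illustrates that composition rule (the cyclic S-DUCG case the paper addresses needs more machinery).

```python
def joint_probability(assignment, modules):
    """JPD over all variables as the product of local conditionals
    P(child | parents), one module per child variable (DAG case).
    modules: {child: (parent_names, {parent_values: {child_value: prob}})}"""
    p = 1.0
    for child, (parents, cpt) in modules.items():
        key = tuple(assignment[q] for q in parents)
        p *= cpt[key][assignment[child]]
    return p

# Hypothetical binary variables: A (no parents) and B (parent A)
modules = {
    "A": ((), {(): {0: 0.7, 1: 0.3}}),
    "B": (("A",), {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.2, 1: 0.8}}),
}
p = joint_probability({"A": 1, "B": 1}, modules)   # P(A=1) * P(B=1 | A=1)
```

Each module here holds only local data (a child and its parents), which is exactly the modularity the abstract exploits; the open question it tackles is what this product means when the combined graph contains cycles.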
Funding: Project (No. 20204014) supported by the National Natural Science Foundation of China
Abstract: The configurational properties of tail-like polymer chains with one end attached to a flat surface are studied using a dynamic Monte Carlo technique. We find that the probability distribution of the free end in the z direction, P(R_z), and the density profile ρ(z) can be scaled approximately by a factor β into length-independent functions for both random-walk (RW) and self-avoiding-walk (SAW) tail-like chains, where the factor β is related to the mean square end-to-end distance ⟨R_E²⟩. The scaled P(R_z) of the SAW chain roughly overlaps that of the RW chain, but the scaled ρ(z) of the SAW chain is located at smaller βz than that of the RW chain.
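A minimal version of the RW case can be sampled on a simple cubic lattice: grow a chain tethered at a wall at z = 0, rejecting moves that would cross the wall, and record the free-end z coordinate. This is a sketch under assumed conditions (lattice model, simple rejection, illustrative chain length), not the paper's dynamic Monte Carlo algorithm, and it ignores self-avoidance (the SAW case would additionally reject moves onto occupied sites).

```python
import random

def grow_tethered_rw(n_steps, rng):
    """Grow a random-walk chain tethered at the origin of the z >= 0
    half-space on a simple cubic lattice; return the free-end z coordinate."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    x = y = z = 0
    for _ in range(n_steps):
        while True:
            dx, dy, dz = rng.choice(moves)
            if z + dz >= 0:          # reject steps through the wall
                break
        x += dx; y += dy; z += dz
    return z

rng = random.Random(1)
ends = [grow_tethered_rw(20, rng) for _ in range(2000)]
mean_z = sum(ends) / len(ends)       # wall pushes the free end outward
```

Histogramming `ends` for several chain lengths and rescaling the z axis by a length-dependent factor is the kind of collapse onto a single curve that the abstract reports.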
Funding: Partially supported by the National Natural Science Foundation of China (No. 51479183), the National Key Research and Development Program of China (Nos. 2016YFC0302301 and 2016YFC0803401), and the Fundamental Research Funds for the Central Universities (No. 201564003)
Abstract: Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast data on wave height, wind speed, and current velocity in the Bohai Sea are sampled for a case study. Four distributions, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of the three variables; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models, and platform responses (base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters close to the actual sea state for ocean platform design.
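To make the copula construction concrete, here is a sketch of sampling from one of the four families named above, the Clayton copula, via the standard conditional-inversion formula; the dependence parameter theta = 2 is an illustrative choice (for Clayton, Kendall's tau equals theta/(theta+2), i.e., 0.5 here), not a value fitted in the paper.

```python
import random

def clayton_sample(theta, rng):
    """One (u, v) draw from a Clayton copula by conditional inversion:
    u ~ U(0,1), w ~ U(0,1),
    v = ((w**(-theta/(1+theta)) - 1) * u**(-theta) + 1)**(-1/theta)."""
    u = rng.random()
    w = rng.random()
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

rng = random.Random(7)
pairs = [clayton_sample(2.0, rng) for _ in range(500)]
```

In the workflow the abstract describes, each uniform margin (u, v) would then be pushed through the inverse CDF of the fitted Pearson Type III marginal to obtain correlated wave-height/wind-speed/current-velocity samples; the trivariate case nests a second copula over the pair.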
Funding: Supported by the National Natural Science Foundation of China under Grant No. 11045004
Abstract: The logistic growth model with correlated additive and multiplicative Gaussian white noise is used to analyze a tumor cell population. The effects of perfectly correlated and perfectly anti-correlated noise on the stationary properties of the tumor cell population are studied. Since in both cases the diffusion coefficient has a zero point on the real axis, some special features of the system arise. It is found that in both cases an increase of the multiplicative noise intensity can cause tumor cell extinction. In the perfectly anti-correlated case, the stationary probability distribution as a function of tumor cell population exhibits two extrema.
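A model of this type can be explored numerically with a simple Euler-Maruyama integration; the sketch below is illustrative only. The drift r·x − x², the noise intensities, the correlation coefficient, and the non-negativity clamp are all assumptions of this sketch, not the paper's equations or parameter values.

```python
import math
import random

def simulate_logistic(r=1.0, d=0.1, a=0.1, lam=1.0, dt=0.01, steps=20000, seed=3):
    """Euler-Maruyama run of a logistic growth model
        dx = (r*x - x**2) dt + multiplicative noise (intensity d)
                             + additive noise (intensity a),
    with correlation lam between the two Gaussian white noises.
    All parameter values are illustrative."""
    rng = random.Random(seed)
    x = 1.0
    path = []
    for _ in range(steps):
        g1 = rng.gauss(0.0, 1.0)
        # build a second Gaussian with correlation lam to the first
        g2 = lam * g1 + math.sqrt(max(0.0, 1.0 - lam * lam)) * rng.gauss(0.0, 1.0)
        x += ((r * x - x * x) * dt
              + math.sqrt(2.0 * d * dt) * x * g1
              + math.sqrt(2.0 * a * dt) * g2)
        x = max(x, 0.0)              # population cannot go negative
        path.append(x)
    return path

path = simulate_logistic()
```

Histogramming a long run of `path` approximates the stationary probability distribution; rerunning with lam = -1.0 and larger d is how one would probe the anti-correlated, two-extremum regime the abstract reports.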
Funding: Supported by the Slovak Grant Agency VEGA under Project Nos. 1/0776/13 and 1/0710/15, and by Research Project No. N N305 396238 funded by the Polish Ministry of Science and Higher Education
Abstract: Proper determination of the curve number (CN) in the SCS-CN method reduces errors in predicting runoff volume. In this paper the variability of CN was studied for 5 Slovak and 5 Polish Carpathian catchments. Empirical curve numbers were fitted with probability distributions, and theoretical characteristics of CN were then estimated. For 100-CN, the Generalized Extreme Value (GEV) distribution was identified as the best fit in most of the catchments. The differences between the characteristics estimated from the theoretical distributions and the tabulated CN values were assessed, and the antecedent runoff conditions (ARC) of Hawkins and of Hjelmfelt were compared. The analysis was done for various magnitudes of rainfall, with confidence intervals (CI) aiding the evaluation. The studies revealed discrepancies between the tabulated and estimated CNs: the tabulated CNs were usually lower than the estimated values, so applying the median value and Hjelmfelt's probabilistic ARC for wet runoff conditions is advisable. For dry conditions, Hjelmfelt's ARC usually estimated CN better than Hawkins' ARC did, but in several catchments neither sufficiently depicted the variability in CN.
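The empirical curve numbers that feed such a distribution fitting are back-calculated per rainfall-runoff event. The sketch below uses the standard SCS-CN runoff equation with the usual initial abstraction Ia = 0.2·S and Hawkins' closed-form inversion; the specific P and Q values in the usage line are illustrative, not from the studied catchments.

```python
import math

def curve_number(P, Q):
    """Empirical CN back-calculated from a single event's rainfall depth P
    and runoff depth Q (both in mm), by inverting the SCS-CN relation
        Q = (P - 0.2*S)**2 / (P + 0.8*S)
    via  S = 5*(P + 2*Q - sqrt(4*Q**2 + 5*P*Q)),  CN = 25400/(254 + S)."""
    S = 5.0 * (P + 2.0 * Q - math.sqrt(4.0 * Q * Q + 5.0 * P * Q))
    return 25400.0 / (254.0 + S)

# Illustrative event: 80 mm of rainfall producing 10 mm of runoff
cn = curve_number(80.0, 10.0)
```

Repeating this for every observed event in a catchment yields the empirical CN sample to which the GEV (for 100-CN) and other distributions were fitted, and from which median values and confidence intervals follow.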