The statistical distribution of natural phenomena is of great significance in studying the laws of nature. In order to study the statistical characteristics of a random pulse signal, a random process model is proposed theoretically to better study the random law of measured results. Moreover, a simple random pulse signal generation and testing system is designed for studying the counting distributions of three typical objects: particles suspended in the air, standard particles, and background noise. Both normal and lognormal distribution fittings are used to analyze the experimental results and are verified by a chi-square goodness-of-fit test and the correlation coefficient for comparison. In addition, the statistical laws of the three typical objects and the relations between them are discussed in detail. This relation is also the non-integer-dimension fractal relation between the statistical distributions of different random laser scattering pulse signal groups.
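The normal-versus-lognormal comparison via a chi-square goodness-of-fit statistic can be sketched as follows. This is a minimal illustration on synthetic pulse-amplitude data; the sample size, bin count, and distribution parameters are assumptions, not the authors' measurement setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for measured pulse amplitudes (parameters assumed).
data = rng.lognormal(mean=1.0, sigma=0.4, size=2000)

# Fit both candidate models to the same sample.
mu_n, sd_n = stats.norm.fit(data)
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# Chi-square goodness of fit: bin the data, compare observed vs expected counts.
edges = np.quantile(data, np.linspace(0, 1, 11))   # ten roughly equiprobable bins
obs, _ = np.histogram(data, bins=edges)

def chi2_stat(dist):
    expected = np.diff(dist.cdf(edges)) * len(data)
    return ((obs - expected) ** 2 / expected).sum()

c2_norm = chi2_stat(stats.norm(mu_n, sd_n))
c2_logn = chi2_stat(stats.lognorm(shape, loc, scale))
print(c2_logn < c2_norm)   # the skewed sample favors the lognormal fit
```

A smaller chi-square statistic indicates a better fit; the same binning must be used for both candidate distributions so the statistics are comparable.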
The two-parameter lognormal distribution is a variant of the normal distribution, and the three-parameter lognormal distribution extends it by introducing a location parameter. The Q-Q plot of the three-parameter lognormal distribution is widely used. To obtain the Q-Q plot, one needs to iteratively try different values of the shape parameter and subjectively judge the linearity of the plot. In this paper, a mathematical method is proposed to determine the value of the shape parameter and thus simplify the generation of the Q-Q plot. A new probability plot is then proposed, which is more easily obtained and provides more accurate parameter estimates than the Q-Q plot. These are illustrated by three real-world examples.
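The iterative step described above can be automated by maximizing the linearity (correlation coefficient) of the Q-Q plot over candidate parameter values. The sketch below treats the iterated parameter as the threshold (location) parameter γ of the three-parameter lognormal, one common convention; the synthetic sample and the search grid are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gamma_true = 5.0
# Synthetic three-parameter lognormal sample (assumed parameters).
x = gamma_true + rng.lognormal(0.5, 0.6, size=500)

order = np.sort(x)
probs = (np.arange(1, len(x) + 1) - 0.5) / len(x)
z = stats.norm.ppf(probs)                 # theoretical normal quantiles

def qq_corr(gamma):
    # Correlation of the normal Q-Q plot of log(x - gamma); higher = straighter.
    if gamma >= order[0]:
        return -np.inf                    # log argument must stay positive
    return np.corrcoef(z, np.log(order - gamma))[0, 1]

grid = np.linspace(order[0] - 10.0, order[0] - 1e-3, 400)
gamma_hat = grid[np.argmax([qq_corr(g) for g in grid])]
print(round(gamma_hat, 2))                # ideally close to the assumed threshold 5.0
```

The value of γ that maximizes the correlation yields the straightest Q-Q plot, replacing the subjective visual judgment the abstract describes.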
The lognormal distribution is commonly used in engineering and is also a life distribution of important research value. For long-life products that follow this distribution, accelerated testing techniques are necessary for product demonstration. This paper describes the development of accelerated life testing sampling plans (ALSPs) for the lognormal distribution under time censoring. ALSPs take both producer and consumer risks into account, and they can be designed whether the acceleration factor (AF) is known or unknown. When the AF is known, life testing is assumed to be conducted under accelerated conditions with time censoring; the producer and consumer risks are satisfied, and the test sample size and the acceptance number are optimized. Sensitivity analyses are then conducted. When the AF is unknown, two or more predetermined levels of accelerated stress are used; the sample sizes and the sample proportion allocated to each stress level are optimized. The acceptance constant that satisfies the producer and consumer risks is obtained by minimizing the generalized asymptotic variance of the test statistics. Finally, the properties of the two ALSPs (one for known-AF conditions and one for unknown-AF conditions) are investigated through numerical examples to show that the proposed method is correct and usable.
The failure mechanism stimulated by accelerated stress during degradation may differ from that under normal conditions, which would invalidate accelerated life tests. To solve this problem, we study the relation between the Arrhenius equation and the lognormal distribution in the degradation process. Two relationships between the lognormal distribution parameters must be satisfied for the failure mechanism to remain unaltered: first, the logarithmic standard deviations must be equal at the different temperature levels; second, the ratio of the differences between logarithmic means must equal the ratio of the differences between the reciprocals of temperature. The logarithmic distribution lines must simultaneously have the same slope and regularly spaced intercepts. We studied the degradation of thick-film resistors in MCMs under accelerated stress at four temperature levels (390, 400, 410 and 420 K), and the result agreed well with our method.
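The second relationship can be checked numerically. Under the Arrhenius model the log-life mean is linear in the reciprocal temperature with a temperature-independent log standard deviation; the constants below are illustrative, not the thick-film resistor values:

```python
import numpy as np

# Under the Arrhenius model the log-life mean is mu(T) = a + b / T, with a
# temperature-independent log standard deviation sigma (relation 1).
a, b = -10.0, 6000.0                              # illustrative constants, not fitted values
temps = np.array([390.0, 400.0, 410.0, 420.0])    # K, the four stress levels
mu = a + b / temps

# Relation 2: ratios of log-mean differences equal ratios of 1/T differences.
r_mu = (mu[1] - mu[0]) / (mu[2] - mu[0])
r_T = (1 / temps[1] - 1 / temps[0]) / (1 / temps[2] - 1 / temps[0])
print(np.isclose(r_mu, r_T))   # True: an algebraic consequence of mu = a + b/T
```

In a real analysis the fitted means and standard deviations replace these constants, and any violation of either relation signals a changed failure mechanism.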
Quantum Monte Carlo data are often afflicted with distributions that resemble lognormal probability distributions, and consequently their statistical analysis cannot be based on simple Gaussian assumptions. To this end, a method is introduced to estimate these distributions and thus give better estimates of the errors associated with them. The method entails reconstructing the probability distribution of a set of data, with given mean and variance, that has been assumed to be lognormal prior to undergoing a blocking or renormalization transformation. In doing so, we perform a numerical evaluation of the renormalized sum of lognormal random variables. The technique is applied to a simple quantum model, using the single-thread Monte Carlo algorithm to estimate the ground-state energy, or dominant eigenvalue, of a Hamiltonian matrix.
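The blocking (renormalization) transformation referred to above averages neighboring samples and halves the data set; repeating it drives the distribution of block means toward a Gaussian. A minimal sketch on synthetic lognormal data, with assumed sizes and parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(0.0, 1.0, size=2 ** 14)   # skewed, lognormal-like MC estimates

def block(samples):
    """One renormalization step: average non-overlapping pairs."""
    return 0.5 * (samples[0::2] + samples[1::2])

def skewness(s):
    d = s - s.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

# Repeated blocking should shrink the sample skewness of the block means.
levels = [x]
for _ in range(6):
    levels.append(block(levels[-1]))

skews = [skewness(s) for s in levels]
print(round(skews[0], 2), round(skews[-1], 2))
```

For strongly lognormal data the skewness decays only slowly with blocking level, which is why naive Gaussian error bars on such data can be misleading.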
Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to species abundance data from an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers were surveyed in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m, respectively. Results indicated that: (i) for each layer, the observed species abundance, with a similarly small median and mode and a variance larger than the mean, was reverse J-shaped and followed the zero-truncated Poisson lognormal well; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (α and μ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, α and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/α could serve as an alternative measure of diversity.
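A Poisson lognormal probability can be evaluated by numerically mixing a Poisson distribution over a lognormally distributed intensity, which is the core of any fitting routine for such abundance data. A sketch with assumed intensity parameters, not the fitted Dinghushan values:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def poisson_lognormal_pmf(k, mu, sigma, upper=100.0):
    """P(K = k): Poisson(lam) mixed over lam ~ Lognormal(mu, sigma)."""
    f = lambda lam: stats.poisson.pmf(k, lam) * stats.lognorm.pdf(lam, sigma, scale=np.exp(mu))
    val, _ = quad(f, 0.0, upper)   # finite upper bound: the lam tail is negligible here
    return val

mu, sigma = 1.0, 0.8               # assumed intensity parameters
pmf = np.array([poisson_lognormal_pmf(k, mu, sigma) for k in range(200)])
print(round(pmf.sum(), 4))         # total mass over k = 0..199, close to 1
```

In practice a fit maximizes the likelihood built from these probabilities (zero-truncated, in the paper's case) over μ and the dispersion parameter.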
Based on a lognormal particle size distribution, this paper presents a model analysis of the polydispersity effects on the magnetization behaviour of diluted ferrofluids. Using a modified Langevin relationship for the lognormal dispersion, it first performs reduced calculations without material parameters. From the results, it is extrapolated that for a ferrofluid with lognormal polydispersion, in comparison with the corresponding monodispersion, the saturation magnetization is enhanced by the particle size distribution. The results also indicate that in an equivalent magnetic field, the lognormally polydispersed ferrofluid saturates magnetically faster than the corresponding monodispersion. Following these theoretical extrapolations, the polydispersity effects are evaluated for a typical magnetite ferrofluid with a dispersity of σ = 0.20. The results indicate that the lognormal polydispersity leads to a slight increase of the saturation magnetization but a noticeable increase of the speed at which the saturation value is reached in an equivalent magnetic field.
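The polydisperse-versus-monodisperse comparison can be sketched with a volume-weighted Langevin average over a lognormal diameter distribution. The prefactor grouping the material and field constants is hypothetical, chosen only to put the Langevin argument at order one:

```python
import numpy as np

def langevin(xi):
    return 1.0 / np.tanh(xi) - 1.0 / xi

rng = np.random.default_rng(3)
d0, sigma = 10e-9, 0.20                 # median diameter (assumed) and dispersity
d = rng.lognormal(np.log(d0), sigma, size=200_000)

# The Langevin argument scales with the particle moment, i.e. with d**3; k
# lumps the material and field constants into one hypothetical prefactor.
k = 1e24
xi = k * d ** 3

# Volume-weighted reduced magnetization of the polydisperse ferrofluid versus
# a monodisperse fluid of the same median diameter, at the same field.
poly = np.average(langevin(xi), weights=d ** 3)
mono = langevin(k * d0 ** 3)
print(poly > mono)   # polydispersity speeds the approach to saturation
```

The volume weighting emphasizes the larger particles, whose larger moments saturate at lower fields, which is the mechanism behind the faster approach to saturation reported above.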
A commonly used statistical procedure for describing observed data sets is to compute their conventional moments or cumulants. When choosing an appropriate parametric distribution for a data set, its parameters are typically estimated by the method of moments: a system of equations is created in which the sample conventional moments are set equal to the corresponding moments of the theoretical distribution. However, the moment method of parameter estimation is not always convenient, especially for small samples. An alternative approach is based on the use of other characteristics, which the author calls L-moments. L-moments are analogous to conventional moments, but they are based on linear combinations of order statistics, i.e., L-statistics. Using L-moments is theoretically preferable to using conventional moments because L-moments characterize a wider range of distributions. When estimated from samples, L-moments are more robust to the presence of outliers in the data. Experience also shows that, compared to conventional moments, L-moment estimates are less prone to bias. Parameter estimates obtained using L-moments, mainly in the case of small samples, are often even more accurate than estimates obtained by the maximum likelihood method. The use of the method of L-moments for small data sets in meteorology is well known in the statistical literature. This paper deals with the use of L-moments for large data sets of income distribution (individual data) and wage distribution (data ordered in the form of an interval frequency distribution with open extreme intervals).
This paper also presents a comparison of the accuracy of the method of L-moments with the accuracy of other methods of point estimation of the parameters of a parametric probability distribution, in the case of large data sets of individual data and of data ordered in the form of an interval frequency distribution.
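The first two sample L-moments can be computed from probability-weighted moments of the order statistics; the income sample below is synthetic, standing in for the large data sets the paper analyzes:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments b0, b1."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = ((np.arange(n) / (n - 1)) * x).mean()   # weight (i-1)/(n-1) on x_(i)
    return b0, 2.0 * b1 - b0                     # l1 (L-location), l2 (L-scale)

rng = np.random.default_rng(4)
incomes = rng.lognormal(10.0, 0.5, size=10_000)  # synthetic income-like sample

l1, l2 = sample_l_moments(incomes)
print(0 < l2 < incomes.std())   # l2 = E|X1 - X2| / 2 always lies below the SD
```

Here l1 is the sample mean, and the ratio t = l2/l1 (the L-CV) plays the role of the coefficient of variation; higher L-moment ratios characterize skewness and kurtosis analogously.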
This paper characterizes the rarely noticed hot-cutting defect and statistically models the fracture governed by this new type of defect. The morphology of the defect on fired ceramic is examined and quantitatively characterized by comparing the fracture strength governed by the intrinsic defect with that governed by the hot-cutting defect. The Weibull distribution is used to fit the observed strength data, and a chi-square goodness-of-fit test is conducted to analyze the deviation. Kernel density estimation is introduced to explore the underlying strength distribution dominated by the hot-cutting defect. Based on the shape information from the kernel density estimate, the gamma and lognormal distributions are compared, and the fracture statistics governed by the hot-cutting defect are finally confirmed by a chi-square test. Results show that the newly defined hot-cutting defect is more dangerous than the intrinsic defect, and that the a priori Weibull distribution fails to describe the fracture statistics governed by the edge defect, while the lognormal distribution, with a slightly right-skewed shape, fits them very well.
The statistical relationship between human height and weight is of special importance to clinical medicine, epidemiology, and the biology of human development. Yet, after more than a century of anthropometric measurements and analyses, there has been no consensus on this relationship. The purpose of this article is to provide a definitive statistical distribution function from which all desired statistics (probabilities, moments, and correlation functions) can be determined. The statistical analysis reported in this article provides strong evidence that height and weight in a diverse population of healthy adults constitute correlated bivariate lognormal random variables. This conclusion is supported by a battery of independent tests comparing empirical values of 1) probability density patterns, 2) linear and higher-order correlation coefficients, 3) statistical and hyperstatistical moments up to 6th order, and 4) distance correlation (dCor) values with the corresponding theoretical quantities 1) predicted by the lognormal distribution and 2) simulated by use of appropriate random number generators. Furthermore, calculation of the conditional expectation of weight, given height, yields a theoretical power law that specifies conditions under which body mass index (BMI) can be a valid proxy of obesity. The consistency of the empirical data from a large, diverse anthropometric survey, partitioned by gender, with the predictions of a correlated bivariate lognormal distribution was found to be so extensive and close as to suggest that this outcome is not coincidental or approximate, but may be a consequence of some underlying biophysical mechanism.
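The conditional-expectation power law can be illustrated by simulating a correlated bivariate lognormal pair. Every numeric parameter below (medians, log-scale standard deviations, correlation) is an assumption for illustration, not a value from the survey:

```python
import numpy as np

rng = np.random.default_rng(5)

# (ln height, ln weight) taken as bivariate normal with assumed parameters.
mu = np.array([np.log(170.0), np.log(70.0)])     # cm and kg medians (assumed)
s_h, s_w, rho = 0.04, 0.15, 0.5
cov = np.array([[s_h ** 2, rho * s_h * s_w],
                [rho * s_h * s_w, s_w ** 2]])

ln_hw = rng.multivariate_normal(mu, cov, size=50_000)
height, weight = np.exp(ln_hw).T                 # correlated lognormal pair

# For a bivariate lognormal, E[W | H = h] is proportional to h ** p with
# p = rho * s_w / s_h, the power law that motivates BMI-style indices W / H**p.
p_theory = rho * s_w / s_h
slope = np.polyfit(np.log(height), np.log(weight), 1)[0]
print(abs(slope - p_theory) < 0.1)
```

The regression slope of log-weight on log-height recovers the theoretical exponent; BMI (p = 2) is a valid proxy only when the population's exponent is close to 2.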
A probabilistic seismic loss assessment of RC high-rise (RCHR) buildings designed according to Eurocode 8 and located in the Southern Euro-Mediterranean zone is presented herein. The loss assessment methodology is based on a comprehensive simulation approach which takes into account ground motion (GM) uncertainty and the random effects in seismic demand, as well as in predicting the damage states (DSs). The methodology is implemented on three RCHR buildings of 20, 30 and 40 stories with a core-wall structural system. The loss functions, described by a cumulative lognormal probability distribution, are obtained for two intensity levels from a large set of simulations (NLTHAs) based on 60 GM records with a wide range of magnitude (M), distance to source (R) and site soil conditions (SS). The losses, expressed as a percentage of the building replacement cost, are obtained for the RCHR buildings. In the estimation of losses, both structural (S) and nonstructural (NS) damage for four DSs are considered. The effect of different GM characteristics (M, R and SS) on the obtained losses is investigated. Finally, the estimated performance of the RCHR buildings is checked to ensure that they fulfill the limit state requirements of Eurocode 8.
The traditional reservoir classification methods based on conventional well logging are inefficient for determining the properties, such as the porosity, shale volume, J function, and flow zone index, of tight sandstone reservoirs because of their complex pore structure and large heterogeneity. Specifically, the method that is commonly used to characterize the reservoir pore structure depends on the nuclear magnetic resonance (NMR) transverse relaxation time (T2) distribution, which is closely related to the pore size distribution. Further, the pore structure parameters (displacement pressure, maximum pore-throat radius, and median pore-throat radius) can be determined and applied to reservoir classification based on the empirical linear or power functions obtained from the NMR T2 distributions and the mercury intrusion capillary pressure curves. However, these empirical functions are difficult to generalize because they differ from region to region and are limited by the representative samples of each region. A lognormal distribution is commonly used to describe the pore size and particle size distributions of rock and to quantitatively characterize the reservoir pore structure through the volume, mean radius, and standard deviation of the small and large pores. In this study, we obtain six parameters (the volume, mean radius, and standard deviation of the small and large pores) that represent the characteristics of the pore distribution and rock heterogeneity, calculate the total porosity via NMR logging, and classify the reservoirs via cluster analysis by adopting a bimodal lognormal distribution to fit the NMR T2 spectrum. Finally, based on the data obtained from the core tests and the NMR logs, the proposed method, which is readily applicable, can effectively classify tight sandstone reservoirs.
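The bimodal lognormal model of the T2 spectrum can be written as a two-component mixture whose six parameters are exactly the volumes, log-means, and log-standard deviations of the small-pore and large-pore components; the numbers below are assumed for illustration, not fitted log values:

```python
import numpy as np
from scipy import stats

# Six-parameter bimodal lognormal model of a T2 distribution: volume fraction v,
# log-mean m and log-standard deviation s for each pore component (assumed).
v1, m1, s1 = 0.4, np.log(3.0), 0.5     # small pores, T2 around 3 ms
v2, m2, s2 = 0.6, np.log(80.0), 0.6    # large pores, T2 around 80 ms

t2 = np.logspace(-1, 4, 400)           # ms, log-spaced like an NMR T2 axis
pdf = (v1 * stats.lognorm.pdf(t2, s1, scale=np.exp(m1))
       + v2 * stats.lognorm.pdf(t2, s2, scale=np.exp(m2)))

# The mixture integrates to v1 + v2 = 1 (trapezoid rule on the log-spaced grid).
area = ((pdf[1:] + pdf[:-1]) / 2 * np.diff(t2)).sum()
print(abs(area - 1.0) < 0.01)
```

Fitting this six-parameter form to a measured T2 spectrum, then clustering samples in the resulting parameter space, is the classification pipeline the study describes.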
This article proposes a statistical method for working out reliability sampling plans under Type I censoring for items whose failure times follow either normal or lognormal distributions. The quality statistic is a method-of-moments estimator of a monotone function of the unreliability. An approach for choosing the truncation time is recommended. The sample size and acceptability constant are approximately determined using the Cornish-Fisher expansion for quantiles of the distribution. Simulation results show that the method given in this article is feasible.
This paper deals with the effect of layer height randomness on the seismic response of a layered soil. The layer heights are assumed to be lognormal random variables. The analysis is carried out via Monte Carlo simulations coupled with the stiffness matrix method. A parametric study is conducted to derive the stochastic behavior of the peak ground acceleration and its response spectrum, the transfer function, and the amplification factors. The input soil characteristics correspond to a site in Mexico City, and the input seismic accelerations correspond to the Loma Prieta earthquake. It is found that the layer height heterogeneity causes a widening of the frequency content and a slight increase in the fundamental frequency of the soil profile, indicating that the resonance phenomenon is a concern for a large number of structures. Variation of the layer height randomness acts as a variation of the incident angle, i.e., a decrease of the amplitude and a shift of the resonant frequencies.
A mature mathematical technique, the copula joint function, commonly used in financial risk analysis to estimate uncertainty, is introduced in this paper. The joint function is generalized to the n-dimensional Frank copula. In addition, we adopt two attenuation models, proposed by YU and by Boore et al., respectively, and construct a two-dimensional copula joint probability function as an example to illustrate the treatment of uncertainty at low probability. The results show that the copula joint function gives a better prediction of peak ground motion than that resulting from the simple linear weighting technique commonly used in the traditional logic-tree treatment of model uncertainties. In light of its widespread application in risk analysis, from financial investment to insurance assessment, we believe that the copula-based technique will have potential applications in seismic hazard analysis.
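The bivariate Frank copula that the paper generalizes has a closed form; the sketch below checks its defining boundary conditions, with θ an assumed dependence parameter:

```python
import numpy as np

def frank_copula(u, v, theta):
    """Bivariate Frank copula C(u, v; theta), theta != 0."""
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

theta = 4.0                       # assumed dependence parameter
u = np.linspace(0.05, 0.95, 19)

# Copula boundary conditions: C(u, 1) = u and C(u, 0) = 0.
print(np.allclose(frank_copula(u, 1.0, theta), u),
      np.allclose(frank_copula(u, 0.0, theta), 0.0))
```

Joining the marginal distributions of the two attenuation models through such a copula yields a full joint distribution, rather than the single weighted average a logic tree produces.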
This paper is a further investigation of large deviations for the sums of random variables S_n = ∑_{i=1}^{n} X_i and S(t) = ∑_{i=1}^{N(t)} X_i (t ≥ 0), where {X_n, n ≥ 1} are independent identically distributed non-negative random variables, and {N(t), t ≥ 0} is a counting process of non-negative integer-valued random variables, independent of {X_n, n ≥ 1}. Under the assumption F ∈ G, a larger heavy-tailed class than C, large deviation results for such sums are proved.
To estimate the life of vacuum fluorescent displays (VFDs) more accurately and to reduce test time and cost, four constant-stress accelerated life tests (CSALTs) were conducted under an accelerated life test model. In the model, statistical analysis of the test data applies a lognormal function to describe the life distribution and the least squares method (LSM) to calculate the mean and the standard deviation of the logarithm. As a result, the accelerated life equation was obtained, and software was then developed in-house to predict the VFD life. The data analysis demonstrates that the VFD life follows a lognormal distribution, that the accelerated model satisfies the linear Arrhenius equation, and that the precise acceleration parameters make it possible to acquire the life information of a VFD within one month.
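The statistical backbone of such a CSALT analysis is a least-squares fit of the log-life means against the reciprocal temperature, since the Arrhenius equation is linear in 1/T; the temperatures and log-lives below are hypothetical, not the VFD test data:

```python
import numpy as np

# Constant-stress ALT records: stress temperatures and hypothetical log-mean
# lifetimes estimated from the lognormal fit at each level.
T = np.array([340.0, 360.0, 380.0, 400.0])    # K
log_life = np.array([9.1, 7.8, 6.7, 5.7])     # mean of ln(life) per level

# Arrhenius in log form is linear in 1/T: ln(life) = a + b / T, so the
# least-squares fit is an ordinary linear regression on x = 1/T.
x = 1.0 / T
b, a = np.polyfit(x, log_life, 1)

# Extrapolate the log-mean life to a normal-use temperature (assumed 300 K).
use_log_life = a + b / 300.0
print(use_log_life > log_life.max())   # use-condition life exceeds all stressed lives
```

The extrapolated mean, together with the fitted log standard deviation, fully specifies the lognormal life distribution at the use condition.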
This paper deals with the development of sample characteristics of the wage distribution in recent years in the Czech Republic by highest educational attainment. Gross monthly wage is the variable investigated. We distinguish the following scale of highest educational attainment: primary and incomplete education, secondary education without GCSE, secondary education with GCSE, higher vocational and bachelor education, and tertiary education. Forecasts of the wage distribution have been developed for the next two years for all of these categories. A three-parameter lognormal curve formed the basis of the theoretical probability distribution. Parameter values of the relevant three-parameter lognormal curves were estimated using the L-moment method of parameter estimation. Forecasts of the sample L-moments were calculated using trend analysis of their past development, and the parameters of the three-parameter lognormal curves for the wage distribution forecasts were calculated from the predicted values of the first three sample L-moments. We have obtained forecasts of the wage distribution by highest educational attainment on the basis of these probability density functions.
We report on the properties of strong pulses from PSR B0656+14 by analyzing the data obtained using the Urumqi 25-m radio telescope at 1540 MHz from August 2007 to September 2010. In 44 h of observational data, a total of 67 pulses with signal-to-noise ratios above a 5σ threshold were detected. The peak flux densities of these pulses are 58 to 194 times that of the average profile, and their pulse energies are 3 to 68 times that of the average pulse. These pulses are clustered around phases about 5° ahead of the peak of the average profile. Compared with the width of the average profile, they are relatively narrow, with full widths at half-maximum ranging from 0.28° to 1.78°. The distribution of pulse energies follows a lognormal distribution. These sporadic strong pulses detected from PSR B0656+14 have characteristics different from both typical giant pulses and its regular pulses.
In this paper, the insurance company considers venture capital and risk-free investment in a constant proportion. The surplus process is perturbed by diffusion. First, the integro-differential equations satisfied by the expected discounted dividend payments and the Gerber-Shiu function are derived. Then, approximate solutions of the integro-differential equations are obtained through the sinc method. Finally, numerical examples are given for claim sizes following different distributions. Furthermore, the errors between the explicit solution and the numerical solution are discussed in a special case.
文摘The statistical distribution of natural phenomena is of great significance in studying the laws of nature. In order to study the statistical characteristics of a random pulse signal, a random process model is proposed theoretically for better studying of the random law of measured results. Moreover, a simple random pulse signal generation and testing system is designed for studying the counting distributions of three typical objects including particles suspended in the air, standard particles, and background noise. Both normal and lognormal distribution fittings are used for analyzing the experimental results and testified by chi-square distribution fit test and correlation coefficient for comparison. In addition, the statistical laws of three typical objects and the relations between them are discussed in detail. The relation is also the non-integral dimension fractal relation of statistical distributions of different random laser scattering pulse signal groups.
基金National Natural Science Foundation of China(No.71371035)
文摘The two-parameter lognormal distribution is a variant of the normal distribution and the three-parameter lognormal distribution is an extension of the two-parameter lognormal distribution by introducing a location parameter. The Q-Q plot of the three-parameter lognormal distribution is widely used. To obtain the Q-Q plot one needs to iteratively try different values of the shape parameter and subjectively judge the linearity of the Q-Q plot. In this paper,a mathematical method was proposed to determine the value of the shape parameter so as to simplify the generation of the Q-Q plot. Then a new probability plot was proposed,which was more easily obtained and provided more accurate parameter estimates than the Q-Q plot. These are illustrated by three realworld examples.
基金supported by the National Natural Science Foundation of China(No.61104182)
文摘Abstract Lognormal distribution is commonly used in engineering. It is also a life distribution of important research values. For long-life products follow this distribution, it is necessary to apply accelerated testing techniques to product demonstration. This paper describes the development of accelerated life testing sampling plans (ALSPs) for lognormal distribution under time-censoring conditions. ALSPs take both producer and consumer risks into account, and they can be designed to work whether acceleration factor (AF) is known or unknown. When AF is known, lift testing is assumed to be conducted under accelerated conditions with time-censoring. The producer and con- sumer risks are satisfied, and the size of test sample and the size of acceptance number arc opti- mized. Then sensitivity analyses are conducted. When AF is unknown, two or more predetermined levels of accelerated stress are used. The sample sizes and sample proportion allo- cated to each stress level are optimized. The acceptance constant that satisfies producer and consumer risk is obtdned by minimizing the generalized asymptotic variance of the test statistics. Finally, the properties of the two ALSPs (one for known-AF conditions and one for unknown AF conditions) are investigated to show that the proposed method is corrcct and usablc through numerical examples.
基金Project supported by the National Natural Science Foundation of China(No.61204081)the Research Project in Guangdong Province,China(No.2011B090400463)the Guangdong Provincial Science and Technology Major Project of the Ministry of Science and Technology of China(Nos.2011A080801005,2012A080304003)
文摘The failure mechanism stimulated by accelerated stress in the degradation may be different from that under normal conditions, which would lead to invalid accelerated life tests. To solve the problem, we study the re- lation between the Arrhenius equation and the lognormal distribution in the degradation process. Two relationships of the lognormal distribution parameters must be satisfied in the conclusion of the unaltered failure mechanism, the first is that the logarithmic standard deviations must be equivalent at different temperature levels, and the second is that the ratio of the differences between logarithmic means must be equal to the ratio of the differences between reciprocals of temperature. The logarithm of distribution lines must simultaneously have the same slope and regular interval lines. We studied the degradation of thick-film resistors in MCM by accelerated stress at four temperature levels (390, 400, 410 and 420 K), and the result agreed well with our method.
基金supported by the University of KwaZulu-Natal Competitive Grant.
文摘Quantum Monte Carlo data are often afflicted with distributions that resemble lognormal probability distributions and consequently their statistical analysis cannot be based on simple Gaussian assumptions.To this extent a method is introduced to estimate these distributions and thus give better estimates to errors associated with them.This method entails reconstructing the probability distribution of a set of data,with given mean and variance,that has been assumed to be lognormal prior to undergoing a blocking or renormalization transformation.In doing so,we perform a numerical evaluation of the renormalized sum of lognormal random variables.This technique is applied to a simple quantum model utilizing the single-thread Monte Carlo algorithm to estimate the ground state energy or dominant eigenvalue of a Hamiltonian matrix.
基金国家自然科学基金,the Forestry Science and TechnologyResearch Planning of Guangdong Province of China,中国科学院知识创新工程项目
文摘: Case studies on Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m× 20 m, 5 m× 5 m, and 1 m× 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (& and μ) for shrub layer were closer to those for the herb layer than those for the tree layer; and (iii) from the tree to the shrub to the herb layer, the α and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflects the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/α should be an alternative measure of diversity.
Fund: Project supported by the Shanghai Leading Academic Discipline Project of China (Grant No. B107).
Abstract: Based on a lognormal particle size distribution, this paper presents a model analysis of the polydispersity effects on the magnetization behaviour of diluted ferrofluids. Using a modified Langevin relationship for the lognormal dispersion, it first performs reduced calculations without material parameters. From the results, it is extrapolated that for a ferrofluid with lognormal polydispersion, in comparison with the corresponding monodispersion, the saturation magnetization is enhanced by the particle size distribution. The results also indicate that in an equivalent magnetic field, the lognormally polydispersed ferrofluid is magnetically saturated faster than the corresponding monodispersion. Following the theoretical extrapolations, the polydispersity effects are evaluated for a typical ferrofluid of magnetite with a dispersity of σ = 0.20. The results indicate that the lognormal polydispersity leads to a slight increase of the saturation magnetization, but a noticeable increase of the speed at which the saturation value is reached in an equivalent magnetic field.
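A minimal sketch of the reduced (material-parameter-free) calculation, under assumed values and not the paper's exact model: the magnetization is the Langevin function L(ξ) = coth ξ − 1/ξ averaged over a lognormal distribution of reduced diameters x = d/d₀, weighted by the particle moment ∝ x³.

```python
import numpy as np
from scipy import integrate

def langevin(xi):
    return 1.0 / np.tanh(xi) - 1.0 / xi

def lognormal_pdf(x, sigma):
    # lognormal in reduced diameter x = d/d0 (median at x = 1)
    return np.exp(-np.log(x)**2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))

def reduced_magnetization(alpha, sigma):
    # alpha = Langevin parameter of the median-size particle; moment ~ x^3
    w = lambda x: x**3 * lognormal_pdf(x, sigma)
    num, _ = integrate.quad(lambda x: w(x) * langevin(alpha * x**3), 0, np.inf)
    den, _ = integrate.quad(w, 0, np.inf)
    return num / den

m_poly = reduced_magnetization(2.0, 0.20)
m_mono = langevin(2.0)
print(m_mono < m_poly < 1.0)  # polydispersion approaches saturation faster
```

At equal reduced field the polydisperse curve lies above the monodisperse one because the moment weighting favours the larger particles, which are already closer to saturation, consistent with the extrapolation stated in the abstract.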
Abstract: A common statistical procedure for describing observed data sets is to use their conventional moments or cumulants. When choosing an appropriate parametric distribution for a data set, the parameters are typically estimated by the method of moments: a system of equations is created in which the sample conventional moments are set equal to the corresponding moments of the theoretical distribution. However, the moment method of parameter estimation is not always convenient, especially for small samples. An alternative approach is based on other characteristics, which the author calls L-moments. L-moments are analogous to conventional moments, but they are based on linear combinations of order statistics, i.e., L-statistics. L-moments are theoretically preferable to conventional moments in that they characterize a wider range of distributions and, when estimated from a sample, are more robust to the presence of outliers in the data. Experience also shows that, compared to conventional moments, L-moments are less prone to estimation bias. For small samples, parameter estimates obtained using L-moments are often even more accurate than maximum likelihood estimates. In the statistical literature, the method of L-moments is known primarily from applications to small data sets, e.g. in meteorology. This paper deals with the use of L-moments for large data sets of income distribution (individual data) and wage distribution (data ordered in the form of an interval frequency distribution with open extreme intervals). It also compares the accuracy of the method of L-moments with that of other methods of point estimation of the parameters of a parametric probability distribution, both for individual data and for data ordered as an interval frequency distribution.
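As a small illustration (a sketch, not the author's code): the first two sample L-moments can be computed from probability-weighted moments of the order statistics. l1 is the ordinary mean, and l2 = 2b1 − b0 is the L-scale, a robust analogue of the standard deviation equal to half the mean absolute difference of two random draws.

```python
import numpy as np

def l_moments(data):
    # unbiased sample estimators of the first two L-moments via
    # probability-weighted moments b0, b1 of the sorted sample
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))  # (1/n) sum (i-1)/(n-1) x_(i)
    l1 = b0              # L-location = ordinary mean
    l2 = 2 * b1 - b0     # L-scale
    return l1, l2

l1, l2 = l_moments([2.0, 4.0, 6.0, 8.0, 10.0])
print(l1)  # 6.0
print(l2)  # 2.0
```

Because l2 is a linear combination of order statistics, a single outlier shifts it far less than it shifts the sample standard deviation, which is the robustness property exploited in the paper.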
Abstract: This paper characterizes the rarely noticed hot-cutting defect and statistically models the fracture governed by this new type of defect. The morphology of the defect on fired ceramic is examined and quantitatively featured by comparing the fracture strength governed by the intrinsic defect with that governed by the hot-cutting defect. A Weibull distribution is used to fit the observed strength data, and a chi-square goodness-of-fit test is conducted to analyze the deviation. Kernel density estimation is introduced to explore the underlying strength distribution dominated by the hot-cutting defect. Based on the shape information from the kernel density estimate, gamma and lognormal distributions are compared, and the fracture statistics governed by the hot-cutting defect are finally confirmed by a chi-square test. Results show that the newly defined hot-cutting defect is more dangerous than the intrinsic defect, and that the a priori Weibull distribution fails to describe the fracture statistics governed by the edge defect, whereas the lognormal distribution, with a slightly right-skewed shape, fits them very well.
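The model-comparison step can be sketched as follows on synthetic strength data (an assumption; the paper's ceramic measurements are not reproduced here): fit both candidate distributions by maximum likelihood and compare chi-square statistics computed on equiprobable bins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic lognormal "strength" data standing in for the hot-cutting group
strength = rng.lognormal(mean=5.0, sigma=0.3, size=5000)

# maximum-likelihood fits with the location fixed at zero
wb_shape, _, wb_scale = stats.weibull_min.fit(strength, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(strength, floc=0)

def chi2_stat(dist, params, data, bins=10):
    # Pearson chi-square on (approximately) equiprobable bins
    edges = np.quantile(data, np.linspace(0, 1, bins + 1))
    obs, _ = np.histogram(data, edges)
    expected = len(data) * np.diff(dist.cdf(edges, *params))
    return np.sum((obs - expected)**2 / expected)

chi2_wb = chi2_stat(stats.weibull_min, (wb_shape, 0, wb_scale), strength)
chi2_ln = chi2_stat(stats.lognorm, (ln_shape, 0, ln_scale), strength)
print(chi2_ln < chi2_wb)  # the lognormal tracks this right-skewed sample better
```

With real data one would also compare each statistic against the chi-square critical value for the bin count minus the number of fitted parameters minus one, as the paper's goodness-of-fit test does.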
Abstract: The statistical relationship between human height and weight is of special importance to clinical medicine, epidemiology, and the biology of human development. Yet, after more than a century of anthropometric measurements and analyses, there has been no consensus on this relationship. The purpose of this article is to provide a definitive statistical distribution function from which all desired statistics (probabilities, moments, and correlation functions) can be determined. The statistical analysis reported in this article provides strong evidence that height and weight in a diverse population of healthy adults constitute correlated bivariate lognormal random variables. This conclusion is supported by a battery of independent tests comparing empirical values of 1) probability density patterns, 2) linear and higher-order correlation coefficients, 3) statistical and hyperstatistical moments up to 6th order, and 4) distance correlation (dCor) values to corresponding theoretical quantities 1) predicted by the lognormal distribution and 2) simulated by use of appropriate random number generators. Furthermore, calculation of the conditional expectation of weight, given height, yields a theoretical power law that specifies conditions under which body mass index (BMI) can be a valid proxy of obesity. The consistency of the empirical data from a large, diverse anthropometric survey, partitioned by gender, with the predictions of a correlated bivariate lognormal distribution was found to be so extensive and close as to suggest that this outcome is not coincidental or approximate, but may be a consequence of some underlying biophysical mechanism.
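The key consequence stated above can be verified by simulation: if (H, W) is bivariate lognormal, then E[W | H = h] is exactly a power law C·h^b with b = ρ·σ_w/σ_h. All parameter values below are assumptions for illustration, not the survey's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
# assumed log-scale parameters: median height 1.70 m, median weight 70 kg
mu_h, mu_w, s_h, s_w, rho = np.log(1.70), np.log(70.0), 0.04, 0.15, 0.5

cov = [[s_h**2, rho * s_h * s_w], [rho * s_h * s_w, s_w**2]]
z = rng.multivariate_normal([mu_h, mu_w], cov, size=400_000)
H, W = np.exp(z[:, 0]), np.exp(z[:, 1])

b_theory = rho * s_w / s_h   # power-law exponent of E[W | H = h]
# b = 1.875 here; b = 2 would make BMI = W / H**2 independent of height,
# the condition under which BMI is an unbiased proxy across statures

h0 = 1.70
sel = np.abs(H - h0) < 0.005                     # condition on H near h0
ew_empirical = W[sel].mean()
ew_theory = np.exp(mu_w + rho * (s_w / s_h) * (np.log(h0) - mu_h)
                   + 0.5 * s_w**2 * (1 - rho**2))
print(abs(ew_empirical / ew_theory - 1) < 0.02)  # simulation matches the formula
```

The exponent check mirrors the article's argument: BMI is a valid obesity proxy precisely to the extent that the fitted exponent ρ·σ_w/σ_h is close to 2.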
Abstract: A probabilistic seismic loss assessment of RC high-rise (RCHR) buildings designed according to Eurocode 8 and located in the Southern Euro-Mediterranean zone is presented herein. The loss assessment methodology is based on a comprehensive simulation approach which takes into account ground motion (GM) uncertainty and the random effects in seismic demand, as well as in predicting the damage states (DSs). The methodology is implemented on three RCHR buildings of 20, 30 and 40 stories with a core wall structural system. The loss functions, described by a cumulative lognormal probability distribution, are obtained for two intensity levels from a large set of simulations (NLTHAs) based on 60 GM records with a wide range of magnitude (M), distance to source (R) and site soil conditions (SS). The losses, expressed in percent of the building replacement cost, are obtained for the RCHR buildings. In the estimation of losses, both structural (S) and nonstructural (NS) damage for four DSs are considered. The effect of different GM characteristics (M, R and SS) on the obtained losses is investigated. Finally, the estimated performance of the RCHR buildings is checked to ensure that they fulfill the limit state requirements of Eurocode 8.
Fund: Supported by the National Science and Technology Major Project "Prediction Technique and Evaluation of Tight Oil Sweet Spot" (2016ZX05046-002).
Abstract: The traditional reservoir classification methods based on conventional well logging are inefficient for determining the properties, such as the porosity, shale volume, J function, and flow zone index, of tight sandstone reservoirs because of their complex pore structure and large heterogeneity. Specifically, the method that is commonly used to characterize the reservoir pore structure depends on the nuclear magnetic resonance (NMR) transverse relaxation time (T2) distribution, which is closely related to the pore size distribution. Further, the pore structure parameters (displacement pressure, maximum pore-throat radius, and median pore-throat radius) can be determined and applied to reservoir classification based on the empirical linear or power functions obtained from the NMR T2 distributions and the mercury intrusion capillary pressure curves. However, these empirical functions are difficult to generalize because they differ from region to region and are limited by the representative samples of each region. A lognormal distribution is commonly used to describe the pore size and particle size distributions of rock and to quantitatively characterize the reservoir pore structure through the volume, mean radius, and standard deviation of the small and large pores. In this study, we obtain six parameters (the volume, mean radius, and standard deviation of the small and large pores) that represent the characteristics of the pore distribution and rock heterogeneity, calculate the total porosity via NMR logging, and classify the reservoirs via cluster analysis by fitting the NMR T2 spectrum with a bimodal lognormal distribution. Finally, based on data from core tests and NMR logs, the proposed method, which is readily applicable, can effectively classify tight sandstone reservoirs.
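A hedged sketch of the bimodal lognormal fit (synthetic, noiseless spectrum and assumed parameter values; real T2 spectra carry noise and typically need a constrained inversion): the spectrum is modeled as the sum of two lognormal components, one for the small pores and one for the large pores, and the six parameters are recovered by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal(t2, v, m, s):
    # one lognormal component: volume v, log-mean m, log-std s
    return v * np.exp(-(np.log(t2) - m)**2 / (2 * s**2)) / (t2 * s * np.sqrt(2 * np.pi))

def bimodal(t2, v1, m1, s1, v2, m2, s2):
    # small-pore component + large-pore component
    return lognormal(t2, v1, m1, s1) + lognormal(t2, v2, m2, s2)

t2 = np.logspace(-1, 3, 200)                       # T2 axis in ms (assumed)
true = (0.6, np.log(3.0), 0.5, 0.4, np.log(80.0), 0.4)
spec = bimodal(t2, *true)                          # synthetic spectrum

p0 = (0.5, np.log(2.0), 0.6, 0.5, np.log(60.0), 0.5)  # rough starting guess
popt, _ = curve_fit(bimodal, t2, spec, p0=p0)
print(abs(popt[0] - 0.6) < 1e-3)  # small-pore volume recovered
print(abs(popt[3] - 0.4) < 1e-3)  # large-pore volume recovered
```

The six fitted parameters are exactly the features the study feeds into cluster analysis for reservoir classification.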
Fund: This work is partially supported by the National Natural Science Foundation of China (Grants 10071090 and 10271013).
Abstract: This article proposes a statistical method for working out reliability sampling plans under Type I censoring for items whose failure times have either normal or lognormal distributions. The quality statistic is a method-of-moments estimator of a monotone function of the unreliability. An approach for choosing the truncation time is recommended. The sample size and the acceptability constant are approximately determined by using the Cornish-Fisher expansion for quantiles of the distribution. Simulation results show that the method given in this article is feasible.
Abstract: This paper deals with the effect of layer height randomness on the seismic response of a layered soil. The layer heights are assumed to be lognormal random variables. The analysis is carried out via Monte Carlo simulations coupled with the stiffness matrix method. A parametric study is conducted to derive the stochastic behavior of the peak ground acceleration and its response spectrum, the transfer function and the amplification factors. The input soil characteristics correspond to a site in Mexico City, and the input seismic accelerations correspond to the Loma Prieta earthquake. It is found that the layer height heterogeneity causes a widening of the frequency content and a slight increase in the fundamental frequency of the soil profile, indicating that the resonance phenomenon is a concern for a large number of structures. Variation of the layer height randomness acts as a variation of the incident angle, i.e., a decrease of the amplitude and a shift of the resonant frequencies.
Fund: Project of the Institute of Crustal Dynamics, China Earthquake Administration (ZDJ2007-1); the One Hundred Individual Program of the Chinese Academy of Sciences (99M2009M02); and the National Natural Science Foundation of China (40574022).
Abstract: A mature mathematical technique, the copula joint function, which is commonly used in financial risk analysis to estimate uncertainty, is introduced in this paper. The joint function is generalized to the n-dimensional Frank copula. In addition, we adopt two attenuation models, proposed by Yu and by Boore et al., respectively, and construct a two-dimensional copula joint probability function as an example to illustrate the treatment of uncertainty at low probability. The results show that the copula joint function gives a better prediction of peak ground motion than the simple linear weighting commonly used in the traditional logic-tree treatment of model uncertainties. In light of its widespread application in risk analysis, from financial investment to insurance assessment, we believe that the copula-based technique will have potential application in seismic hazard analysis.
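For concreteness, the two-dimensional Frank copula can be sampled by conditional inversion, the n = 2 case of the n-dimensional construction mentioned above (a sketch with an assumed dependence parameter θ, not the paper's fitted value):

```python
import numpy as np

def frank_sample(theta, n, rng):
    # conditional inversion: draw U uniform, then invert dC/du = w for V
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = np.exp(-theta * u)
    b = w * (np.exp(-theta) - 1.0) / (a * (1.0 - w) + w)  # e^{-theta v} - 1
    v = -np.log1p(b) / theta
    return u, v

rng = np.random.default_rng(3)
u, v = frank_sample(5.0, 100_000, rng)
# positive theta induces positive dependence, while both margins stay uniform
print(np.corrcoef(u, v)[0, 1] > 0.5)
print(abs(v.mean() - 0.5) < 0.01)
```

Feeding the two attenuation models' marginal uncertainty distributions through such a copula yields a joint probability function, which is what replaces the simple linear weighting of the logic-tree approach.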
Abstract: This article is a further investigation of large deviations for the sums of random variables S_n = Σ_{i=1}^{n} X_i and S(t) = Σ_{i=1}^{N(t)} X_i (t ≥ 0), where {X_n, n ≥ 1} are independent, identically distributed, non-negative random variables, and {N(t), t ≥ 0} is a counting process of non-negative integer-valued random variables, independent of {X_n, n ≥ 1}. Under the assumption F ∈ G, where G is a larger heavy-tailed class than C, we prove large deviation results for such sums.
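The flavour of such results can be checked numerically. For heavy-tailed (subexponential) summands, the classical large-deviation asymptotic is P(S_n > x) ~ n·P(X₁ > x) as x → ∞, the "single big jump" principle. The sketch below uses Pareto tails as an assumed concrete example, not the paper's general class G:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n, x, sims = 1.5, 10, 1000.0, 1_000_000

# Pareto(alpha) via inverse transform: survival P(X > t) = t**(-alpha), t >= 1
samples = rng.uniform(size=(sims, n)) ** (-1.0 / alpha)

p_emp = np.mean(samples.sum(axis=1) > x)   # Monte Carlo estimate of P(S_n > x)
p_ld = n * x ** (-alpha)                   # large-deviation approximation
print(0.8 < p_emp / p_ld < 1.3)            # asymptotic already accurate here
```

Intuitively, a sum of heavy-tailed variables exceeds a very high threshold almost only when a single summand does, which is why the bound n·P(X₁ > x) is tight; light-tailed sums behave entirely differently.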
Fund: Shanghai Municipal Natural Science Foundation (No. 09ZR1413000); the Undergraduate Education Highland Construction Project of Shanghai; and the Key Technology R&D Program of Shanghai Municipality (No. 08160510600).
Abstract: To estimate the life of vacuum fluorescent displays (VFDs) more accurately and to reduce test time and cost, four constant stress accelerated life tests (CSALTs) were conducted under an accelerated life test model. In the model, statistical analysis of the test data is achieved by applying a lognormal function to describe the life distribution and the least squares method (LSM) to calculate the mean value and the standard deviation of the logarithm. As a result, the accelerated life equation was obtained, and self-developed software was written to predict the VFD life. The data analysis results demonstrate that the VFD life follows a lognormal distribution, that the accelerated model fits the linear Arrhenius equation, and that the precise acceleration parameters make it possible to acquire the life information of a VFD within one month.
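The analysis chain described above (lognormal life at each constant stress, least squares on the log-lives, linear Arrhenius extrapolation to the use condition) can be sketched on synthetic data; all temperatures and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
temps = np.array([323.0, 348.0, 373.0, 398.0])   # K: four constant stress levels
a_true, b_true, sigma = -10.0, 6000.0, 0.3       # Arrhenius: mu(T) = a + b/T

mu_hat = []
for T in temps:
    # lognormal failure times at this stress; log-mean follows Arrhenius
    t_fail = rng.lognormal(mean=a_true + b_true / T, sigma=sigma, size=40)
    mu_hat.append(np.log(t_fail).mean())         # estimated log-mean per stress

# linear least squares of mu on 1/T recovers the Arrhenius parameters
b_fit, a_fit = np.polyfit(1.0 / temps, mu_hat, 1)
life_pred = np.exp(a_fit + b_fit / 298.0)        # median life extrapolated to 298 K
print(abs(b_fit - b_true) / b_true < 0.1)        # activation term recovered
```

The extrapolated `life_pred` is the accelerated-life prediction at the use temperature, which is how a one-month stress test yields life information far beyond the test duration.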
Abstract: This paper deals with the development of sample characteristics of the wage distribution in recent years in the Czech Republic by highest educational attainment. Gross monthly wage is the variable investigated. We distinguish the following scale of highest educational attainment: primary and incomplete education, secondary education without GCSE, secondary education with GCSE, higher vocational and bachelor education, and tertiary education. Forecasts of the wage distribution have been developed for the next two years for all of these categories. A three-parameter lognormal curve formed the basis of the theoretical probability distribution. Parameter values of the relevant three-parameter lognormal curves were estimated using the method of L-moments. Forecasts of the sample L-moments were calculated using trend analysis of their past development, and the parameters of the three-parameter lognormal curves for the wage distribution forecasts were calculated from the predicted values of the first three sample L-moments. We have obtained forecasts of the wage distribution by highest educational attainment on the basis of these probability density functions.
Fund: Funded by the National Natural Science Foundation of China (Grant No. 10973026).
Abstract: We report on the properties of strong pulses from PSR B0656+14 by analyzing data obtained using the Urumqi 25-m radio telescope at 1540 MHz from August 2007 to September 2010. In 44 h of observational data, a total of 67 pulses with signal-to-noise ratios above a 5σ threshold were detected. The peak flux densities of these pulses are 58 to 194 times that of the average profile, and their pulse energies are 3 to 68 times that of the average pulse. These pulses are clustered at phases about 5° ahead of the peak of the average profile. Compared with the width of the average profile, they are relatively narrow, with full widths at half-maximum ranging from 0.28° to 1.78°. The distribution of pulse energies follows a lognormal distribution. These sporadic strong pulses detected from PSR B0656+14 have characteristics different from both typical giant pulses and the star's regular pulses.
Fund: Supported by the National Natural Science Foundation of China (No. 71801085).
Abstract: In this paper, the insurance company invests in venture capital and a risk-free asset in constant proportion, and the surplus process is perturbed by diffusion. First, the integro-differential equations satisfied by the expected discounted dividend payments and the Gerber-Shiu function are derived. Then, approximate solutions of the integro-differential equations are obtained through the sinc method. Finally, numerical examples are given for claim sizes following different distributions, and the errors between the explicit solution and the numerical solution are discussed in a special case.