To address the shortage of downward-regulation margin in integrated energy systems caused by the erratic and volatile nature of wind and solar generation, this study formulates a coordinated strategy involving the carbon capture unit of the integrated energy system and the resources on the load-storage side. A scheduling model is devised that accounts for the confidence interval associated with renewable energy generation, with the overarching goal of low-carbon system operation. First, an in-depth analysis is conducted of the temporal energy-shifting attributes and the low-carbon modulation mechanisms of the source-side carbon capture power plant under integrated and flexible operating modes. Drawing on this analysis, a model of the adjustable resources on the load-storage side is devised, based on the electro-thermal coupling within the energy system. Next, the differences in the confidence intervals of renewable energy generation are considered, leading to the proposal of a flexible upper threshold for the confidence interval. Building on this, a low-carbon dispatch model is established for the integrated energy system that factors in the margin allowed by the adjustable resources. Finally, a simulation is performed on a regional electric-heating integrated energy system to assess the impact of source-load-storage coordination on low-carbon operation across various scenarios of down-regulation margin reserves. The findings show that the proactive scheduling model incorporating confidence intervals for the down-regulation margin reserves effectively mitigates the uncertainty of renewable energy generation. Through coordinated orchestration of source, load, and storage, it expands the utilization scope for renewable energy, safeguards the economic efficiency of system operation under low-carbon conditions, and validates the soundness and efficacy of the proposed approach. Funding: supported by the Science and Technology Project of State Grid Inner Mongolia East Power Co., Ltd.: Research on Carbon Flow Apportionment and Assessment Methods for Distributed Energy under Dual Carbon Targets (52664K220004).
Let X denote a discrete distribution such as a Poisson, binomial or negative binomial variable. The score confidence interval for the mean of X, obtained by inverting the hypothesis test based on the central limit theorem, is widely discussed and recommended, but it has sharp downward spikes in coverage for small means. This paper proposes moving the score interval slightly to the left (by about 0.04 units), giving what we call the moved score confidence interval. Numerical computation and Edgeworth expansion show that the moved score interval is entirely analogous to the score interval and behaves better for moderate means; for small means the moved interval raises the infimum of the coverage probability and significantly reduces the sharp spikes. In particular, it has a unified explicit formula that is easy to compute.
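As a rough Python illustration of the interval being modified, the sketch below inverts the normal-approximation test for a Poisson mean to obtain the score interval and then applies a left shift; where exactly the 0.04-unit shift enters is an assumption here, since the abstract does not spell out the formula.

```python
import numpy as np
from scipy.stats import norm

def poisson_score_ci(xbar, n, level=0.95, shift=0.0):
    """Score CI for a Poisson mean, obtained by inverting the normal test
    |xbar - lam| <= z * sqrt(lam / n).  Setting shift=0.04 moves the whole
    interval to the left, a guess at the 'moved' variant in the abstract."""
    z = norm.ppf(1 - (1 - level) / 2)
    centre = xbar + z**2 / (2 * n) - shift
    half = z * np.sqrt(xbar / n + z**2 / (4 * n**2))
    return max(centre - half, 0.0), centre + half

# sample of size 25 with mean count 0.8
print(poisson_score_ci(0.8, 25))               # ordinary score interval
print(poisson_score_ci(0.8, 25, shift=0.04))   # moved score interval
```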
Group testing is a method of pooling a number of units together and performing a single test on the resulting group. It is an appealing option when few individual units are thought to be infected, leading to reduced testing costs compared with testing the units individually. Group testing aims to identify the positive groups among all groups tested or to estimate the proportion of positives (p) in a population. Interval estimation methods for the proportion in group testing with unequal group sizes, adjusted for overdispersion, are examined. Recent improvements in statistical methods allow the construction of highly accurate confidence intervals (CIs). The aim here is to apply group testing for estimation and to generate highly accurate bootstrap CIs for the proportion of defective or positive units. This study compares several established methods of constructing CIs for a binomial proportion after adjusting for overdispersion in group testing with groups of unequal sizes. Bootstrap resampling was applied to data simulated from a binomial distribution, and confidence intervals with high coverage probabilities were produced. The data were assumed to be overdispersed and independent between groups but correlated within groups. Interval estimation methods based on the Wald, logit and complementary log-log (CLL) functions were considered. The main comparison criterion is the coverage probability attained by nominal 95% CIs, though interval width is also considered. Bootstrapping produced CIs with high coverage probabilities for each of the three interval methods.
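The following Python sketch illustrates the general recipe described above in its simplest form: a crude grid-search MLE of the prevalence p from pooled tests of unequal sizes, wrapped in a percentile bootstrap over groups. The likelihood, group sizes and bootstrap settings are illustrative assumptions, not the study's actual design, and the Wald/logit/CLL adjustments for overdispersion are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_p(results, sizes, grid=np.linspace(1e-4, 0.5, 1000)):
    """Crude grid-search MLE of the per-unit prevalence p from pooled tests:
    a pool of size k tests positive with probability 1 - (1 - p)**k."""
    q = (1 - grid)[:, None] ** sizes[None, :]                  # P(pool negative)
    ll = np.where(results[None, :] == 1, np.log1p(-q), np.log(q)).sum(axis=1)
    return grid[int(np.argmax(ll))]

def percentile_boot_ci(results, sizes, B=500, level=0.95):
    """Percentile bootstrap CI for p: resample whole pools with replacement."""
    n = len(results)
    est = [mle_p(results[idx], sizes[idx])
           for idx in (rng.integers(0, n, n) for _ in range(B))]
    return tuple(np.quantile(est, [(1 - level) / 2, 1 - (1 - level) / 2]))

# 30 pools of unequal size, outcomes simulated with true p = 0.05
sizes = rng.integers(5, 16, 30)
results = (rng.random(30) < 1 - (1 - 0.05) ** sizes).astype(int)
print(mle_p(results, sizes), percentile_boot_ci(results, sizes))
```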
Suppose that there are two populations x and y, both with missing data, where x has an unknown distribution function F(·) and y has a distribution function Gθ(·) with a probability density function gθ(·) of known form depending on an unknown parameter θ. Fractional imputation is used to fill in the missing data. The asymptotic distributions of the semi-empirical likelihood ratio statistic are obtained under mild conditions. Empirical likelihood confidence intervals for the difference between x and y are then constructed. Funding: the NSF of China (10661003); SRF for ROCS, SEM ([2004]527); the NSF of Guangxi (0728092); and the Innovation Project of Guangxi Graduate Education ([2006]40).
In cancer survival analysis, it is frequently necessary to estimate confidence intervals for survival probabilities, but this calculation is not commonly included in the most popular computer packages, or only one method of estimation is provided. In the present paper, we describe a microcomputer program for estimating the confidence intervals of survival probabilities when the survival functions are estimated using the Kaplan-Meier product-limit or life-table method. Five methods of estimation are included in the program (SPCI): the classical method (based on Greenwood's formula for the variance of S(ti)), the Rothman-Wilson method, and the arcsine, log(-log) and logit transformation methods. Two example analyses are given to test the performance of the program.
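To make two of the five interval methods concrete, the following Python sketch computes the Kaplan-Meier estimate by hand together with the classical Greenwood interval and the log(-log)-transformed interval; the toy data and the handling of boundary cases are assumptions, and the other three methods (Rothman-Wilson, arcsine, logit) follow the same pattern with different transformations.

```python
import numpy as np
from scipy.stats import norm

def km_with_cis(time, event, level=0.95):
    """Kaplan-Meier estimate with two interval methods: the classical
    Greenwood ('plain') interval and the log(-log)-transformed interval.
    Returns one row per distinct event time."""
    z = norm.ppf(1 - (1 - level) / 2)
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    s, var_term, out = 1.0, 0.0, []
    for t in np.unique(time[event == 1]):
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1 - d / n_risk
        var_term += d / (n_risk * (n_risk - d))        # Greenwood sum
        se = s * np.sqrt(var_term)
        plain = (max(s - z * se, 0.0), min(s + z * se, 1.0))
        if 0 < s < 1:
            # CI on log(-log S), back-transformed: [s**theta, s**(1/theta)]
            theta = np.exp(z * np.sqrt(var_term) / abs(np.log(s)))
            loglog = (s ** theta, s ** (1 / theta))
        else:
            loglog = (s, s)
        out.append((t, s, plain, loglog))
    return out

time = [3, 5, 6, 6, 8, 10, 11, 14, 14, 18]
event = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]                  # 0 = censored
for row in km_with_cis(time, event):
    print(row)
```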
Purpose: We aim to extend our investigations related to the Relative Intensity of Collaboration (RIC) indicator by constructing a confidence interval for the obtained values. Design/methodology/approach: We use Mantel-Haenszel statistics as applied recently by Smolinsky, Klingenberg, and Marx. Findings: We obtain confidence intervals for the RIC indicator. Research limitations: It is not obvious that data obtained from the Web of Science (or any other database) can be considered a random sample. Practical implications: We explain how to calculate confidence intervals. Bibliometric indicators are more often than not presented as precise values instead of an approximation depending on the database and the time of measurement. Our approach presents a suggestion to solve this problem. Originality/value: Our approach combines the statistics of binary categorical data and bibliometric studies of collaboration.
This paper provides methods for assessing the precision of cost elasticity estimates when the underlying regression function is assumed to be polynomial. Specifically, the paper adapts two well-known methods for computing confidence intervals for ratios: the delta method and the Fieller method. We show that performing the estimation with mean-centered explanatory variables provides a straightforward way to estimate the elasticity and compute a confidence interval for it. A theoretical discussion of the proposed methods is provided, as well as an empirical example based on publicly available postal data. Possible areas of application include postal service providers worldwide, transportation and electricity.
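A minimal sketch of the two interval constructions for a generic ratio of two asymptotically normal estimates is given below; the coefficient values and covariance matrix are made up for illustration, and in the elasticity application a and b would be coefficients from the mean-centered polynomial cost regression.

```python
import numpy as np
from scipy.stats import norm

def ratio_cis(a, b, cov, level=0.95):
    """Delta-method and Fieller confidence intervals for theta = a / b,
    given estimates (a, b) with 2x2 covariance matrix cov."""
    z = norm.ppf(1 - (1 - level) / 2)
    theta = a / b
    # Delta method: Var(theta) ~ grad' Cov grad with grad = (1/b, -a/b**2)
    grad = np.array([1 / b, -a / b**2])
    se = np.sqrt(grad @ cov @ grad)
    delta_ci = (theta - z * se, theta + z * se)
    # Fieller: solve (a - theta*b)**2 <= z**2 * Var(a - theta*b) as a quadratic in theta
    vaa, vab, vbb = cov[0, 0], cov[0, 1], cov[1, 1]
    A = b**2 - z**2 * vbb
    B = -2 * (a * b - z**2 * vab)
    C = a**2 - z**2 * vaa
    disc = B**2 - 4 * A * C
    fieller_ci = None
    if A > 0 and disc > 0:                       # bounded-interval case
        fieller_ci = ((-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A))
    return delta_ci, fieller_ci

cov = np.array([[0.04, 0.01], [0.01, 0.09]])     # illustrative covariance
print(ratio_cis(1.2, 2.5, cov))
```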
This paper presents four methods of constructing the confidence interval for the proportion p of the binomial distribution. Evidence in the literature indicates the standard Wald confidence interval for the binomial proportion is inaccurate, especially for extreme values of p. Even for moderately large sample sizes, the coverage probabilities of the Wald confidence interval prove to be erratic for extreme values of p. Three alternative confidence intervals, namely the Wilson confidence interval, the Clopper-Pearson interval, and the likelihood interval, are compared to the Wald confidence interval on the basis of coverage probability and expected length by means of simulation.
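The sketch below implements the Wald, Wilson and Clopper-Pearson intervals and evaluates their coverage; note that coverage is computed exactly here by summing binomial probabilities rather than by simulation as in the paper, and the likelihood interval is omitted.

```python
import numpy as np
from scipy.stats import norm, beta, binom

z = norm.ppf(0.975)

def wald(x, n):
    p = x / n
    h = z * np.sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson(x, n):
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def clopper_pearson(x, n):
    lo = beta.ppf(0.025, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(0.975, x + 1, n - x) if x < n else 1.0
    return lo, hi

def coverage(ci, n, p):
    """Exact coverage at p: sum binomial pmf over x whose interval contains p."""
    return sum(binom.pmf(x, n, p) for x in range(n + 1)
               if ci(x, n)[0] <= p <= ci(x, n)[1])

for p in (0.01, 0.05, 0.5):
    print(p, [round(coverage(f, 40, p), 3) for f in (wald, wilson, clopper_pearson)])
```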
To improve the forecasting reliability of travel time, the time-varying confidence interval of travel time on arterials is forecasted using an autoregressive integrated moving average and generalized autoregressive conditional heteroskedasticity (ARIMA-GARCH) model, in which the ARIMA model serves as the mean equation of the GARCH model to capture travel time levels and the GARCH model captures the conditional variances of travel time. The proposed method is validated and evaluated using actual traffic flow data collected from the traffic monitoring system of Kunshan city. The evaluation results show that, compared with the conventional ARIMA model, the proposed model does not significantly improve the forecasting of travel time levels but has an advantage in forecasting travel time volatility. The proposed model captures travel time heteroskedasticity well and forecasts time-varying confidence intervals of travel time that better reflect the volatility of observed travel times than the fixed confidence interval provided by the ARIMA model. Funding: The National Natural Science Foundation of China (No. 51108079).
The random finite difference method (RFDM) is a popular approach to quantitatively evaluate the influence of the inherent spatial variability of soil on the deformation of embedded tunnels. However, its high computational cost remains a challenge for application in complex scenarios. To address this limitation, a deep learning-based method for efficient prediction of tunnel deformation in spatially variable soil is proposed. The method uses a one-dimensional convolutional neural network (CNN) to learn the mapping from random field input to the factor of safety of tunnel deformation. The mean squared error and correlation coefficient of the CNN model on a previously unseen dataset were less than 0.02 and larger than 0.96, respectively, meaning the trained CNN model can replace RFDM analysis in Monte Carlo simulations with a small but sufficient number of random field samples (about 40 samples per case in this study). Machine learning and deep learning models share a common limitation: only a deterministic outcome is given and the confidence of the predicted result is unknown, which calls for an approach to gauge the model's confidence interval. This is achieved by applying dropout to all layers of the original model, retraining it, and keeping dropout active when performing inference. The excellent agreement between the CNN predictions and the RFDM results demonstrates that the proposed deep learning-based method has potential for tunnel performance analysis in spatially variable soils. Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52130805 and 52022070) and the Shanghai Science and Technology Committee Program (Grant No. 20dz1202200).
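A minimal Monte Carlo dropout sketch in PyTorch is given below to illustrate the inference-time-dropout idea; the tiny network, dropout rate and number of stochastic passes are placeholders and not the paper's actual CNN or settings.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal 1-D CNN with dropout after every layer (not the paper's
    architecture), used only to illustrate Monte Carlo dropout."""
    def __init__(self, in_len=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.Dropout(0.1),
            nn.Conv1d(8, 8, 5, padding=2), nn.ReLU(), nn.Dropout(0.1),
            nn.Flatten(), nn.Linear(8 * in_len, 1))
    def forward(self, x):
        return self.net(x)

def mc_dropout_interval(model, x, passes=100, level=0.95):
    """Keep dropout active at inference and repeat the forward pass;
    the spread of the outputs gives an approximate confidence interval."""
    model.train()                      # leaves the dropout layers 'on'
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    lo = torch.quantile(preds, (1 - level) / 2, dim=0)
    hi = torch.quantile(preds, 1 - (1 - level) / 2, dim=0)
    return preds.mean(0), lo, hi

x = torch.randn(4, 1, 64)              # 4 stand-in random-field samples, 64 points each
print(mc_dropout_interval(TinyCNN(), x))
```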
A novel damage detection method is applied to a 3-story frame structure to obtain a statistical quantification control criterion for the existence, location and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and the EWMA is insignificant because the structural response is neither independent nor normally distributed. On the other hand, damage information is detected well with the standard deviation, because the influence of the data distribution is not pronounced for this parameter. A suitable moderate confidence level is explored for more significant damage location and quantification detection, and the impact of noise is investigated to illustrate the robustness of the method. Funding: National Natural Science Foundation of China under Grant Nos. 50778077 and 50608036, and the Graduate Innovation Fund of Huazhong University of Science and Technology under Grant No. HF-06-028.
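The EWMA chart referred to above follows the standard SPC construction; a minimal sketch, assuming a Gaussian baseline record and the usual lambda = 0.2, L = 3 chart constants, is given below.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA control chart in the SPC sense: z_t = lam*x_t + (1-lam)*z_{t-1},
    with limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2t))).  The
    baseline mean and std are taken from the series itself here, which is
    only sensible for a healthy-state reference record."""
    x = np.asarray(x, dtype=float)
    mu0, sigma = x.mean(), x.std(ddof=1)
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1 - lam) * mu0
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    t = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    out_of_control = (z > mu0 + width) | (z < mu0 - width)
    return z, mu0 + width, mu0 - width, out_of_control

rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, 200)
print(ewma_chart(healthy)[3].any())    # should rarely flag a healthy record
```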
The probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor when computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are roughly 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while it satisfies a Gaussian distribution when the time delay is large enough; a cluster effect mechanism is presented to explain this phenomenon. A study of the shape of the PDFs clearly indicates that the time delay plays a more important role than the embedding dimension in the reconstruction. The results demonstrate that the PDF method is a promising numerical approach for observing the reconstructed attractor structure and may provide more information and new diagnostic potential for the analysed cardiac system.
We extend an existing method to a sequence of binomial data and propose a stepwise confidence interval method for toxicity studies; in addition, two methods of constructing intervals for the risk difference are proposed. The first is based on the well-known conditional confidence intervals for the odds ratio, and the other comes from Santner's small-sample confidence intervals for the difference of two success probabilities; it produces exact intervals when our method is employed.
The Poisson distribution is widely used as a standard model for analyzing count data. Most of the usual confidence interval constructions are based on an asymptotic approximation to the distribution of the sample mean, i.e. the Wald interval. The Wald interval, however, performs poorly in terms of coverage probability and average width for small means and small to moderate sample sizes. In this paper, an approximate confidence interval for a Poisson mean is proposed, based on empirically determined tail probabilities. Simulation results show that the proposed interval outperforms the others for small means and small to moderate sample sizes.
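The coverage deficiency of the Wald interval described above is easy to reproduce; the sketch below simulates its coverage for a Poisson mean (the paper's proposed tail-probability interval is not reconstructed here, since the abstract does not give its formula).

```python
import numpy as np
from scipy.stats import norm

z = norm.ppf(0.975)
rng = np.random.default_rng(0)

def wald_poisson(xbar, n):
    """Nominal 95% Wald interval for a Poisson mean: xbar +/- z*sqrt(xbar/n)."""
    h = z * np.sqrt(xbar / n)
    return max(xbar - h, 0.0), xbar + h

def coverage(mu, n, reps=10000):
    """Simulated coverage probability of the Wald interval at true mean mu."""
    hits = 0
    for _ in range(reps):
        xbar = rng.poisson(mu, n).mean()
        lo, hi = wald_poisson(xbar, n)
        hits += lo <= mu <= hi
    return hits / reps

for mu in (0.2, 1.0, 5.0):
    print(mu, coverage(mu, n=10))      # coverage falls well below 0.95 for small mu
```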
This article offers different algorithms for estimating the Weibull Geometric (WG) distribution under the progressive Type II censoring sampling plan, especially joint confidence intervals for the parameters. The approximate joint confidence intervals for the parameters, the approximate confidence regions and percentile bootstrap confidence intervals are discussed, and several Markov chain Monte Carlo (MCMC) techniques are presented. In terms of mean square errors (MSEs) and credible interval lengths, the Bayes estimators based on non-informative priors are more effective than the maximum likelihood estimates (MLEs) and the bootstrap. Comparing the models, the MSEs and average confidence interval lengths of the MLEs and Bayes estimators of the parameters are smaller for the censored models.
We discuss formulas and techniques for finding maximum-likelihood estimators of the parameters of autoregressive (with particular emphasis on Markov and Yule) models, computing their asymptotic variance-covariance matrix, and displaying the resulting confidence regions; Monte Carlo simulation is then used to establish the accuracy of the corresponding level of confidence. The results indicate that a direct application of the Central Limit Theorem yields errors too large to be acceptable; instead, we recommend a technique based directly on the natural logarithm of the likelihood function, verifying its substantially higher accuracy. Our study is then extended to the case of estimating only a subset of a model's parameters, when the remaining (nuisance) parameters are of no interest to us.
In data envelopment analysis (DEA), input and output values are subject to change for several reasons. Such variations differ across input/output items and decision-making units (DMUs). Hence, DEA efficiency scores need to be examined with these factors in mind. In this paper, we propose new resampling models based on these variations for gauging the confidence intervals of DEA scores. The first model utilizes past-present data for estimating data variations, imposing chronological-order weights supplied by the Lucas series (a variant of the Fibonacci series). The second model deals with future prospects and aims at forecasting the future efficiency score and its confidence interval for each DMU. We applied our models to a dataset composed of Japanese municipal hospitals.
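A small sketch of the Lucas-series weighting for the past-present resampling model is given below; the starting terms of the series, the weight normalisation and the placeholder data are assumptions, and the DEA scoring step itself is omitted.

```python
import numpy as np

def lucas_weights(T):
    """Weights proportional to Lucas numbers 1, 3, 4, 7, 11, ... (the standard
    series 2, 1, 3, 4, ... with the leading 2 dropped), normalised to sum to 1
    so that more recent periods receive larger weights; the exact convention
    in the paper may differ slightly."""
    l = [1, 3]
    while len(l) < T:
        l.append(l[-1] + l[-2])
    w = np.array(l[:T], dtype=float)
    return w / w.sum()

def resample_history(data, n_rep=1000, rng=np.random.default_rng(0)):
    """Resample past-present records with Lucas weights; each replica would
    then be fed to a DEA model (omitted here) and the empirical 2.5%-97.5%
    quantiles of the resulting scores read off as a confidence interval."""
    T = len(data)
    idx = rng.choice(T, size=(n_rep,), p=lucas_weights(T))
    return data[idx]

history = np.arange(8)                 # stand-in for 8 periods of DEA data
print(lucas_weights(8), resample_history(history)[:10])
```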
The Yule-Simon distribution has a wide range of practical applications, for example in network science, biology and the humanities. Much work focuses on how well empirical data fit the Yule-Simon distribution or on how to estimate its parameter. Some problems remain open, such as the error analysis of parameter estimation and a theoretical proof of the convergence of the iterative algorithm for maximum likelihood estimation of the parameter. The Yule-Simon distribution is heavy-tailed and its parameter is usually less than 2, so the variance does not exist, which makes it difficult to give an interval estimate of the parameter. Using a compression transformation, this paper proposes a method of interval estimation based on the central limit theorem; this method can be applied to many heavy-tailed distributions. Two other asymptotic confidence intervals for the parameter are obtained based on the maximum likelihood and the mode method. These estimation methods are compared in simulations and in applications to empirical data. Funding: supported by the National Natural Science Foundation of China (Grant No. 11961035) and the Jiangxi Provincial Natural Science Foundation (Grant No. 20224BCD41001).
Although there are many measures of variability for qualitative variables, they are little used in social research, nor are they included in statistical software. The aim of this article is to present six easily computed measures of variation for qualitative variables and to facilitate their use by means of the R software. The measures considered are, on the one hand, Freeman's variation ratio, Moral's universal variation ratio, Kvalseth's standard deviation from the mode, and Wilcox's variation ratio, which are most affected by proximity to a constant random variable, where measures of variability for qualitative variables reach their minimum value of 0. On the other hand, the Gibbs-Poston index of qualitative variation and Shannon's relative entropy are included, which are more affected by proximity to a uniform distribution, where measures of variability for qualitative variables reach their maximum value of 1. Point and interval estimation are addressed. Bootstrap confidence intervals are obtained by the percentile and bias-corrected and accelerated percentile methods. Two calculation situations are presented: with a single sample mode and with two or more modes. The standard deviation from the mode among the six considered measures, and the universal variation ratio among the three variation ratios, are particularly recommended for use.
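Although the article works in R, a compact Python sketch of two of the six measures (Freeman's variation ratio and Shannon's relative entropy) with plain percentile bootstrap intervals is given below; the BCa variant and the remaining measures are omitted, and the toy sample is an assumption.

```python
import numpy as np

def variation_ratio(x):
    """Freeman's variation ratio: 1 - relative frequency of the modal category."""
    _, counts = np.unique(x, return_counts=True)
    return 1 - counts.max() / counts.sum()

def relative_entropy(x):
    """Shannon's relative entropy: H(p) / log(K), lying in [0, 1]."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p))) if len(p) > 1 else 0.0

def percentile_ci(stat, x, B=2000, level=0.95, rng=np.random.default_rng(0)):
    """Plain percentile bootstrap CI (the BCa variant mentioned above is omitted)."""
    x = np.asarray(x)
    boot = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(B)]
    return tuple(np.quantile(boot, [(1 - level) / 2, 1 - (1 - level) / 2]))

sample = np.array(list("aaabbbbccdde"))        # toy qualitative sample
print(variation_ratio(sample), percentile_ci(variation_ratio, sample))
print(relative_entropy(sample), percentile_ci(relative_entropy, sample))
```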
An S-N curve fitting approach is proposed based on the weighted least squares method, with weights inversely proportional to the length of the mean confidence intervals of the experimental data sets. This assumption coincides with the physical characteristics of fatigue life scatter. Two examples demonstrate the method. It is shown that the method has better accuracy and reasonableness compared with the usual least squares method.
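A minimal sketch of the weighting idea is given below, assuming a log-log S-N relation (log N linear in log S) and replicated fatigue tests at each stress level; the interpretation of "weight inversely proportional to CI length" as the WLS weight, as well as the illustrative data, are assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.stats import t as t_dist

def sn_wls_fit(stress_levels, lives, level=0.95):
    """Fit log10(N) = a + b * log10(S) by weighted least squares, weighting
    each stress level by the reciprocal of the length of the mean confidence
    interval of log-life at that level."""
    xs, ys, ws = [], [], []
    for S, N in zip(stress_levels, lives):
        logN = np.log10(np.asarray(N, dtype=float))
        n = len(logN)
        half = t_dist.ppf(0.5 + level / 2, n - 1) * logN.std(ddof=1) / np.sqrt(n)
        xs.append(np.log10(S))
        ys.append(logN.mean())
        ws.append(1.0 / (2 * half))            # WLS weight ~ 1 / CI length
    # np.polyfit squares its w argument internally, hence the square root
    b, a = np.polyfit(xs, ys, 1, w=np.sqrt(ws))
    return a, b

# three stress levels, each with replicated fatigue lives (illustrative numbers)
stresses = [400, 350, 300]
lives = [[1.2e4, 1.5e4, 1.1e4], [5.0e4, 7.5e4, 6.1e4], [2.1e5, 3.4e5, 2.6e5]]
print(sn_wls_fit(stresses, lives))
```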