Addressing the insufficient down-regulation margin in integrated energy systems caused by the erratic and volatile nature of wind and solar generation, this study formulates a coordinated strategy involving the carbon capture unit of the integrated energy system and the resources on the load-storage side. A scheduling model is devised that accounts for the confidence interval of renewable energy generation, with the overarching goal of low-carbon system operation. First, an in-depth analysis is conducted of the temporal energy-shifting characteristics and low-carbon regulation mechanisms of the source-side carbon capture power plant under integrated, flexible operation. Drawing on this analysis, a model of the adjustable resources on the load-storage side is devised, based on the electro-thermal coupling within the energy system. Subsequently, the differences in the confidence intervals of renewable energy generation are considered, leading to a flexible upper threshold for the confidence interval. Building on this, a low-carbon dispatch model is established for the integrated energy system, factoring in the margin provided by the adjustable resources. Finally, a simulation is performed on a regional electric-heating integrated energy system to assess the impact of source-load-storage coordination on low-carbon operation across various scenarios of down-regulation margin reserves. The findings show that the proposed scheduling model, which incorporates confidence intervals for down-regulation margin reserves, effectively mitigates the uncertainty of renewable generation. Through coordinated orchestration of source, load, and storage, it expands the utilization scope of renewable energy, safeguards the economic efficiency of system operation under low-carbon conditions, and validates the soundness and efficacy of the proposed approach.
To improve the forecasting reliability of travel time, the time-varying confidence interval of travel time on arterials is forecasted using an autoregressive integrated moving average and generalized autoregressive conditional heteroskedasticity (ARIMA-GARCH) model, in which the ARIMA model serves as the mean equation of the GARCH model to capture travel time levels, and the GARCH model captures the conditional variances of travel time. The proposed method is validated and evaluated using actual traffic flow data collected from the traffic monitoring system of Kunshan city. The evaluation results show that, compared with the conventional ARIMA model, the proposed model does not significantly improve the forecasting of travel time levels but has an advantage in forecasting travel time volatility. The proposed model captures travel time heteroskedasticity well and forecasts time-varying confidence intervals that better reflect the volatility of observed travel times than the fixed confidence interval provided by the ARIMA model.
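A minimal sketch of this kind of two-step ARIMA-GARCH interval forecast, assuming the `statsmodels` and `arch` packages and a synthetic stand-in for the travel-time series (the model orders and data are illustrative, not those of the paper):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(0)
# Synthetic stand-in for an arterial travel-time series (seconds).
tt = 120 + 0.1 * np.cumsum(rng.normal(0, 1, 500)) + rng.normal(0, 5, 500)

# Step 1: ARIMA mean equation for the travel time levels.
mean_fit = ARIMA(tt, order=(1, 0, 1)).fit()
resid = mean_fit.resid

# Step 2: GARCH(1,1) on the ARIMA residuals for the conditional variance.
garch_fit = arch_model(resid, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")

# One-step-ahead forecast and a time-varying 95% interval.
mu = mean_fit.forecast(steps=1)[0]
sigma = np.sqrt(garch_fit.forecast(horizon=1).variance.values[-1, 0])
print(f"95% interval: [{mu - 1.96 * sigma:.1f}, {mu + 1.96 * sigma:.1f}]")
```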
The random finite difference method (RFDM) is a popular approach to quantitatively evaluate the influence of the inherent spatial variability of soil on the deformation of embedded tunnels. However, the high computational cost is an ongoing challenge for its application in complex scenarios. To address this limitation, a deep learning-based method for efficient prediction of tunnel deformation in spatially variable soil is proposed. The proposed method uses a one-dimensional convolutional neural network (CNN) to identify the pattern between the random field input and the factor of safety of tunnel deformation output. The mean squared error and correlation coefficient of the CNN model applied to the untrained dataset were less than 0.02 and larger than 0.96, respectively. This means that the trained CNN model can replace RFDM analysis for Monte Carlo simulations with a small but sufficient number of random field samples (about 40 samples for each case in this study). Machine learning and deep learning models share a common limitation: the confidence of a predicted result is unknown, and only a deterministic outcome is given. This calls for an approach to gauge the model's confidence interval. It is achieved by applying dropout to all layers of the original model, retraining the model, and keeping dropout active when performing inference. The excellent agreement between the CNN model predictions and the RFDM results demonstrates that the proposed deep learning-based method has potential for tunnel performance analysis in spatially variable soils.
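A minimal sketch of the dropout-at-inference idea (Monte Carlo dropout), assuming PyTorch; the tiny 1-D CNN and input length are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Placeholder 1-D CNN mapping a random-field sample to a scalar factor of safety.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.Dropout(0.1),
    nn.Flatten(),
    nn.Linear(16 * 64, 32), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(32, 1),
)

def mc_dropout_predict(model, x, n_samples=200):
    """Repeated stochastic forward passes with dropout kept active."""
    model.train()                       # leaves dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)

x = torch.randn(1, 1, 64)               # one random-field input of length 64
mean, std = mc_dropout_predict(model, x)
print(mean - 1.96 * std, mean + 1.96 * std)   # approximate 95% band
```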
A novel damage detection method is applied to a 3-story frame structure to obtain a statistical quantification control criterion for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and EWMA is insignificant because the structural response is neither independent nor normally distributed. On the other hand, damage information is detected well with the standard deviation, because the influence of the data distribution is less pronounced for this parameter. A suitable moderate confidence level is explored for more significant damage location and quantification, and the impact of noise is investigated to illustrate the robustness of the method.
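A minimal sketch of a standard-deviation control chart of the kind the abstract favors, with control limits set from a healthy baseline signal (the window size, limit multiplier, and synthetic responses are illustrative, not the paper's procedure):

```python
import numpy as np

def windowed_std(x, window):
    """Standard deviation of consecutive non-overlapping windows."""
    n = len(x) // window
    return np.array([x[i * window:(i + 1) * window].std(ddof=1) for i in range(n)])

def std_control_chart(baseline, monitored, window=64, k=2.58):
    """Control limits on the windowed std, set from the healthy baseline."""
    base = windowed_std(baseline, window)
    ucl = base.mean() + k * base.std(ddof=1)   # upper control limit
    lcl = base.mean() - k * base.std(ddof=1)   # lower control limit
    stats = windowed_std(monitored, window)
    return stats, (stats > ucl) | (stats < lcl)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 4096)    # healthy-structure response
damaged = rng.normal(0.0, 1.3, 4096)    # damage inflates response variance
stats, flags = std_control_chart(healthy, damaged)
print(f"{flags.mean():.0%} of windows flagged")
```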
Let X denote a discrete distribution such as a Poisson, binomial, or negative binomial variable. The score confidence interval for the mean of X, obtained by inverting the hypothesis test based on the central limit theorem, is widely discussed and recommended. But it has sharp downward spikes in coverage for small means. This paper proposes to move the score interval left a little (about 0.04 units), giving the moved score confidence interval. Numerical computation and Edgeworth expansion show that the moved score interval is entirely analogous to the score interval and behaves better for moderate means; for small means, the moved interval raises the infimum of the coverage probability and improves the sharp spikes significantly. In particular, it has a unified explicit formulation that is easy to compute.
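A minimal sketch of the score interval for a Poisson mean and the left-shifted variant the abstract describes, assuming the roughly 0.04-unit shift is applied directly to both endpoints (our reading of the text):

```python
import numpy as np
from scipy.stats import norm

def score_interval_poisson(xbar, n, level=0.95):
    """Score CI for a Poisson mean, from inverting the score test."""
    z = norm.ppf(0.5 + level / 2)
    center = xbar + z**2 / (2 * n)
    half = z * np.sqrt(xbar / n + z**2 / (4 * n**2))
    return center - half, center + half

def moved_score_interval(xbar, n, level=0.95, shift=0.04):
    """The score interval moved left by ~0.04 units (shift value from the abstract)."""
    lo, hi = score_interval_poisson(xbar, n, level)
    return lo - shift, hi - shift

print(score_interval_poisson(xbar=1.2, n=10))
print(moved_score_interval(xbar=1.2, n=10))
```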
Group testing is a method of pooling a number of units together and performing a single test on the resulting group. It is an appealing option when few individual units are thought to be infected, reducing the cost of testing compared with testing units individually. Group testing aims to identify the positive groups among all groups tested or to estimate the proportion of positives (p) in a population. Interval estimation methods for proportions in group testing with unequal group sizes, adjusted for overdispersion, have been examined. Recent improvements in statistical methods allow the construction of highly accurate confidence intervals (CIs). The aim here is to apply group testing for estimation and to generate highly accurate bootstrap CIs for the proportion of defective or positive units in particular. This study compared several proven methods of constructing CIs for a binomial proportion after adjusting for overdispersion in group testing with groups of unequal sizes. Bootstrap resampling was applied to data simulated from a binomial distribution, and confidence intervals with high coverage probabilities were produced. The data were assumed to be overdispersed and independent between groups but correlated within groups. Interval estimation methods based on the Wald, logit, and complementary log-log (CLL) functions were considered. The criterion used in the comparisons is mainly the coverage probability attained by nominal 95% CIs, though interval width is also considered. Bootstrapping produced CIs with high coverage probabilities for each of the three interval methods.
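A minimal sketch of bootstrap interval estimation for the group-testing proportion with unequal group sizes, assuming a numerical MLE and resampling of whole groups to respect within-group correlation; for brevity it shows the percentile interval rather than the Wald, logit, or CLL constructions the study compares:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_p(sizes, pos):
    """Numerical MLE of the individual-level proportion p from group results."""
    def nll(p):
        q = 1.0 - p
        return -np.sum(pos * np.log(1 - q**sizes) + (1 - pos) * sizes * np.log(q))
    return minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x

def bootstrap_ci(sizes, pos, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI, resampling whole groups."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(n_boot):
        i = rng.integers(0, len(sizes), len(sizes))
        est.append(mle_p(sizes[i], pos[i]))
    a = (1 - level) / 2
    return np.quantile(est, [a, 1 - a])

rng = np.random.default_rng(1)
sizes = rng.integers(5, 16, size=40)              # unequal group sizes
pos = rng.random(40) < 1 - (1 - 0.05)**sizes      # group positive w.p. 1-(1-p)^s
print(mle_p(sizes, pos), bootstrap_ci(sizes, pos))
```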
We extend one method to a sequence of binomial data and propose a stepwise confidence interval method for toxicity studies. In addition, two methods of constructing intervals for the risk difference are proposed. The first is based on the well-known conditional confidence intervals for the odds ratio, and the other derives from Santner's small-sample confidence intervals for the difference of two success probabilities; employing our method, it produces exact intervals.
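For the conditional (exact) odds-ratio interval that the first construction builds on, SciPy (version 1.10 or later) offers a ready-made implementation; a small sketch with a hypothetical 2×2 toxicity table:

```python
from scipy.stats.contingency import odds_ratio

# Hypothetical 2x2 table: rows = dose groups, columns = toxicity yes/no.
table = [[7, 27], [1, 33]]
res = odds_ratio(table)                 # conditional (exact) estimate by default
ci = res.confidence_interval(confidence_level=0.95)
print(res.statistic, (ci.low, ci.high))
```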
Suppose that there are two populations x and y, both with missing data, where x has an unknown distribution function F(·) and y has a distribution function Gθ(·) with a probability density function gθ(·) of known form depending on an unknown parameter θ. Fractional imputation is used to fill in the missing data. The asymptotic distributions of the semi-empirical likelihood ratio statistic are obtained under mild conditions. Then, empirical likelihood confidence intervals for the differences between x and y are constructed.
The Poisson distribution is widely used as a standard model for analyzing count data. Most of the usual confidence interval constructions are based on an asymptotic approximation to the distribution of the sample mean, using the Wald interval. However, the Wald interval performs poorly in terms of coverage probability and average interval width for small means and small to moderate sample sizes. In this paper, an approximate confidence interval for a Poisson mean is proposed, based on empirically determined tail probabilities. Simulation results show that the proposed interval outperforms the others for small means and small to moderate sample sizes.
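A minimal sketch of the baseline Wald interval for a Poisson mean, with a coverage check that reproduces the kind of undercoverage the abstract criticizes (the simulation settings are illustrative):

```python
import numpy as np
from scipy.stats import norm

def wald_interval_poisson(xbar, n, level=0.95):
    """Wald CI for a Poisson mean: xbar +/- z * sqrt(xbar / n)."""
    z = norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(xbar / n)
    return xbar - half, xbar + half

def coverage(lam, n, n_sim=100_000, seed=0):
    """Empirical coverage of the nominal 95% Wald interval."""
    rng = np.random.default_rng(seed)
    xbar = rng.poisson(lam, size=(n_sim, n)).mean(axis=1)
    lo, hi = wald_interval_poisson(xbar, n)
    return np.mean((lo <= lam) & (lam <= hi))

# Coverage collapses for small means and small n, as the abstract notes.
for lam in (0.1, 0.5, 2.0, 10.0):
    print(lam, coverage(lam, n=10))
```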
This article offers different algorithms for estimating the Weibull Geometric (WG) distribution under the progressive Type II censoring sampling plan, especially joint confidence intervals for the parameters. Approximate joint confidence intervals for the parameters, approximate confidence regions, and percentile bootstrap confidence intervals are discussed, and several Markov chain Monte Carlo (MCMC) techniques are presented. In terms of mean square errors (MSEs) and credible interval lengths, the Bayes estimators based on non-informative priors perform more effectively than the maximum likelihood estimates (MLEs) and the bootstrap. Comparing the models, the MSEs and average confidence interval lengths of the MLEs and the Bayes estimators are smaller for the censored models.
In cancer survival analysis, it is frequently necessary to estimate confidence intervals for survival probabilities. However, this calculation is not commonly included in the most popular computer packages, or only one method of estimation is provided. In the present paper, we describe a microcomputer program (SPCI) for estimating the confidence intervals of survival probabilities when the survival functions are estimated using the Kaplan-Meier product-limit or life-table method. Five methods of estimation are implemented: the classical method (based on Greenwood's formula for the variance of S(ti)), the Rothman-Wilson method, and the arcsine, log(-log), and logit transformation methods. Two example analyses are given to test the program's performance.
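A minimal sketch of two of these constructions, the classical Greenwood interval and the log(-log) transformation interval, on a hand-rolled Kaplan-Meier estimate (the toy data are illustrative):

```python
import numpy as np
from scipy.stats import norm

def km_with_ci(time, event, level=0.95):
    """Kaplan-Meier estimate with the classical (Greenwood) CI and the
    log(-log)-transformed CI, which stays inside [0, 1] by construction."""
    z = norm.ppf(0.5 + level / 2)
    time, event = np.asarray(time), np.asarray(event)
    s, var_sum, rows = 1.0, 0.0, []
    for t in np.unique(time[event == 1]):          # ordered event times
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1 - d / n_risk
        var_sum += d / (n_risk * (n_risk - d))
        se = s * np.sqrt(var_sum)                  # Greenwood SE of S(t)
        classical = (s - z * se, s + z * se)
        se_ll = np.sqrt(var_sum) / abs(np.log(s))  # SE on the log(-log) scale
        loglog = (s ** np.exp(z * se_ll), s ** np.exp(-z * se_ll))
        rows.append((t, s, classical, loglog))
    return rows

time = [3, 5, 5, 8, 10, 12, 15, 18]
event = [1, 1, 0, 1, 1, 0, 1, 0]   # 1 = death observed, 0 = censored
for t, s, classical, loglog in km_with_ci(time, event):
    print(t, round(s, 3), classical, loglog)
```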
Purpose: We aim to extend our investigations of the Relative Intensity of Collaboration (RIC) indicator by constructing a confidence interval for the obtained values. Design/methodology/approach: We use Mantel-Haenszel statistics as applied recently by Smolinsky, Klingenberg, and Marx. Findings: We obtain confidence intervals for the RIC indicator. Research limitations: It is not obvious that data obtained from the Web of Science (or any other database) can be considered a random sample. Practical implications: We explain how to calculate confidence intervals. Bibliometric indicators are more often than not presented as precise values instead of approximations that depend on the database and the time of measurement; our approach suggests a way to address this problem. Originality/value: Our approach combines the statistics of binary categorical data with bibliometric studies of collaboration.
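The Mantel-Haenszel machinery the authors rely on is available in `statsmodels`; a small sketch of a pooled odds ratio with its confidence interval over stratified 2×2 tables (the tables are hypothetical, and mapping the RIC indicator onto such strata follows the paper, not this sketch):

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2x2 tables, one stratum per publication year:
# rows = collaborative / non-collaborative, cols = in-field / out-of-field.
tables = [np.array([[12, 30], [8, 40]]),
          np.array([[20, 55], [15, 60]]),
          np.array([[9, 25], [10, 35]])]

st = StratifiedTable(tables)
print(st.oddsratio_pooled)                      # Mantel-Haenszel pooled odds ratio
print(st.oddsratio_pooled_confint(alpha=0.05))  # 95% confidence interval
```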
This article deals with correlating two variables whose values can fall below the known limit of detection (LOD) of the measuring device; these values are known as non-detects (NDs). We use simulation to compare several methods for estimating the association between two such variables. The most commonly used method, simple substitution, consists of replacing each ND with some representative value such as LOD/2. Spearman's correlation, in which all NDs are assumed to be tied at some value just smaller than the LOD, is also used. We evaluate each method under several scenarios, including small to moderate sample size, moderate to large censoring proportions, extreme imbalance in censoring proportions, and non-bivariate normal (BVN) data. In this article, we focus on the coverage probability of 95% confidence intervals obtained using each method. Confidence intervals using a maximum likelihood approach based on the assumption of BVN data have acceptable performance under most scenarios, even with non-BVN data. Intervals based on Spearman's coefficient also perform well under many conditions. The methods are illustrated using real data taken from the biomarker literature.
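A minimal sketch of the two simplest approaches named here, LOD/2 substitution and Spearman with NDs tied just below the LOD, on simulated bivariate normal data (the LOD, correlation, and sample size are illustrative; the maximum likelihood interval method is not shown):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
# Simulated bivariate normal biomarker pair with true correlation 0.6.
mean, cov = [10.0, 10.0], [[1.0, 0.6], [0.6, 1.0]]
x, y = rng.multivariate_normal(mean, cov, size=200).T
lod = 9.5
x_nd, y_nd = x < lod, y < lod               # non-detect indicators

# Simple substitution: replace each ND with LOD/2.
x_sub = np.where(x_nd, lod / 2, x)
y_sub = np.where(y_nd, lod / 2, y)
print("substitution:", pearsonr(x_sub, y_sub))

# Spearman: NDs tied at a value just smaller than the LOD.
x_tie = np.where(x_nd, lod - 1e-9, x)
y_tie = np.where(y_nd, lod - 1e-9, y)
print("Spearman:", spearmanr(x_tie, y_tie))
```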
This paper provides methods for assessing the precision of cost elasticity estimates when the underlying regression function is assumed to be polynomial. Specifically, the paper adapts two well-known methods for computing confidence intervals for ratios: the delta method and the Fieller method. We show that performing the estimation with mean-centered explanatory variables provides a straightforward way to estimate the elasticity and compute a confidence interval for it. A theoretical discussion of the proposed methods is provided, as well as an empirical example based on publicly available postal data. Possible areas of application include postal service providers worldwide, transportation, and electricity.
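A minimal sketch of the mean-centering point and a delta-method interval for the elasticity, assuming `statsmodels` and synthetic log-cost data (the Fieller construction is not shown):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic cost data: log-cost quadratic in log-output.
logq = rng.normal(5, 0.8, 300)
logc = 2 + 0.7 * logq + 0.05 * (logq - logq.mean())**2 + rng.normal(0, 0.1, 300)

x = logq - logq.mean()                       # mean-centering the regressor
X = sm.add_constant(np.column_stack([x, x**2]))
res = sm.OLS(logc, X).fit()
b, V = res.params, res.cov_params()

# With centering, the elasticity at the sample mean is just b1,
# so its confidence interval is the ordinary coefficient CI.
print("elasticity at mean:", b[1], "CI:", res.conf_int()[1])

# Elasticity at another point x0: e(x0) = b1 + 2*b2*x0, delta-method SE.
x0 = 1.0
g = np.array([0.0, 1.0, 2 * x0])             # gradient of e(x0) w.r.t. (b0, b1, b2)
se = np.sqrt(g @ V @ g)
e = b[1] + 2 * b[2] * x0
print("elasticity at x0:", e, "CI:", (e - 1.96 * se, e + 1.96 * se))
```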
This paper presents four methods of constructing the confidence interval for the proportion p of the binomial distribution. Evidence in the literature indicates that the standard Wald confidence interval for the binomial proportion is inaccurate, especially for extreme values of p. Even for moderately large sample sizes, the coverage probabilities of the Wald confidence interval prove to be erratic for extreme values of p. Three alternative confidence intervals, namely the Wilson confidence interval, the Clopper-Pearson interval, and the likelihood interval, are compared to the Wald confidence interval on the basis of coverage probability and expected length by means of simulation.
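Three of the four intervals are available off the shelf in `statsmodels`; a small coverage sketch for an extreme p (the likelihood interval is omitted, and the settings are illustrative):

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

def coverage(p, n, method, n_sim=20_000, seed=0):
    """Empirical coverage of a nominal 95% interval for a binomial proportion."""
    rng = np.random.default_rng(seed)
    counts = rng.binomial(n, p, n_sim)
    lo, hi = proportion_confint(counts, n, alpha=0.05, method=method)
    return np.mean((lo <= p) & (p <= hi))

# 'normal' = Wald, 'wilson' = Wilson, 'beta' = Clopper-Pearson.
for method in ("normal", "wilson", "beta"):
    print(method, coverage(p=0.02, n=50, method=method))
```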
In data envelopment analysis (DEA), input and output values are subject to change for several reasons. Such variations differ across input/output items and across decision-making units (DMUs). Hence, DEA efficiency scores need to be examined with these factors in mind. In this paper, we propose new resampling models based on these variations for gauging the confidence intervals of DEA scores. The first model utilizes past-present data to estimate data variations, imposing chronological-order weights supplied by the Lucas series (a variant of the Fibonacci series). The second model deals with future prospects and aims at forecasting the future efficiency score and its confidence interval for each DMU. We applied our models to a dataset composed of Japanese municipal hospitals.
As existing heating load forecasting methods are mostly point forecasts, an interval forecasting approach based on Support Vector Regression (SVR) and interval estimation of the relative error is proposed in this paper. The forecast output can serve as the energy-saving control setting value of a heating supply substation; meanwhile, it also provides a practical basis for heat dispatching and peak-load regulation. In the proposed approach, an SVR model is used for point forecasting, and the error interval is obtained by applying nonparametric kernel estimation to the forecast errors, which avoids distributional assumptions. Combining the point forecast and the error interval yields the forecast confidence interval. Finally, the proposed model is evaluated in simulations using data from a heating supply network in Harbin, and the results show that the method can meet the demands of energy-saving control and heat dispatching.
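A minimal sketch of the point-forecast-plus-error-interval idea, assuming scikit-learn's SVR and a Gaussian kernel density estimate of the residuals (the synthetic temperature-load data and hyperparameters are illustrative; in practice the residuals would come from a validation set rather than the training fit):

```python
import numpy as np
from sklearn.svm import SVR
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic stand-in for heating-load data: load vs. outdoor temperature.
temp = rng.uniform(-20, 5, 400).reshape(-1, 1)
load = 50 - 1.8 * temp.ravel() + rng.normal(0, 3, 400)

train, test = slice(0, 300), slice(300, None)
svr = SVR(C=100.0, epsilon=0.5).fit(temp[train], load[train])

# Nonparametric kernel estimate of the forecast-error distribution,
# avoiding any parametric (e.g. Gaussian) assumption on the errors.
errors = load[train] - svr.predict(temp[train])
kde = gaussian_kde(errors)
err_lo, err_hi = np.quantile(kde.resample(100_000, seed=1).ravel(), [0.025, 0.975])

# Point forecast plus error interval = forecast confidence interval.
pred = svr.predict(temp[test])
print(np.column_stack([pred + err_lo, pred, pred + err_hi])[:3])
```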
A random parameter can be transformed into an interval number in structural analysis using the concept of the confidence interval. Hence, analyses of uncertain structural systems can be carried out in traditional FEM software. In some cases, the solution effort for stochastic structures is nearly the same as for traditional structural problems. In addition, a new method to evaluate the failure probability of structures is presented to meet the needs of modern engineering design.
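A minimal sketch of the transformation itself: replace each random parameter by its confidence-interval endpoints and propagate them by interval arithmetic through a monotonic response (the bar-stress example and 95% level are illustrative, not the paper's formulation):

```python
import numpy as np
from scipy.stats import norm

def to_interval(mean, std, level=0.95):
    """Replace a random parameter by its confidence-interval endpoints."""
    z = norm.ppf(0.5 + level / 2)
    return mean - z * std, mean + z * std

# Axial stress sigma = F / A in a bar with uncertain load F and area A.
F = to_interval(10_000.0, 500.0)     # load [N]
A = to_interval(2.0e-4, 5.0e-6)      # cross-section [m^2]

# Monotonic response: the extreme stresses occur at interval endpoints.
sigma_lo, sigma_hi = F[0] / A[1], F[1] / A[0]
print(sigma_lo / 1e6, sigma_hi / 1e6)  # stress bounds in MPa
```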
Comparing two samples with respect to parameters of their respective populations is an old and classical statistical problem. In this paper, we present a simple yet effective tool to compare two samples through their medians. We calculate the confidence of the statement "the median of the first population is strictly smaller (larger) than the median of the second." We analyze two real data sets and empirically demonstrate the quality of the confidence for such a statement. This confidence in the order of the medians is to be seen as a pre-analysis tool that can provide useful insights when comparing two or more populations. The method is based entirely on the exact distribution, with no need for asymptotic considerations. We also provide the Quor statistical software, an R package that implements the ideas discussed in this work.