Journal Articles
35 articles found
Low-Carbon Dispatch of an Integrated Energy System Considering Confidence Intervals for Renewable Energy Generation
1
Authors: Yan Shi, Wenjie Li, Gongbo Fan, Luxi Zhang, Fengjiu Yang. Energy Engineering (EI), 2024, Issue 2, pp. 461-482 (22 pages)
Addressing the insufficiency in down-regulation leeway within integrated energy systems stemming from the erratic and volatile nature of wind and solar renewable energy generation, this study focuses on formulating a coordinated strategy involving the carbon capture unit of the integrated energy system and the resources on the load storage side. A scheduling model is devised that takes into account the confidence interval associated with renewable energy generation, with the overarching goal of optimizing the system for low-carbon operation. To begin with, an in-depth analysis is conducted on the temporal energy-shifting attributes and the low-carbon modulation mechanisms exhibited by the source-side carbon capture power plant within the context of integrated and adaptable operational paradigms. Drawing from this analysis, a model is devised to represent the adjustable resources on the charge-storage side, predicated on the principles of electro-thermal coupling within the energy system. Subsequently, the dissimilarities in the confidence intervals of renewable energy generation are considered, leading to the proposition of a flexible upper threshold for the confidence interval. Building on this, a low-carbon dispatch model is established for the integrated energy system, factoring in the margin allowed by the adjustable resources. In the final phase, a simulation is performed on a regional electric heating integrated energy system. This simulation seeks to assess the impact of source-load-storage coordination on the system’s low-carbon operation across various scenarios of reduction margin reserves. The findings underscore that the proactive scheduling model incorporating confidence interval considerations for reduction margin reserves effectively mitigates the uncertainties tied to renewable energy generation. Through harmonized orchestration of source, load, and storage elements, it expands the utilization scope for renewable energy, safeguards the economic efficiency of system operations under low-carbon emission conditions, and empirically validates the soundness and efficacy of the proposed approach.
Keywords: Integrated energy system; carbon capture power plant; confidence interval; optimized scheduling
Deep learning-based evaluation of factor of safety with confidence interval for tunnel deformation in spatially variable soil (Cited: 2)
2
Authors: Jinzhang Zhang, Kok Kwang Phoon, Dongming Zhang, Hongwei Huang, Chong Tang. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2021, Issue 6, pp. 1358-1367 (10 pages)
The random finite difference method (RFDM) is a popular approach to quantitatively evaluate the influence of inherent spatial variability of soil on the deformation of embedded tunnels. However, the high computational cost is an ongoing challenge for its application in complex scenarios. To address this limitation, a deep learning-based method for efficient prediction of tunnel deformation in spatially variable soil is proposed. The proposed method uses a one-dimensional convolutional neural network (CNN) to identify the pattern between random field input and factor of safety of tunnel deformation output. The mean squared error and correlation coefficient of the CNN model applied to the newly untrained dataset were less than 0.02 and larger than 0.96, respectively. This means that the trained CNN model can replace RFDM analysis for Monte Carlo simulations with a small but sufficient number of random field samples (about 40 samples for each case in this study). Machine learning and deep learning models share a common limitation: the confidence of a predicted result is unknown and only a deterministic outcome is given. This calls for an approach to gauge the model’s confidence interval. It is achieved by applying dropout to all layers of the original model, retraining the model, and using the dropout technique when performing inference. The excellent agreement between the CNN model predictions and the RFDM calculated results demonstrates that the proposed deep learning-based method has potential for tunnel performance analysis in spatially variable soils.
Keywords: Deep learning; Convolutional neural network (CNN); Tunnel safety; confidence interval; Random field
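The confidence-interval step in this entry relies on Monte Carlo dropout: keeping dropout active at inference and treating repeated stochastic forward passes as samples of the prediction. Below is a minimal PyTorch sketch of that idea; the network architecture, input shape, and layer sizes are illustrative placeholders, not the authors' 1D-CNN surrogate.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical 1D-CNN surrogate; the real model maps random-field samples to a factor of safety.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Dropout(p=0.1),          # dropout layer kept active at inference
    nn.Flatten(),
    nn.Linear(8 * 64, 1),
)

def mc_dropout_interval(model, x, n_passes=200, alpha=0.05):
    """Prediction mean and percentile interval from stochastic forward passes (MC dropout)."""
    model.train()               # keep dropout layers stochastic during inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)]).squeeze(-1).numpy()
    lo, hi = np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return preds.mean(axis=0), lo, hi

x = torch.randn(4, 1, 64)       # 4 illustrative random-field realizations of length 64
mean, lo, hi = mc_dropout_interval(model, x)
print(mean, lo, hi)
```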
Confidence Intervals for Relative Intensity of Collaboration (RIC) Indicators
3
Authors: Joel Emanuel Fuchs, Lawrence Smolinsky, Ronald Rousseau. Journal of Data and Information Science (CSCD), 2022, Issue 4, pp. 5-15 (11 pages)
Purpose: We aim to extend our investigations related to the Relative Intensity of Collaboration (RIC) indicator by constructing a confidence interval for the obtained values. Design/methodology/approach: We use Mantel-Haenszel statistics as applied recently by Smolinsky, Klingenberg, and Marx. Findings: We obtain confidence intervals for the RIC indicator. Research limitations: It is not obvious that data obtained from the Web of Science (or any other database) can be considered a random sample. Practical implications: We explain how to calculate confidence intervals. Bibliometric indicators are more often than not presented as precise values instead of an approximation depending on the database and the time of measurement. Our approach presents a suggestion to solve this problem. Originality/value: Our approach combines the statistics of binary categorical data and bibliometric studies of collaboration.
Keywords: Contingency tables; confidence intervals; Relative intensity of collaboration (RIC); Mantel-Haenszel statistics; Science of science
Confidence Interval Estimation of the Correlation in the Presence of Non-Detects
4
Authors: Courtney E. McCracken, Stephen W. Looney. Open Journal of Statistics, 2021, Issue 3, pp. 463-475 (13 pages)
This article deals with correlating two variables that have values that fall below the known limit of detection (LOD) of the measuring device; these values are known as non-detects (NDs). We use simulation to compare several methods for estimating the association between two such variables. The most commonly used method, simple substitution, consists of replacing each ND with some representative value such as LOD/2. Spearman’s correlation, in which all NDs are assumed to be tied at some value just smaller than the LOD, is also used. We evaluate each method under several scenarios, including small to moderate sample size, moderate to large censoring proportions, extreme imbalance in censoring proportions, and non-bivariate normal (BVN) data. In this article, we focus on the coverage probability of 95% confidence intervals obtained using each method. Confidence intervals using a maximum likelihood approach based on the assumption of BVN data have acceptable performance under most scenarios, even with non-BVN data. Intervals based on Spearman’s coefficient also perform well under many conditions. The methods are illustrated using real data taken from the biomarker literature.
Keywords: confidence interval; Coverage Probability; Left Censoring; Limit of Detection; Maximum Likelihood; Spearman Correlation
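Two of the estimators compared in this entry (LOD/2 substitution followed by Pearson's r, and Spearman's rho with non-detects tied just below the LOD) are easy to reproduce. A hedged sketch using SciPy on synthetic data follows; the LOD value, sample size, and data-generating model are chosen only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lod = 0.5                                    # illustrative limit of detection
x_true = rng.lognormal(size=200)
y_true = 0.6 * x_true + rng.lognormal(size=200)

# Left-censor both variables at the LOD (values below the LOD become non-detects).
x_obs = np.where(x_true < lod, np.nan, x_true)
y_obs = np.where(y_true < lod, np.nan, y_true)

# Simple substitution: replace each non-detect by LOD/2, then Pearson correlation.
x_sub = np.where(np.isnan(x_obs), lod / 2, x_obs)
y_sub = np.where(np.isnan(y_obs), lod / 2, y_obs)
r_sub, _ = stats.pearsonr(x_sub, y_sub)

# Spearman: tie all non-detects at a value just below the LOD before ranking.
eps = 1e-9
x_sp = np.where(np.isnan(x_obs), lod - eps, x_obs)
y_sp = np.where(np.isnan(y_obs), lod - eps, y_obs)
rho, _ = stats.spearmanr(x_sp, y_sp)

print(f"substitution Pearson r = {r_sub:.3f}, Spearman rho = {rho:.3f}")
```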
Computing Confidence Intervals for the Postal Service’s Cost-Elasticity Estimates
5
Authors: Lyudmila Y. Bzhilyanskaya, Margaret M. Cigno, Soiliou D. Namoro. Open Journal of Statistics, 2021, Issue 5, pp. 607-619 (13 pages)
This paper provides methods for assessing the precision of cost elasticity estimates when the underlying regression function is assumed to be polynomial. Specifically, the paper adapts two well-known methods for computing confidence intervals for ratios: the delta method and the Fieller method. We show that performing the estimation with mean-centered explanatory variables provides a straightforward way to estimate the elasticity and compute a confidence interval for it. A theoretical discussion of the proposed methods is provided, as well as an empirical example based on publicly available postal data. Possible areas of application include postal service providers worldwide, transportation, and electricity.
Keywords: Volume Variability; confidence interval; Ratio Parameter; Delta Method; Fieller Method
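For a generic ratio of two estimated coefficients, the delta-method interval mentioned in this entry can be computed directly from the coefficient estimates and their covariance matrix. The sketch below is a general-purpose illustration, not the paper's mean-centered elasticity model; the numerical values are placeholders.

```python
import numpy as np
from scipy import stats

def delta_method_ratio_ci(b, cov, conf=0.95):
    """Wald/delta-method confidence interval for the ratio theta = b[0] / b[1]."""
    theta = b[0] / b[1]
    # Gradient of theta with respect to (b0, b1).
    grad = np.array([1.0 / b[1], -b[0] / b[1] ** 2])
    se = np.sqrt(grad @ cov @ grad)
    z = stats.norm.ppf(0.5 + conf / 2)
    return theta, theta - z * se, theta + z * se

# Placeholder estimates: numerator/denominator coefficients and their covariance.
b = np.array([2.4, 3.1])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
print(delta_method_ratio_ci(b, cov))
```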
Confidence Intervals for the Binomial Proportion: A Comparison of Four Methods
6
Authors: Luke Akong’o Orawo. Open Journal of Statistics, 2021, Issue 5, pp. 806-816 (11 pages)
This paper presents four methods of constructing the confidence interval for the proportion p of the binomial distribution. Evidence in the literature indicates that the standard Wald confidence interval for the binomial proportion is inaccurate, especially for extreme values of p. Even for moderately large sample sizes, the coverage probabilities of the Wald confidence interval prove to be erratic for extreme values of p. Three alternative confidence intervals, namely the Wilson confidence interval, the Clopper-Pearson interval, and the likelihood interval, are compared to the Wald confidence interval on the basis of coverage probability and expected length by means of simulation.
Keywords: Binomial Distribution; confidence interval; Coverage Probability; Expected Length; Relative Likelihood Function
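Three of the intervals compared in this entry have standard closed forms. As a brief sketch, the Python function below computes the Wald, Wilson, and Clopper-Pearson intervals for a given count x out of n, using SciPy for the normal and beta quantiles; the example values are illustrative.

```python
import numpy as np
from scipy import stats

def binomial_cis(x, n, conf=0.95):
    """Wald, Wilson and Clopper-Pearson intervals for a binomial proportion."""
    z = stats.norm.ppf(0.5 + conf / 2)
    p = x / n

    # Wald: p_hat +/- z * sqrt(p_hat (1 - p_hat) / n)
    wald = (p - z * np.sqrt(p * (1 - p) / n),
            p + z * np.sqrt(p * (1 - p) / n))

    # Wilson (score) interval
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    wilson = (centre - half, centre + half)

    # Clopper-Pearson (exact) interval via beta quantiles
    alpha = 1 - conf
    lower = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
    clopper_pearson = (lower, upper)

    return {"wald": wald, "wilson": wilson, "clopper_pearson": clopper_pearson}

print(binomial_cis(x=3, n=50))
```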
DEA Scores’ Confidence Intervals with Past-Present and Past-Present-Future Based Resampling
7
Authors: Kaoru Tone, Jamal Ouenniche. American Journal of Operations Research, 2016, Issue 2, pp. 121-135 (15 pages)
In data envelopment analysis (DEA), input and output values are subject to change for several reasons. Such variations differ in their input/output items and their decision-making units (DMUs). Hence, DEA efficiency scores need to be examined by considering these factors. In this paper, we propose new resampling models based on these variations for gauging the confidence intervals of DEA scores. The first model utilizes past-present data for estimating data variations, imposing chronological order weights supplied by the Lucas series (a variant of the Fibonacci series). The second model deals with future prospects. This model aims at forecasting the future efficiency score and its confidence interval for each DMU. We applied our models to a dataset composed of Japanese municipal hospitals.
Keywords: Data Variation; Resampling; confidence interval; Past-Present-Future; DEA; Hospital
Confidence intervals for Markov chain transition probabilities based on next generation sequencing reads data
8
Authors: Lin Wan, Xin Kang, Jie Ren, Fengzhu Sun. Quantitative Biology (CAS, CSCD), 2020, Issue 2, pp. 143-154 (12 pages)
Background: Markov chains (MC) have been widely used to model molecular sequences. The estimation of the MC transition matrix and of confidence intervals for the transition probabilities from long sequence data has been intensively studied in the past decades. In next generation sequencing (NGS), a large number of short reads are generated. These short reads can overlap, and some regions of the genome may not be sequenced, resulting in a new type of data. Based on NGS data, the transition probabilities of the MC can be estimated by moment estimators. However, the classical asymptotic distribution theory for MC transition probability estimators based on long sequences is no longer valid. Methods: In this study, we present the asymptotic distributions of several statistics related to MC based on NGS data. We show that, after scaling by the effective coverage d defined in a previous study by the authors, these statistics based on NGS data approximate the same distributions as the corresponding statistics for long sequences. Results: We apply the asymptotic properties of these statistics to find theoretical confidence regions for MC transition probabilities based on NGS short reads data. We validate our theoretical confidence intervals using both simulated data and real data sets, and compare the results with those obtained by the parametric bootstrap method. Conclusions: We find that the asymptotic distributions of these statistics and the theoretical confidence intervals of transition probabilities based on NGS data given in this study are highly accurate, providing a powerful tool for NGS data analysis.
Keywords: Markov chains; next generation sequencing; transition probabilities; confidence intervals
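Setting the NGS-specific scaling aside, the basic object here is an interval for each transition probability estimated from transition counts. The sketch below builds the moment estimator p̂_ij = n_ij / n_i· and a simple normal-approximation interval from a toy count matrix; the effective-coverage correction d described in the paper is not reproduced.

```python
import numpy as np
from scipy import stats

def transition_prob_cis(counts, conf=0.95):
    """Moment estimates and Wald-type CIs for Markov transition probabilities."""
    counts = np.asarray(counts, dtype=float)
    row_totals = counts.sum(axis=1, keepdims=True)
    p_hat = counts / row_totals
    se = np.sqrt(p_hat * (1 - p_hat) / row_totals)
    z = stats.norm.ppf(0.5 + conf / 2)
    return p_hat, np.clip(p_hat - z * se, 0, 1), np.clip(p_hat + z * se, 0, 1)

# Toy transition counts for a 3-state chain (states are placeholders, not real sequence data).
counts = np.array([[120,  30,  50],
                   [ 40, 200,  60],
                   [ 70,  20, 110]])
p_hat, lower, upper = transition_prob_cis(counts)
print(np.round(lower, 3), np.round(upper, 3), sep="\n")
```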
Bootstrap generated confidence interval for time averaged measure
9
Authors: Jinsoo Park, Haneul Lee, Yun Bae Kim. International Journal of Modeling, Simulation, and Scientific Computing (EI), 2015, Issue 3, pp. 107-115 (9 pages)
In simulation output analysis, some measures, such as the mean queue length, must be calculated as time averages. In particular, confidence intervals for those measures may be required for statistical analysis. In this situation, the traditional method that relies on the central limit theorem (CLT) is inapplicable if the output data have an autocorrelation structure. The bootstrap is one of the most suitable methods for reflecting autocorrelated phenomena in statistical analysis. Therefore, the confidence interval for a time-averaged measure with an autocorrelation structure can also be calculated by bootstrap methods. This study introduces a method that constructs these confidence intervals by applying the bootstrap. The bootstraps proposed are the threshold bootstrap (TB), the moving block bootstrap (MBB), and the stationary bootstrap (SB). Finally, some numerical examples are provided for verification.
Keywords: Simulation output analysis; confidence interval; time averaged measure; bootstrap
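Of the three resampling schemes named in this entry, the moving block bootstrap is the most straightforward to sketch: resample fixed-length blocks of the autocorrelated output series and take percentile limits of the resampled time averages. The block length, replication count, and AR(1) test series below are illustrative choices, not the paper's settings.

```python
import numpy as np

def moving_block_bootstrap_ci(x, block_len=20, n_boot=2000, conf=0.95, seed=0):
    """Percentile CI for the time average of an autocorrelated series (moving block bootstrap)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        resample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        means[b] = resample.mean()
    alpha = 1 - conf
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative autocorrelated output: an AR(1) series standing in for queue-length observations.
rng = np.random.default_rng(1)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(moving_block_bootstrap_ci(x))
```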
Constructing Confidence Regions for Autoregressive-Model Parameters
10
Authors: Jan Vrbik. Applied Mathematics, 2023, Issue 10, pp. 704-717 (14 pages)
We discuss formulas and techniques for finding maximum-likelihood estimators of parameters of autoregressive (with particular emphasis on Markov and Yule) models, computing their asymptotic variance-covariance matrix and displaying the resulting confidence regions; Monte Carlo simulation is then used to establish the accuracy of the corresponding level of confidence. The results indicate that a direct application of the Central Limit Theorem yields errors too large to be acceptable; instead, we recommend using a technique based directly on the natural logarithm of the likelihood function, verifying its substantially higher accuracy. Our study is then extended to the case of estimating only a subset of a model’s parameters, when the remaining ones (called nuisance parameters) are of no interest to us.
Keywords: Markov, Yule and Autoregressive Models; Maximum Likelihood Function; Asymptotic Variance-Covariance Matrix; confidence intervals; Nuisance Parameters
Traffic Flow Data Forecasting Based on Interval Type-2 Fuzzy Sets Theory (Cited: 4)
11
Authors: Runmei Li, Chaoyang Jiang, Fenghua Zhu, Xiaolong Chen. IEEE/CAA Journal of Automatica Sinica (SCIE, EI), 2016, Issue 2, pp. 141-148 (8 pages)
This paper proposes a long-term forecasting scheme and implementation method based on interval type-2 fuzzy sets theory for traffic flow data. Type-2 fuzzy sets have advantages in modeling uncertainties because their membership functions are themselves fuzzy. The scheme includes a traffic flow data preprocessing module, a type-2 fuzzification operation module, and a long-term traffic flow data forecasting output module, in which the Interval Approach acts as the core algorithm. The central limit theorem is adopted to convert point data of mass traffic flow in a given time range into interval data for the same time range (also called confidence interval data), which are used as the input of the Interval Approach. The confidence interval data retain the uncertainty and randomness of traffic flow while reducing the influence of noise in the detection data. The proposed scheme produces not only the traffic flow forecast but also the possible range of traffic flow variation with high precision, using upper and lower limit forecasting results. The effectiveness of the proposed scheme is verified using an actual sample application.
Keywords: interval type-2 fuzzy sets; central limit theorem; confidence interval; long-term prediction
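The preprocessing step described in this entry, turning high-frequency point counts into per-window confidence-interval data via the central limit theorem, can be sketched in a few lines. The window length and confidence level below are placeholders, and the downstream type-2 fuzzification is not shown.

```python
import numpy as np
from scipy import stats

def window_confidence_intervals(counts, window=12, conf=0.95):
    """Aggregate point traffic counts into per-window CLT-based confidence intervals."""
    counts = np.asarray(counts, dtype=float)
    n_win = len(counts) // window
    intervals = []
    for w in counts[:n_win * window].reshape(n_win, window):
        m = w.mean()
        se = w.std(ddof=1) / np.sqrt(window)
        half = stats.t.ppf(0.5 + conf / 2, df=window - 1) * se
        intervals.append((m - half, m + half))
    return intervals

# Illustrative 5-minute vehicle counts over two hours (24 points, 12 per window).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=80, size=24)
print(window_confidence_intervals(counts, window=12))
```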
Heating load interval forecasting approach based on support vector regression and error estimation
12
Authors: 张永明 (Zhang Yongming), 于德亮 (Yu Deliang), 齐维贵 (Qi Weigui). Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2011, Issue 4, pp. 94-98 (5 pages)
As existing heating load forecasting methods are mostly point forecasts, an interval forecasting approach based on Support Vector Regression (SVR) and interval estimation of the relative error is proposed in this paper. The forecasting output can be used as the energy-saving control setting value of a heating supply substation; it can also provide a practical basis for heating dispatching and peak load regulating operation. In the proposed approach, an SVR model is used for point forecasting, and the error interval is obtained by applying nonparametric kernel estimation to the forecast error, which avoids distributional assumptions. Combining the point forecasting results with the error interval yields the forecast confidence interval. Finally, the proposed model is evaluated through simulations on data from a heating supply network in Harbin, and the results show that the method can meet the demands of energy-saving control and heating dispatching.
Keywords: heating supply; energy-saving; load forecasting; support vector regression; nonparametric kernel estimation; confidence interval
Bayesian Computation for the Parameters of a Zero-Inflated Cosine Geometric Distribution with Application to COVID-19 Pandemic Data
13
Authors: Sunisa Junnumtuam, Sa-Aat Niwitpong, Suparat Niwitpong. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 5, pp. 1229-1254 (26 pages)
A new three-parameter discrete distribution called the zero-inflated cosine geometric (ZICG) distribution is proposed for the first time herein. It can be used to analyze over-dispersed count data with excess zeros. The basic statistical properties of the new distribution, such as the moment generating function, mean, and variance, are presented. Furthermore, confidence intervals are constructed by using the Wald, Bayesian, and highest posterior density (HPD) methods to estimate the true confidence intervals for the parameters of the ZICG distribution. Their efficacies were investigated by using both simulation and real-world data comprising the number of daily COVID-19 positive cases at the Olympic Games in Tokyo 2020. The results show that the HPD interval performed better than the other methods in terms of coverage probability and average length in most cases studied.
Keywords: Bayesian analysis; confidence interval; Gibbs sampling; random-walk Metropolis; zero-inflated count data
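Given posterior draws (for example from Gibbs sampling or random-walk Metropolis, as named in the keywords), the highest posterior density interval favored in this entry is simply the shortest interval containing the requested posterior mass. A generic sketch on sorted samples follows; the gamma posterior used here is only a stand-in, not the ZICG posterior.

```python
import numpy as np

def hpd_interval(samples, cred=0.95):
    """Shortest interval containing `cred` of the posterior draws (HPD interval)."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(cred * n))            # number of draws inside the interval
    widths = s[k - 1:] - s[: n - k + 1]   # width of every candidate interval
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

# Stand-in posterior draws for a rate-like parameter.
draws = np.random.default_rng(0).gamma(shape=2.0, scale=0.5, size=20_000)
print(hpd_interval(draws))
```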
Role of IL-17 gene polymorphisms in osteoarthritis: A meta-analysis based on observational studies (Cited: 3)
14
Authors: Hao-Yu Yang, Yu-Zhou Liu, Xin-Die Zhou, Yong Huang, Nan-Wei Xu. World Journal of Clinical Cases (SCIE), 2020, Issue 11, pp. 2280-2293 (14 pages)
BACKGROUND: Osteoarthritis (OA) is a chronic, complex, multifactorial joint disease and a major degenerative form of arthritis. Existing studies on the association between polymorphisms of the IL-17 gene and the risk of OA in different populations have yielded conflicting findings. AIM: To investigate the association between polymorphisms of the IL-17 gene and the risk of OA. METHODS: We conducted a meta-analysis by systematically searching databases, including PubMed, EMBASE, MEDLINE, Cochrane Library, and Google Scholar, to evaluate this association by calculating pooled odds ratios with 95% confidence intervals. Moreover, subgroup analyses stratified by ethnicity and OA type were also conducted. RESULTS: In a total of 6 citations involving 8 studies (2131 cases and 2299 controls), 4 single nucleotide polymorphisms were identified. Of these 4 polymorphisms, 2 (rs2275913, rs763780) were common in five case-control studies. Together, the pooled results revealed that the A allele and genotype AA/GA of the rs2275913 polymorphism, and the C allele and genotype CC of the rs763780 polymorphism in the IL-17 gene, increased the risk of OA. Furthermore, stratification analyses by ethnicity and OA type showed that the rs2275913 polymorphism increased the risk of OA among Asians and in knee/hip OA, respectively. In addition, stratification analyses also revealed that the rs763780 polymorphism increased OA risk among both Asians and Caucasians in knee/hip OA. CONCLUSION: The rs763780 polymorphism of the IL-17F gene increased the risk of OA, whereas the rs2275913 polymorphism of the IL-17A gene increased the risk of OA only among Asians. Due to the limitations of this study, these findings should be validated in future studies.
Keywords: Interleukin-17; Polymorphism; Osteoarthritis; Meta-analysis; Odds ratio; confidence interval
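Pooled odds ratios with 95% confidence intervals of the kind reported in this entry are commonly obtained by inverse-variance combination of per-study log odds ratios. The sketch below illustrates a fixed-effect version on made-up 2x2 tables; it is not the authors' software pipeline, and the counts are not the study's data.

```python
import numpy as np
from scipy import stats

def pooled_or_fixed_effect(tables, conf=0.95):
    """Inverse-variance (fixed-effect) pooled odds ratio from 2x2 tables [a, b, c, d]."""
    tables = np.asarray(tables, dtype=float) + 0.5   # simple continuity correction
    a, b, c, d = tables.T
    log_or = np.log(a * d / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    w = 1 / var
    pooled = np.sum(w * log_or) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    z = stats.norm.ppf(0.5 + conf / 2)
    return np.exp(pooled), np.exp(pooled - z * se), np.exp(pooled + z * se)

# Made-up tables: [exposed cases, unexposed cases, exposed controls, unexposed controls]
tables = [[60, 140, 45, 155],
          [80, 170, 60, 190],
          [55, 120, 40, 135]]
print(pooled_or_fixed_effect(tables))
```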
Sources of inaccuracy when estimating economically optimum N fertilizer rates (Cited: 1)
15
Authors: Martin Bachmaier. Agricultural Sciences, 2012, Issue 3, pp. 331-338 (8 pages)
Nitrogen rate trials are often performed to determine the economically optimum N application rate. For this purpose, the yield is modeled as a function of the N application. The regression analysis provides an estimate of the modeled function and thus also an estimate of the economic optimum, Nopt. Assessing the accuracy of such estimates via confidence intervals for Nopt depends on the model assumptions; this dependence is a further source of inaccuracy. The Nopt estimate also strongly depends on the N level design, i.e., the range over which the model is fitted. A small range around the supposed Nopt diminishes the dependence on the model assumptions but lengthens the confidence interval. The investigation of the impact of these sources on the inaccuracy of the Nopt estimate relies on N rate trials on the experimental field Sieblerfeld (Bavaria). The models applied are the quadratic and the linear-plus-plateau yield regression models.
Keywords: confidence interval; Economic Optimum; N Rate Trials; Quadratic Model; Linear-plus-Plateau Model
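With a quadratic yield model y = b0 + b1·N + b2·N², the economic optimum is where the marginal yield value equals the N price, i.e. Nopt = (r - b1) / (2·b2) with r the price ratio of N to yield. The sketch below fits such a model and evaluates Nopt on synthetic data; the prices and the data-generating curve are illustrative, and no confidence interval for Nopt is computed here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic N rate trial: quadratic yield response plus noise (illustrative values only).
n_rates = np.repeat(np.arange(0, 241, 40), 4)                   # kg N/ha, 4 replicates
yields = 4000 + 40 * n_rates - 0.12 * n_rates**2 + rng.normal(0, 150, n_rates.size)

# Fit the quadratic model y = b0 + b1*N + b2*N^2 (polyfit returns highest degree first).
b2, b1, b0 = np.polyfit(n_rates, yields, deg=2)

# Economic optimum: marginal yield equals the price ratio, dy/dN = r.
price_n, price_yield = 1.0, 0.18                                # illustrative prices
r = price_n / price_yield
n_opt = (r - b1) / (2 * b2)
print(f"estimated Nopt = {n_opt:.1f} kg N/ha")
```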
Revaluation of Clopidogrel: Let the Data Speak for Themselves
16
Authors: 刘力, 曾繁典, 曾晓华, 薛清梅, 聂绍平, 康彩练, 吴健鸿, 康庆云, 王新高, 刘小青, 李涛, 陈军, 李青, 徐戎, 杨晓燕, 康辉, 姜发刚, 李宗桃, 汪绪武, 张力, 龙玉. Journal of Huazhong University of Science and Technology (Medical Sciences) (SCIE, CAS), 2010, Issue 3, pp. 299-306 (8 pages)
Clopidogrel was believed to be superior to aspirin on the basis of the well-known CAPRIE trial. However, no other large clinical trials demonstrated the same results; instead, all focused on the combination use of clopidogrel with aspirin, and the combination therapy in CREDO was called the "Emperor's New Clothes". However, no one has overturned the results of these clinical trials by quantitatively analyzing them. We reviewed ten large-scale clinical trials of clopidogrel. On the basis of the results of the CAPRIE, CREDO, and CHARISMA trials, we re-estimated their minimal sample sizes and their powers using three well-established statistical methodologies. From the results of CAPRIE, we inferred that the minimal sample size should be 85,086 or 84,968, but its power was only 30.70%; a huge gap existed. The same was also true of the CREDO and CHARISMA trials. Moreover, in the CAPRIE trial, 0 was included in the 95% confidence interval, and 1 was included in the 95% confidence interval for the relative risk. There were some paradoxical data in the CAPRIE trial. We are led to conclude that the results of the CAPRIE and CREDO trials, and from the subgroup analysis in the CHARISMA trial, were questionable. These results failed to demonstrate that clopidogrel was superior to aspirin or that clopidogrel used in combination with aspirin was better than aspirin alone. The cost-effectiveness analyses by some previous studies were not reliable.
Keywords: Clopidogrel; Aspirin; antiplatelet therapy; randomized blinded trial; sample size; power; confidence interval
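The sample-size re-estimation described in this entry rests on the standard calculation for comparing two independent proportions. A hedged sketch of that calculation is given below; the event rates and the relative risk reduction are placeholders rather than the CAPRIE figures, and the paper's three specific methodologies are not reproduced.

```python
import math
from scipy import stats

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.90):
    """Sample size per arm for a two-sided test comparing two independent proportions."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Placeholder annual event rates with a small relative difference between arms.
p_control = 0.058
p_treated = 0.058 * 0.92
print(n_per_group_two_proportions(p_control, p_treated))
```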
Topp-Leone Odd Fréchet Generated Family of Distributions with Applications to COVID-19 Data Sets
17
Authors: Sanaa Al-Marzouki, Farrukh Jamal, Christophe Chesneau, Mohammed Elgarhy. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, Issue 10, pp. 437-458 (22 pages)
Recent studies have pointed out the potential of the odd Fréchet family (or class) of continuous distributions in fitting data of all kinds. In this article, we propose an extension of this family through the so-called "Topp-Leone strategy", aiming to improve its overall flexibility by adding a shape parameter. The main objective is to offer original distributions with modifiable properties, from which adaptive and pliant statistical models can be derived. For the new family, these aspects are illustrated by means of comprehensive mathematical and numerical results. In particular, we emphasize a special distribution with three parameters based on the exponential distribution. The related model is shown to be skillful in fitting various lifetime data, more or less heterogeneous. Among all the possible applications, we consider two data sets of current interest, linked to the COVID-19 pandemic. They concern daily cases confirmed and recovered in Pakistan from March 24 to April 28, 2020. As a result of our analyses, the proposed model yields the best fitting results in comparison to serious challengers, including the former odd Fréchet model.
Keywords: General family of distributions; asymmetric distributions; probabilistic properties; parametric estimation; confidence intervals; COVID-19 pandemic; data analysis
Analysis of Naval Ship Evacuation Using Stochastic Simulation Models and Experimental Data Sets
18
Authors: Roberto Bellas, Javier Martínez, Ignacio Rivera, Ramón Touza, Miguel Gómez, Rafael Carreño. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, Issue 3, pp. 971-995 (25 pages)
The study of emergency evacuation in public spaces, buildings, and large ships may present parallel characteristics in terms of layout complexity, but there are also significant differences that can hinder passengers from reaching muster stations or the lifeboats. There are many hazards on a ship that can cause an emergency evacuation; the most severe result in loss of lives. Providing safe and effective evacuation of passengers from ships in an emergency situation therefore becomes critical. Recently, computer simulation has become an indispensable technology in various fields, among them evacuation models, which have recently evolved to incorporate human behavioral factors. In this work, an analysis of evacuation in a Landing Helicopter Dock (LHD) ship was conducted. Escape routes specified by the ship’s procedures were introduced in the model, and the six emergency scenarios of the Naval Ship Code were simulated. The crew and embarked troops were introduced with their different evacuation behaviors; in addition, walking speeds were extracted from data sets collected in experiments conducted on other warships. From the results of the simulations, the longest time was chosen and confidence intervals were constructed to determine the total evacuation time. Finally, the results show that the evacuation time meets regulatory requirements and demonstrate the usefulness and low cost of evacuation simulation for testing and refining possible ships’ layouts and emergency scenarios.
Keywords: Evacuation modeling; confidence intervals; naval ship; IMO; NATO; NSC
Bayesian Analysis in Partially Accelerated Life Tests for Weighted Lomax Distribution
19
Authors: Rashad Bantan, Amal S. Hassan, Ehab Almetwally, M. Elgarhy, Farrukh Jamal, Christophe Chesneau, Mahmoud Elsehetry. Computers, Materials & Continua (SCIE, EI), 2021, Issue 9, pp. 2859-2875 (17 pages)
Accelerated life testing has been widely used in product life testing experiments because it can quickly provide information on lifetime distributions by testing products or materials at higher than basic conditional levels of stress, such as pressure, temperature, vibration, voltage, or load, to induce early failures. In this paper, a step-stress partially accelerated life test (SSPALT) is considered under progressive type-II censored data with random removals. The removals from the test are assumed to follow the binomial distribution. The lifetimes of the tested items are assumed to follow the length-biased weighted Lomax distribution. The maximum likelihood method is used for estimating the model parameters of the length-biased weighted Lomax. The asymptotic confidence interval estimates of the model parameters are evaluated using the Fisher information matrix. The Bayesian estimators cannot be obtained in explicit form, so the Markov chain Monte Carlo method is employed to address this problem, which ensures both obtaining the Bayesian estimates and constructing the credible intervals of the involved parameters. The precision of the Bayesian estimates and the maximum likelihood estimates is compared by simulations. In addition, the performance of the considered confidence intervals is compared for different parameter values and sample sizes. The bootstrap confidence intervals give more accurate results than the approximate confidence intervals, since the lengths of the former are less than the lengths of the latter, for different sample sizes, observed failures, and censoring schemes, in most cases. Also, the percentile bootstrap confidence intervals give more accurate results than the bootstrap-t intervals, since the lengths of the former are less than the lengths of the latter, for different sample sizes, observed failures, and censoring schemes, in most cases. Further performance comparison is conducted through experiments with real data.
Keywords: Partially accelerated life testing; progressive type-II censoring; length-biased weighted Lomax; Bayesian and bootstrap confidence intervals
An Alternative to the Marshall-Olkin Family of Distributions: Bootstrap, Regression and Applications
20
Authors: Christophe Chesneau, Kadir Karakaya, Hassan S. Bakouch, Coskun Kus. Communications on Applied Mathematics and Computation, 2022, Issue 4, pp. 1229-1257 (29 pages)
This paper introduces a new rich family of distributions based on mixtures and the so-called Marshall-Olkin family of distributions. It includes a wide variety of well-established mixture distributions, ensuring a high ability for data fitting. Some distributional properties are derived for the general family. The Weibull distribution is then considered as the baseline, yielding a pliant four-parameter lifetime distribution. Five estimation methods for the related parameters are discussed. Bootstrap confidence intervals are also considered for these parameters. The distribution is reparametrized with location-scale parameters and is used for a lifetime regression analysis. An extensive simulation is carried out on the estimation methods for the distribution parameters and the regression model parameters. Applications are given to two practical data sets to illustrate the applicability of the new family.
Keywords: Marshall-Olkin family of distributions; Estimation; confidence intervals; Bootstrap; Data analysis