New armament systems are subjected to multi-stage reliability-growth testing in order to improve reliability before mass production begins, which raises statistical problems involving diverse populations. Because the development testing of complex systems is expensive and yields only small samples, this paper studies specific methods for processing the statistical information of Bayesian reliability growth across diverse populations. First, according to the characteristics of reliability growth during product development, the Bayesian method is used to integrate multi-stage test information with the order relations of the distribution parameters. A Gamma-Beta prior distribution is then proposed, based on the non-homogeneous Poisson process (NHPP) corresponding to the reliability growth process. The posterior distribution of the reliability parameters is obtained for each development stage of the product, and the reliability parameters are evaluated from this posterior. Finally, the proposed Bayesian approach for multi-stage reliability growth testing is applied to a small-sample test process in the astronautics field. The results of a numerical example show that the presented model can synthesize the diverse information and paves the way for applying the Bayesian model to multi-stage reliability growth test evaluation with small samples. The method is useful for evaluating multi-stage system reliability and for making rational reliability growth plans. Funding: supported by the Sustentation Program of National Ministries and Commissions of China (Grants No. 51319030302 and No. 9140A19030506KG0166).
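For a concrete sense of how Bayesian updating pools multi-stage test information, the sketch below applies conjugate Gamma updating to the scale parameter of a power-law (Crow/AMSAA-type) NHPP, a common reliability-growth intensity. This is only an illustrative stand-in for the paper's Gamma-Beta prior construction; the prior, the growth exponent beta, and the stage data are all assumed values.

```python
# Conjugate Gamma updating for the scale parameter alpha of a power-law
# NHPP with intensity lambda(t) = alpha*beta*t**(beta - 1), beta known.
# For n failures observed on (0, T], the likelihood of alpha is
# proportional to alpha**n * exp(-alpha * T**beta), so a Gamma(a, b)
# prior on alpha gives a Gamma(a + n, b + T**beta) posterior.

def gamma_posterior(a, b, n_failures, T, beta):
    """Posterior Gamma (shape, rate) for alpha after one test stage."""
    return a + n_failures, b + T ** beta

# Stage-by-stage pooling: each stage's posterior becomes the next
# stage's prior (each stage treated as a fresh test of length T; the
# paper's order relations between stages are omitted here).
a, b, beta = 1.0, 1.0, 0.6                          # assumed prior and exponent
for n, T in [(8, 100.0), (5, 100.0), (2, 100.0)]:   # (failures, hours) per stage
    a, b = gamma_posterior(a, b, n, T, beta)
    print(f"posterior mean of alpha: {a / b:.4f}")
```

The decreasing posterior mean of alpha across stages is what reliability growth looks like in this parameterization.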
To address the difficulty of solving the improved non-homogeneous Poisson process (NHPP) model in engineering applications, an immune clone maximum likelihood estimation (MLE) method for the model parameters is proposed. Instead of solving a complex system of equations iteratively, the minimum negative log-likelihood function is used as the objective function, and the parameter estimation problem of the improved NHPP model is solved by an immune clone algorithm. Interval estimates of the reliability indices are then obtained using the Fisher information matrix and the delta method. An example of failure-truncated data from multiple numerical control (NC) machine tools is used to validate the method, and the results show that the algorithm has a higher convergence rate and computational accuracy, demonstrating its feasibility. Funding: National CNC Special Project, China (No. 2010ZX04001-032) and the Youth Science and Technology Foundation of Gansu Province, China (No. 145RJYA307).
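As a minimal illustration of the estimation strategy, the sketch below fits a power-law NHPP by direct numerical minimization of the negative log-likelihood; scipy's Nelder-Mead optimizer stands in for the paper's immune clone algorithm, and the failure times are made up.

```python
import numpy as np
from scipy.optimize import minimize

# NHPP fitting by minimizing the negative log-likelihood. For failure
# times t_1..t_n observed on (0, T], an NHPP with intensity lambda(t)
# and mean value m(t) has  log L = sum_i log(lambda(t_i)) - m(T).
# A power-law (Crow/AMSAA) intensity is assumed for illustration.

def neg_log_lik(params, times, T):
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf                      # keep the search in the valid region
    return -(np.sum(np.log(alpha * beta * times ** (beta - 1)))
             - alpha * T ** beta)

times = np.array([5.2, 11.0, 19.8, 33.1, 52.7, 81.4])   # made-up failure times
T = 100.0
res = minimize(neg_log_lik, x0=[0.1, 1.0], args=(times, T),
               method="Nelder-Mead")
alpha_hat, beta_hat = res.x
print(f"alpha = {alpha_hat:.4f}, beta = {beta_hat:.4f}")
```

Interval estimates in the style the abstract describes would then come from the observed Fisher information (the Hessian of this objective at the optimum) together with the delta method.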
Non-homogeneous Poisson processes (NHPPs) are frequently used as models for events that occur randomly over a given time period, for example failure times or times of accident occurrences. In this work, an NHPP is used to model monthly maximum observations of urban ozone over a five-year period from the meteorological stations of Merced, Pedregal and Plateros, located in the metropolitan area of Mexico City. The data of interest are the times at which the observations exceeded the permissible ozone level of 0.11 ppm, set by the Mexican Official Norm (NOM-020-SSA1-1993) to protect public health.
This article discusses the Bayesian approach for count data using non-homogeneous Poisson processes, considering different prior distributions for the model parameters. A Bayesian approach using Markov chain Monte Carlo (MCMC) simulation methods for this model was first introduced by [1], in the context of software reliability data and with non-informative prior distributions for the model parameters. With the non-informative priors used by those authors, however, computational difficulties may occur in the MCMC methods. This article considers different prior distributions for the parameters of the proposed model and studies the effect of these priors on the convergence and accuracy of the results. The proposed methodology is illustrated with two examples: the first uses simulated data, and the second uses a set of pollution data from a region of Mexico City. Funding: partially supported by grants from Capes, CNPq and FAPESP.
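The sketch below shows one hedged way such a Bayesian analysis can be run: a random-walk Metropolis sampler for an NHPP with a Goel-Okumoto mean value function, with independent Gamma priors whose hyperparameters can be varied to study their effect on convergence. The data, priors, starting values and proposal scales are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-walk Metropolis for an NHPP with Goel-Okumoto mean value
# m(t) = theta*(1 - exp(-beta*t)) and intensity theta*beta*exp(-beta*t).
times = np.array([9.0, 21.0, 32.0, 36.0, 43.0, 45.0, 50.0, 58.0, 63.0, 70.0])
T, a, b = 80.0, 0.01, 0.01           # observation window, Gamma(a, b) priors

def log_post(theta, beta):
    if theta <= 0 or beta <= 0:
        return -np.inf
    # NHPP log-likelihood: sum_i log lambda(t_i) - m(T)
    loglik = np.sum(np.log(theta * beta) - beta * times) \
             - theta * (1.0 - np.exp(-beta * T))
    logprior = (a - 1) * (np.log(theta) + np.log(beta)) - b * (theta + beta)
    return loglik + logprior

theta, beta = 15.0, 0.02             # starting values
draws = []
for _ in range(20000):
    th_new = theta + 2.0 * rng.standard_normal()     # proposal scales are tuned
    be_new = beta + 0.005 * rng.standard_normal()    # by hand for this toy data
    if np.log(rng.uniform()) < log_post(th_new, be_new) - log_post(theta, beta):
        theta, beta = th_new, be_new
    draws.append((theta, beta))
post = np.array(draws[5000:])        # drop burn-in
print("posterior means (theta, beta):", post.mean(axis=0))
```

Re-running with different (a, b) is exactly the kind of prior-sensitivity check the abstract describes.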
Studying the propagation of cascading failures through the transmission network is key to assessing and mitigating the risk faced by the power system. As with other complex systems, power grid failures are often studied with probability distributions. We apply four well-known probabilistic models (the Poisson model, the power-law model, the generalized Poisson branching process model and the Borel-Tanner branching process model) to 14 years of historical utility outage data from a regional power grid in China, computing the probabilities of cascading line outages. For these data, the empirical distribution of the total number of line outages is well approximated by the initial line outages propagating according to a Borel-Tanner branching process. Also for these data, the power-law model overestimates, while the generalized Poisson branching process and Poisson models underestimate, the probability of larger outages. In particular, the probability distribution generated by the Poisson model deviates heavily from the observed data, underestimating the probability of large events (more than five total outages) by roughly two to five orders of magnitude. This observation is confirmed by a statistical test of model fit. The results indicate that further testing of Borel-Tanner branching process models of cascading failure is appropriate, as this model outperforms the other, more traditional models.
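For reference, the Borel-Tanner distribution gives the total number of outages Y when d initial outages propagate with offspring mean lam (subcritical for lam < 1). A log-space evaluation of its pmf, with assumed parameter values, is sketched below.

```python
from math import exp, lgamma, log

# Borel-Tanner pmf for the total progeny of d initial events with
# offspring mean lam:
#   P(Y = n) = (d / n) * exp(-lam*n) * (lam*n)**(n - d) / (n - d)!
# for n = d, d+1, ...  Working in log space avoids overflow.

def borel_tanner_pmf(n, d, lam):
    if n < d:
        return 0.0
    logp = (log(d) - log(n) - lam * n
            + (n - d) * log(lam * n) - lgamma(n - d + 1))
    return exp(logp)

d, lam = 2, 0.6                        # assumed initial outages and propagation
support = range(d, 400)
pmf = [borel_tanner_pmf(n, d, lam) for n in support]
print(f"P(Y > 5) = {sum(p for n, p in zip(support, pmf) if n > 5):.4f}")
```

Fitting to outage records amounts to estimating d from the observed initial outages and lam from the propagation between generations; the values above are placeholders.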
Due to the randomness and time dependence of the factors affecting software reliability, most software reliability models are treated as stochastic processes, and the non-homogeneous Poisson process (NHPP) is the most widely used. However, software failure behavior does not follow the NHPP in a statistically rigorous manner, and a purely random method may not be enough to describe it. To solve these problems, this paper proposes a new integrated approach that combines stochastic processes with grey system theory to describe software failure behavior. A grey NHPP software reliability model is put forward in discrete form, and a grey-based approach for estimating software reliability under the NHPP is formulated as a nonlinear multi-objective programming problem. Finally, four grey NHPP software reliability models are applied to four real datasets, and the dynamic R-squared and predictive relative error are calculated. Compared with the original single NHPP software reliability model, the integrated approach achieves higher prediction accuracy. NHPP software reliability models therefore carry grey uncertain information, and exploiting this latent information can lead to more accurate software reliability estimation. Funding: supported by the National Natural Science Foundation of China (71671090), the Fundamental Research Funds for the Central Universities (NP2020022), and the Qinglan Project of Excellent Youth or Middle-Aged Academic Leaders in Jiangsu Province.
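As background for the grey side of the approach, the sketch below implements the standard GM(1,1) grey model: accumulate the raw series (1-AGO), fit the grey equation by least squares, and forecast from the whitening-equation solution. The per-interval fault counts are made up, and the paper's grey NHPP models combine this kind of grey machinery with NHPP mean value functions in their own, more elaborate way.

```python
import numpy as np

# GM(1,1): fit x0(k) + a*z1(k) = b over the accumulated series.
x0 = np.array([12.0, 10.0, 9.0, 7.0, 6.0, 5.0])    # faults per interval (toy)
x1 = np.cumsum(x0)                                 # 1-AGO series
z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
B = np.column_stack([-z1, np.ones_like(z1)])
(a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)

def x1_hat(k):                                     # whitening solution, k = 0,1,...
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

n = len(x0)
forecast = x1_hat(n) - x1_hat(n - 1)               # one-step-ahead fault count
print(f"a = {a:.4f}, b = {b:.4f}, next-interval forecast = {forecast:.2f}")
```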
Degradation process modeling is a research hotspot in prognostics and health management (PHM) and can be used to estimate system reliability and remaining useful life (RUL). To study the system degradation process, a cumulative damage model is used for degradation modeling. Assuming that each damage increment follows a Gamma distribution, the shock count follows a homogeneous Poisson process (HPP) when the degradation process is linear, and a non-homogeneous Poisson process (NHPP) when the degradation process is nonlinear. This paper considers a two-stage degradation system whose degradation process is linear in the first stage and nonlinear in the second. A nonlinear modeling method for such a system is put forward, and the reliability model and remaining-useful-life model are established. A case study validates the established models. Funding: National Outstanding Youth Science Fund Project, China (No. 71401173).
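A small Monte Carlo sketch of this two-stage construction follows: Gamma damage increments, an HPP shock count before an assumed change time tau, and an NHPP count with linearly increasing intensity afterwards. Reliability at time t is estimated as the fraction of sample paths whose cumulative damage stays below a threshold; every parameter value here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def damage_at(t, r1=0.5, tau=10.0, c=0.12, shape=2.0, scale=0.5):
    """Cumulative damage at time t under the two-stage shock model."""
    n1 = rng.poisson(r1 * min(t, tau))                  # stage-1 HPP shocks
    # stage-2 NHPP count with lam2(s) = c*s is Poisson with mean
    # integral of c*s over (tau, t] = c*(t**2 - tau**2)/2
    n2 = rng.poisson(0.5 * c * (t ** 2 - tau ** 2)) if t > tau else 0
    return rng.gamma(shape, scale, n1 + n2).sum()       # Gamma damage increments

threshold, t, runs = 12.0, 25.0, 5000
reliability = sum(damage_at(t) < threshold for _ in range(runs)) / runs
print(f"estimated reliability R({t}) = {reliability:.3f}")
```

Sweeping t in this simulation traces out the reliability curve, and the first-passage time of the damage past the threshold gives an RUL estimate.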
The delayed S-shaped software reliability growth model (SRGM) is one of the non-homogeneous Poisson process (NHPP) models proposed for software reliability assessment. The model is distinctive because its mean value function reflects the delay in failure reporting: there is a delay between failure detection and reporting time. The model captures the error detection, isolation, and removal processes, and is thus appropriate for software reliability analysis. Predictive analysis in software testing is useful for modifying, debugging, and determining when to terminate the software development testing process. However, Bayesian predictive analyses of the delayed S-shaped model have not been extensively explored. This paper uses the delayed S-shaped SRGM to address four issues in one-sample prediction associated with the software development testing process. A Bayesian approach based on non-informative priors is used to derive explicit solutions for the four issues, and the developed methodologies are illustrated using real data.
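The delayed S-shaped model's standard mean value function, and the NHPP conditional reliability derived from it, can be written down directly; the sketch below uses illustrative parameter values rather than estimates from data.

```python
import numpy as np

# Delayed S-shaped SRGM (standard form):
#   m(t) = a * (1 - (1 + b*t) * exp(-b*t)),
# where a is the expected total fault content and b the detection rate.
# From NHPP theory, the conditional reliability over a mission of
# length x after test time t is  R(x | t) = exp(-(m(t + x) - m(t))).

def m(t, a, b):
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

def reliability(x, t, a, b):
    return np.exp(-(m(t + x, a, b) - m(t, a, b)))

a, b = 100.0, 0.05                      # illustrative parameter values
print(f"expected faults detected by t=60: {m(60, a, b):.1f}")
print(f"R(x=10 | t=60) = {reliability(10, 60, a, b):.3f}")
```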
Because of the inevitable debugging lag, an imperfect debugging process replaces the perfect debugging process in the analysis of software reliability growth models. Since neither testing effort nor testing coverage alone can completely describe software reliability under imperfect debugging, this paper hybridizes the two and proposes a new model named GMW-LO-ID. Under the assumption that the number of introduced faults is proportional to the current number of detected faults, this model combines the generalized modified Weibull (GMW) testing-effort function with the logistic (LO) testing-coverage function, inheriting the GMW function's flexibility and the LO function's high fitting precision. The fitting accuracy and predictive power are verified in two series of experiments, which show that the model fits the actual failure data better and predicts future software behavior better than ten traditional models that consider only one or two of testing effort, testing coverage and imperfect debugging. Funding: supported by the National Natural Science Foundation of China (No. U1433116) and the Aviation Science Foundation of China (No. 20145752033).
The current-mode counting method is a new approach to observing transient processes, especially transient nuclear fusion, based on the non-homogeneous Poisson process (NHPP) model. In this paper, a new measurement-process model of the pulsed radiation field produced by transient nuclear fusion is built on the NHPP. A simulated measurement is performed using the model, and the detector current signal is obtained by simulation based on Poisson process thinning. The reconstructed neutron time spectrum is in good agreement with the theoretical value, with the maximum error of a characteristic parameter below 2.3%. Verification experiments were carried out on the CPNG-6 device at the China Institute of Atomic Energy, using a detection system with nanosecond response time. The experimental charge amplitude spectra agree well with those obtained in the traditional counting mode, and the characteristic parameters of the time spectrum agree well with the theoretical values. This shows that the current-mode counting method is effective for observing transient nuclear fusion processes. Funding: National Natural Science Foundation of China (1435010, 11575145, 11922507).
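Poisson process thinning (the Lewis-Shedler algorithm) is the standard way to simulate an NHPP: draw candidate arrivals from a homogeneous process at a dominating rate and accept each candidate at time s with probability lam(s)/lam_max. The sketch below applies it to a Gaussian pulse intensity as a stand-in for a pulsed neutron time profile; the pulse shape and all rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def pulse_intensity(s, peak=50.0, t0=100.0, sigma=10.0):
    """Assumed Gaussian pulse profile, events per ns."""
    return peak * np.exp(-0.5 * ((s - t0) / sigma) ** 2)

def simulate_nhpp(intensity, t_end, lam_max):
    s, events = 0.0, []
    while True:
        s += rng.exponential(1.0 / lam_max)     # candidate from dominating HPP
        if s > t_end:
            return np.array(events)
        if rng.uniform() < intensity(s) / lam_max:
            events.append(s)                    # accepted (thinning step)

arrivals = simulate_nhpp(pulse_intensity, t_end=200.0, lam_max=50.0)
counts, edges = np.histogram(arrivals, bins=40)
peak_t = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(f"{arrivals.size} events; reconstructed peak near t = {peak_t:.0f} ns")
```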
The aim of this study is to propose an estimation approach for non-life insurance claim counts based on the insurance claim counting process, including a non-homogeneous Poisson process (NHPP) with a bell-shaped intensity and one with a beta-shaped intensity. An estimating function, the zero mean martingale (ZMM), is used as the procedure for parameter estimation of the claim counting process, and the parameters of the claim intensity model are estimated by the Bayesian method. Then Λ(t), the compensator of N(t), is proposed for the number of claims in the time interval (0,t]. The situations are presented through a simulation study, and some examples are depicted by sample paths relating N(t) to its compensator Λ(t).
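The sketch below illustrates the compensator of such a process: for an assumed Gaussian-type bell-shaped claim intensity, Λ(t) is computed by numerical integration, and since N(t) is Poisson with mean Λ(t), tail probabilities of the claim count follow immediately. The intensity function and its parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

# Compensator Lambda(t) = integral of lambda(s) ds over (0, t] for an
# assumed bell-shaped claim intensity (claims per month).

def lam(s, c=30.0, mu=6.0, sigma=2.0):
    return c * np.exp(-0.5 * ((s - mu) / sigma) ** 2)

def compensator(t):
    return quad(lam, 0.0, t)[0]

for t in (3.0, 6.0, 9.0, 12.0):
    L = compensator(t)
    # N(t) ~ Poisson(Lambda(t)), so tail probabilities are immediate
    print(f"t={t:4.1f}  E[N(t)]={L:6.1f}  P(N(t) > 150)={poisson.sf(150, L):.3f}")
```

A sample path of N(t) fluctuates around this deterministic Λ(t) curve, which is exactly the comparison the abstract's figures make.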
Software reliability growth models (SRGMs) incorporating imperfect debugging and the learning phenomenon of developers have recently been developed by many researchers to estimate software reliability measures such as the number of remaining faults and software reliability. However, the model parameters of the fault content rate function and the fault detection rate function of these SRGMs are usually assumed to be independent of each other. In practice this may not be the case, and it is worth investigating what happens if it is not. In this paper we undertake such a study and propose a software reliability model that connects imperfect debugging and the learning phenomenon through a parameter common to the two functions, called the imperfect-debugging fault-detection dependent-parameter model. Software testing data collected from real applications are used to illustrate the proposed model's descriptive and predictive power by determining the non-zero initial debugging process.
Testing effort (TE) and imperfect debugging (ID) in the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). To describe the S-shaped varying trend of the TE increasing rate more accurately, two S-shaped testing-effort functions (TEFs), the delayed S-shaped TEF (DS-TEF) and the inflected S-shaped TEF (IS-TEF), are first proposed. These two TEFs are then incorporated into exponential-type, delayed S-shaped and inflected S-shaped non-homogeneous Poisson process (NHPP) SRGMs, each with two forms of ID, to obtain a series of new NHPP SRGMs that consider S-shaped TEFs as well as ID. Finally, these new SRGMs and several comparison NHPP SRGMs are applied to four real failure datasets to investigate their fitting and prediction power. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM is more effective and appropriate than incorporating them into the exponential-type and delayed S-shaped NHPP SRGMs; (iii) the inflected S-shaped NHPP SRGM considering both the IS-TEF and ID yields more accurate fitting and prediction results than the other comparison NHPP SRGMs. Funding: supported by the Pre-research Foundation of CPLA General Equipment Department.
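The usual way a TEF enters an NHPP SRGM is by replacing calendar time with cumulative consumed effort in the mean value function. The sketch below does this for the inflected S-shaped SRGM, with a logistic TEF standing in for the paper's DS-TEF and IS-TEF; all parameter values are invented.

```python
import numpy as np

# Inflected S-shaped SRGM with a testing-effort function: calendar
# time t in  m(t) = a*(1 - exp(-b*t)) / (1 + psi*exp(-b*t))  is
# replaced by the effort consumed by time t, W(t) - W(0).

def W(t, N=1000.0, A=20.0, alpha=0.1):
    """Assumed logistic cumulative testing-effort function."""
    return N / (1.0 + A * np.exp(-alpha * t))

def m(t, a=120.0, b=0.004, psi=3.0):
    w = W(t) - W(0.0)                       # effort consumed by time t
    return a * (1.0 - np.exp(-b * w)) / (1.0 + psi * np.exp(-b * w))

for t in (10, 30, 50, 70):
    print(f"t={t:3d} weeks  expected faults detected: {m(t):6.1f}")
```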
Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. Most existing research assumes that a similar testing effort is required for each debugging effort. In practice, however, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard and complex; this categorization may be extended to r types of faults. Although some existing research has incorporated the concept that the fault removal rate (FRR) differs across fault types, it assumes that the FRR remains constant during the overall testing period. On the contrary, it has been observed that as testing progresses the FRR changes, due to changing testing strategy, skill, environment and personnel resources. In this paper, a general discrete SRGM is proposed for errors of different severity in software systems using the change-point concept. The models are then formulated for two particular environments and validated on two real-life data sets. The results show a better fit and wider applicability of the proposed models to different types of failure datasets.
Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification and removal phenomenon, and new models are still being proposed to fit a greater variety of reliability growth curves. Often, mathematical models assume that detected faults are corrected immediately. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique used, and so on. Thus a detected fault need not be removed immediately; removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and show how these models can be reinterpreted as delayed fault detection models by means of a delay effect factor. Based on the power function of testing time, we then propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults can be removed once a failure is observed. Dependent faults, however, are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity and applicability.
Masked data are system failure data for which the exact component causing a system failure may be unknown. In this paper, a mathematical description of general masked data in software reliability engineering is presented. Furthermore, a general masked-data-based additive non-homogeneous Poisson process (NHPP) model is considered for analyzing component reliability. The difficulty of the masked-data additive model lies in estimating its parameters, so the maximum likelihood estimation procedure is derived. Finally, a numerical example illustrates the applicability of the proposed model, with the immune particle swarm optimization (IPSO) algorithm used to maximize the log-likelihood function. Funding: Technology Foundation of Guizhou Province, China (No. QianKeHeJZi[2015]2064); Scientific Research Foundation for Advanced Talents in Guizhou Institute of Technology and Science, China (No. XJGC20150106); Joint Foundation of Guizhou Province, China (No. QianKeHeLHZi[2015]7105).
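The masked-data likelihood for an additive NHPP has a simple structure: each failure at time t_i with mask set S_i contributes the summed intensity of the components in the mask. The sketch below writes this down for two Goel-Okumoto components and maximizes it with a generic optimizer standing in for the paper's IPSO; the data and starting values are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Additive NHPP with two GO components, lambda_j(t) = a_j*b_j*exp(-b_j*t);
# a masked failure at t_i with mask set S_i contributes
# sum_{j in S_i} lambda_j(t_i), and the exposure term is sum_j m_j(T).
failures = [(12.0, {0}), (30.0, {0, 1}), (55.0, {1}), (71.0, {0, 1})]
T = 100.0

def lam(j, t, p):
    a, b = p[2 * j], p[2 * j + 1]
    return a * b * np.exp(-b * t)

def m(j, t, p):
    a, b = p[2 * j], p[2 * j + 1]
    return a * (1.0 - np.exp(-b * t))

def neg_log_lik(p):
    if np.any(np.asarray(p) <= 0):
        return np.inf                       # keep parameters positive
    ll = sum(np.log(sum(lam(j, t, p) for j in S)) for t, S in failures)
    return -(ll - sum(m(j, T, p) for j in range(2)))

res = minimize(neg_log_lik, x0=[3.0, 0.02, 3.0, 0.02], method="Nelder-Mead")
print("estimated (a1, b1, a2, b2):", np.round(res.x, 4))
```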
Software reliability was estimated based on NHPP software reliability growth models. Testing reliability and operational reliability may be essentially different. On the basis of analyzing the similarities and differences between the testing phase and the operational phase, and using the concepts of operational reliability and testing reliability, different forms of comparison between the operational failure ratio and the predicted testing failure ratio were conducted, with detailed mathematical discussion and analysis. Finally, optimal software release was studied using software failure data. The results show that two kinds of conclusions can be derived with this method: either testing should continue until the reliability level required by users is met, or testing stops once the required operational reliability is met, thereby reducing the testing cost. Funding: supported by the PhD Programs Foundation for Young Researchers of the Ministry of Education of China (Grant No. 20070217051) and the Major Program of the National Natural Science Foundation of China (Grant No. 90718003).
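As a toy version of such a release criterion, the sketch below computes when the predicted testing failure intensity of a Goel-Okumoto model first drops below a required operational failure ratio; the fitted parameters and the requirement are assumed values, and the paper's actual comparison forms are more detailed than this.

```python
import numpy as np

# Goel-Okumoto testing failure intensity lambda(t) = a*b*exp(-b*t)
# (parameters assumed to have been fitted from test data). Testing can
# stop once lambda(t) <= lam_req; solving lambda(t) = lam_req gives
#   t = ln(a*b / lam_req) / b.

a, b = 142.3, 0.035          # assumed fitted GO parameters
lam_req = 0.5                # assumed required failures per unit time

t_stop = np.log(a * b / lam_req) / b
print(f"predicted testing intensity meets the requirement at t = {t_stop:.1f}")
```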