Redundancy, correlation, feature irrelevance, and missing samples are just a few of the problems that make it difficult to analyze software defect data. Additionally, it can be challenging to maintain an even distribution of data for the defective and non-defective software classes; in the majority of experimental situations, data from the non-defective class predominate in the dataset. The objective of this review study is to demonstrate the effectiveness of combining ensemble learning and feature selection in improving the performance of defect classification. Besides the successful feature selection approach, a novel variant of the ensemble learning technique is analyzed to address the challenges of feature redundancy and data imbalance, providing robustness in the classification process. To overcome these problems and lessen their impact on fault classification performance, the authors carefully integrate effective feature selection with ensemble learning models. Forward selection demonstrates that a significant area under the receiver operating characteristic (ROC) curve can be attributed to only a small subset of features. The greedy forward selection (GFS) technique outperformed Pearson's correlation method when feature selection techniques were evaluated on the datasets. Ensemble learners, such as random forests (RF) and the proposed average probability ensemble (APE), demonstrate greater resistance to the impact of weak features than weighted support vector machines (W-SVMs) and extreme learning machines (ELMs). Furthermore, on the NASA and Java datasets, the enhanced average probability ensemble model, which incorporates the greedy forward selection technique into the average probability ensemble model, achieved a remarkably high area under the ROC curve, approaching a value of 1.0 and indicating exceptional performance. This review emphasizes the importance of meticulously selecting attributes in a software dataset in order to accurately classify defective components. In addition, the suggested ensemble learning model successfully addressed the aforementioned problems with software data and produced outstanding classification performance.
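To make the greedy forward selection step concrete, the sketch below shows one way such a search can be driven by ROC AUC. It is a minimal illustration assuming a NumPy feature matrix X, a binary label vector y, and scikit-learn estimators; it is not the exact GFS or average probability ensemble implementation evaluated in the review.

    # Hypothetical sketch: greedy forward selection scored by ROC AUC.
    # X is an (n_samples, n_features) NumPy array, y a binary label vector.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def greedy_forward_selection(X, y, max_features=10, cv=5):
        remaining = list(range(X.shape[1]))
        selected, best_auc = [], 0.0
        while remaining and len(selected) < max_features:
            # Score every candidate feature added to the current subset.
            scored = []
            for f in remaining:
                clf = RandomForestClassifier(n_estimators=100, random_state=0)
                auc = cross_val_score(clf, X[:, selected + [f]], y,
                                      scoring="roc_auc", cv=cv).mean()
                scored.append((auc, f))
            auc, f = max(scored)
            if auc <= best_auc:      # stop once AUC no longer improves
                break
            best_auc = auc
            selected.append(f)
            remaining.remove(f)
        return selected, best_auc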
Incorporating testing-effort (TE) and imperfect debugging (ID) into the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). To describe the S-shaped trend of the TE increase rate more accurately, two S-shaped testing-effort functions (TEFs), the delayed S-shaped TEF (DS-TEF) and the inflected S-shaped TEF (IS-TEF), are first proposed. These two TEFs are then incorporated into exponential-type, delayed S-shaped, and inflected S-shaped non-homogeneous Poisson process (NHPP) SRGMs with two forms of ID, yielding a series of new NHPP SRGMs that consider both S-shaped TEFs and ID. Finally, these new SRGMs and several comparison NHPP SRGMs are applied to four real failure data sets to investigate their fitting and prediction power. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM is more effective and appropriate than doing so for the exponential-type and delayed S-shaped NHPP SRGMs; (iii) the inflected S-shaped NHPP SRGM that considers both the IS-TEF and ID yields more accurate fitting and prediction results than the other comparison NHPP SRGMs.
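For reference (and not necessarily with the exact parameterization used in the work above), the delayed S-shaped and inflected S-shaped curves commonly used for the cumulative testing effort W(t) in the SRGM literature are

    W_{DS}(t) = \alpha \left[ 1 - (1 + \beta t) e^{-\beta t} \right],
    W_{IS}(t) = \alpha \, \frac{1 - e^{-\beta t}}{1 + \gamma e^{-\beta t}},

where \alpha is the total testing effort eventually consumed, \beta is the consumption rate, and \gamma controls the inflection.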
Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. Most of the existing research in the literature assumes that a similar testing effort is required for each debugging effort. In practice, however, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard, and complex. This categorization may be extended to r types of faults on the basis of severity. Although some existing research has incorporated the idea that the fault removal rate (FRR) differs for different types of faults, it assumes that the FRR remains constant over the whole testing period. On the contrary, it has been observed that as testing progresses, the FRR changes due to changes in testing strategy, skill, environment, and personnel resources. In this paper, a general discrete SRGM is proposed for errors of different severity in software systems using the change-point concept. The models are then formulated for two particular environments and validated on two real-life data sets. The results show a better fit and wider applicability of the proposed models to different types of failure datasets.
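As a purely illustrative rendering of the change-point idea for a discrete model (the notation here is hypothetical, not taken from the paper), the fault removal rate of severity class k can be allowed to switch at a change-point test occasion \tau_k:

    b_k(n) = \begin{cases} b_{k1}, & n \le \tau_k \\ b_{k2}, & n > \tau_k \end{cases}

so that simple, hard, and complex faults each carry their own removal rates before and after the change.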
With the rapid progress of component technology, the software development methodology of assembling large numbers of components to design complex software systems has matured. However, how to accurately assess application reliability from the system architecture information together with the component reliabilities has become a knotty problem. In this paper, the defects in the formal description of software architecture and the limitations of the assumptions in existing models are both analyzed. Moreover, a new software reliability model called the Component Interaction Mode (CIM) is proposed. With this model, the problem that existing component-based software reliability analysis models cannot deal with cases of component interaction with non-failure-independent and non-random control transitions is resolved. Finally, practical examples are presented to illustrate the effectiveness of this model.
According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out a conclusion from the fitting results of the failure data of a software project, the SRES can recommend to users “the most suitable model” as a software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
As web-server based business rapidly develops and becomes popular, how to evaluate and improve the reliability of web servers has become extremely important. Although a large number of software reliability growth models (SRGMs), including those combined with multiple change-points (CPs), are available, these conventional SRGMs cannot be directly applied to web software reliability analysis because of the complex web operational profile. To characterize the web operational profile precisely, it should be recognized that the workload of a web server is normally non-homogeneous and is often observed with a pattern of random impulsive shocks. A web software reliability model with random impulsive shocks and its statistical analysis method are developed. In the proposed model, the web server workload is characterized by a geometric Brownian motion process. Based on a real data set from the IIS server logs of the ICRMS website (www.icrms.cn), the proposed model is demonstrated to be powerful for estimating impulsive shocks and web software reliability.
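The geometric Brownian motion workload assumption can be illustrated with a short simulation; the drift, volatility, and time grid below are arbitrary illustrative values, not parameters estimated from the ICRMS server logs.

    # Illustrative simulation of a geometric Brownian motion workload,
    #   dW_t = mu * W_t * dt + sigma * W_t * dB_t,
    # via the exact log-normal update. Parameter values are arbitrary.
    import numpy as np

    def simulate_gbm_workload(w0=100.0, mu=0.05, sigma=0.2, dt=1.0,
                              steps=200, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(steps)
        log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
        return w0 * np.exp(np.cumsum(log_increments))   # workload path W_1..W_steps

    workload_path = simulate_gbm_workload()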
In recent decades, many software reliability growth models (SRGMs) have been proposed to help engineers and testers measure software reliability precisely. Most of them are established on the basis of the non-homogeneous Poisson process (NHPP), and it has been shown that the prediction accuracy of such models can be improved by characterizing the testing effort. However, some research indicates that the fault detection rate (FDR) is another key factor that affects final software quality. Most early NHPP-based models treat the FDR as a constant or a piecewise function, which does not fit the different testing stages well. Thus, this paper first incorporates a multivariate, bathtub-shaped FDR function into NHPP-based SRGMs that consider testing effort in order to further improve performance. A new model framework is proposed, and a stepwise method is used to apply the framework to real data sets to find the optimal model. Experimental studies show that the resulting model provides better fitting and prediction performance than other traditional SRGMs.
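For intuition only (this is not the multivariate function proposed in the paper), a bathtub-shaped detection rate can be obtained by adding a decreasing and an increasing power-law term,

    d(t) = \alpha_1 \beta_1 t^{\beta_1 - 1} + \alpha_2 \beta_2 t^{\beta_2 - 1}, \qquad 0 < \beta_1 < 1 < \beta_2,

where the first term dominates early in testing, the second dominates late in testing, and the sum dips in between.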
This paper presents software reliability growth models (SRGMs) with change-points based on the stochastic differential equation (SDE). Although SDE-based SRGMs have been developed for large-scale software systems, consideration of the variation of the failure distribution during testing time is limited in the existing models: these SDE SRGMs assume that all failures have the same distribution. However, in practice, the fault detection rate can be affected by various factors and may change at certain points as time proceeds. To address this issue, this paper proposes SDE SRGMs with change-points to precisely reflect the variations of the failure distribution. A real data set is used to evaluate the new models. The experimental results show that the proposed models have fairly accurate prediction capability.
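To make the setting concrete, a standard Itô-type SDE SRGM (used here only as a reference form; the paper's equations may differ) describes the cumulative number of detected faults N(t) by

    dN(t) = b(t)\{a - N(t)\}\,dt + \sigma\{a - N(t)\}\,dB(t),

where a is the total fault content, B(t) is standard Brownian motion, and \sigma scales the irregular fluctuation. A change-point \tau then enters by letting the detection rate switch, e.g. b(t) = b_1 for t \le \tau and b(t) = b_2 for t > \tau.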
The testing time at which a change in a stochastic characteristic of the software failure-occurrence times or failure-occurrence time intervals is observed is called a change-point. It is said that the effect of the change-point on the software reliability growth process influences the accuracy of software reliability assessment based on a software reliability growth model (SRGM). We propose an SRGM that incorporates the effect of the change-point, built on a bivariate SRGM in which the software reliability growth process is assumed to depend simultaneously on the testing-time and testing-effort factors, for accurate software reliability assessment. We also discuss an optimal software release problem for deriving optimal testing-effort expenditures based on our model. Further, we show numerical examples of software reliability assessment based on our bivariate SRGM and of the estimation of optimal testing-effort expenditures using actual data.
This paper analyses the effect of censoring on the estimation of the failure rate and presents a framework for a censored nonparametric software reliability model. The model is based on a nonparametric test of whether the failure rate is monotonically decreasing, together with weighted kernel estimation of the failure rate under the monotone-decreasing constraint. Not only does the model make few assumptions and impose weak constraints, but the number of residual defects in the software system can also be estimated. A numerical experiment and real data analysis show that the model performs well with censored data.
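As background only (the paper's weighted, monotonicity-constrained estimator is its own construction), a standard kernel failure-rate estimator for censored data smooths the Nelson-Aalen increments,

    \hat{\lambda}(t) = \sum_i K_h(t - t_i) \, \frac{d_i}{n_i},

where t_i are the observed failure times, d_i is the number of failures and n_i the number of units still at risk at t_i, and K_h is a kernel with bandwidth h; a monotone-decreasing constraint can then be imposed on \hat{\lambda}.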
According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out a conclusion from the fitting results of the failure data of a software project, the SRES can recommend to users “the most suitable model” as a software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the models LVLM and LVQM.
The software reliability and maintainability evaluation tool SRMET 3.0, developed by the Software Evaluation and Test Center of the China Aerospace Mechanical Corporation, is introduced in detail in this paper. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. The numerical characteristics of all these models are studied in depth, and corresponding numerical algorithms for each model are also given.
In traditional Bayesian software reliability models, it is assumed that all probabilities are precise. In practical applications, the parameters of the probability distributions are often uncertain because they depend strongly on the subjective information of experts' judgments on sparse statistical data. In this paper, a quasi-Bayesian software reliability model is presented that uses interval-valued probabilities to clearly quantify experts' prior beliefs about possible intervals of the parameters of the probability distributions. The model integrates experts' judgments with statistical data to obtain more convincing assessments of software reliability from small samples. For some actual data sets, the presented model yields better predictions than the Jelinski-Moranda (JM) model using maximum likelihood (ML) estimation.
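For reference, the Jelinski-Moranda baseline mentioned above assumes N initial faults, perfect removal of exactly one fault at each failure, and a hazard rate between the (i-1)-th and i-th failures of

    z(t_i) = \phi \, [N - (i - 1)],

so the failure intensity drops by a constant amount \phi after every fix; the quasi-Bayesian model instead expresses the experts' prior beliefs about such parameters as intervals rather than precise values.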
Because of the inevitable debugging lag, an imperfect debugging process is used in place of a perfect debugging process in the analysis of software reliability growth models. Considering that neither testing effort nor testing coverage alone can completely describe software reliability under imperfect debugging, this paper hybridizes testing effort with testing coverage under imperfect debugging and proposes a new model named GMW-LO-ID. Under the assumption that the total number of faults is proportional to the current number of detected faults, this model combines the generalized modified Weibull (GMW) testing-effort function with the logistic (LO) testing-coverage function, inheriting the GMW function's remarkable flexibility and the LO function's high fitting precision. Furthermore, the fitting accuracy and predictive power are verified by two series of experiments, from which we can conclude that our model fits the actual failure data better and predicts future software behavior better than ten other traditional models, which consider only one or two of testing effort, testing coverage, and imperfect debugging.
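As an illustration of the LO component (the notation is generic and may differ from the paper's), a logistic testing-coverage curve has the form

    c(t) = \frac{c_{\max}}{1 + A e^{-k t}},

which rises slowly at first, accelerates, and then saturates at c_{\max}, matching the S-shaped way coverage typically accumulates during testing.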
Reliability engineering implemented early in the development process has a significant impact on improving software quality. It can assist in the design of the architecture and guide later testing, which is beyond the scope of traditional reliability analysis methods. Structural reliability models work for this, but most of them have been tested only in simulation case studies due to a lack of actual data. Here we use software metrics, collected from the source code of past versions, for reliability modeling. Through the proposed strategy, redundant metric elements are filtered out and the rest are aggregated to represent module reliability. We further propose a framework that automatically applies the module values and calculates overall reliability by introducing formal methods. The experimental results from an actual project show that reliability analysis at the design and development stage can come close to the validity of analysis at the test stage through reasonable application of metric data. The study also demonstrates that the proposed methods have good applicability.
An electromechanical product's reliability is affected by uncertainty as well as by performance degradation during its life cycle. Existing methods for integrated reliability and performance modeling have obvious deficiencies in long-period reliability analysis and assessment when applied to such systems. A novel integrated modeling method based on physics of failure (PoF) and a simulation algorithm that considers uncertainty and degradation are proposed in this paper to compute the maintenance-free operation period, or maintenance-free operation period survivability, which is used to assess the long-period reliability of a system. Furthermore, the concept design of software based on the above theory is also introduced. A case study of a servo valve demonstrates the feasibility of the method and the usability of the software.
Software reliability is the primary concern of software development organizations, and the exponentially increasing demand for reliable software requires modeling techniques to be developed in the present era. Small, unnoticeable drifts in the software can culminate in a disaster. Early removal of these errors helps the organization improve and enhance the software's reliability and save money, time, and effort. Many soft computing techniques are available for solving critical problems, but selecting the appropriate technique is a big challenge. This paper proposes an efficient algorithm that can be used for the prediction of software reliability. The proposed algorithm is implemented using a hybrid approach named the Neuro-Fuzzy Inference System and has also been applied to test data. In this work, a comparison among different soft computing techniques has been performed. After testing and training on real-time data, the claim has been verified, with the reliability prediction achieving a mean relative error of 0.0060 and a mean absolute relative error of 0.0121. The results show that the proposed algorithm yields attractive outcomes in terms of mean absolute relative error and mean relative error compared with other existing models, which justifies the reliability prediction of the proposed model. Thus, this novel technique aims to keep the model as simple as possible in order to improve software reliability.
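Because definitions of these error metrics vary between papers, the sketch below shows only one common convention for the two quantities quoted above.

    # Hedged sketch of mean relative error (MRE) and mean absolute
    # relative error (MARE); conventions differ across the literature.
    import numpy as np

    def mean_relative_error(actual, predicted):
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return float(np.mean((predicted - actual) / actual))        # signed

    def mean_absolute_relative_error(actual, predicted):
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return float(np.mean(np.abs(predicted - actual) / actual))  # magnitude only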
In view of the problems and weaknesses of component-based software (CBS) reliability modeling and analysis, and the lack of consideration of the real debugging circumstances of integration testing, a CBS reliability process analysis model is proposed that incorporates debugging time delay, imperfect debugging, and limited debugging resources. CBS integration testing is formulated as a multi-queue, multi-channel, and finite-server queuing model (MMFSQM) to describe the fault detection process (FDP) and the fault correction process (FCP). A unified FCP is sketched, given the debugging delay, the diversity of fault processing, and the limitations of debugging resources. Furthermore, the impacts of imperfect debugging on fault detection and correction are explicitly elaborated, and expressions for the cumulative numbers of faults detected and corrected are given. Finally, the results of numerical experiments verify the effectiveness and rationality of the proposed model. By comparison, the proposed model is superior to the other models; it is closer to the real CBS testing process and facilitates software engineers' quantitative analysis, measurement, and prediction of CBS reliability.
Since most of the available component-based software reliability models incur high computational cost and suffer from evaluation complexity for software systems with complex structures, a component-based back-propagation reliability model (CBPRM) with low complexity is presented in this paper for evaluating the reliability of complex software systems. The proposed model is based on artificial neural networks and component reliability sensitivity analyses. These analyses are performed dynamically and their results are assigned to the neurons to optimize the reliability evaluation. CBPRM has linearly increasing complexity and outperforms state-based and path-based reliability models. Another advantage of CBPRM over the others is its robustness: CBPRM depends on the component reliabilities and the corresponding sensitivities, which are independent of the software system structure. Theoretical analysis and experimental results show that the complexity of CBPRM is evidently lower than that of the contrast models and that the reliability evaluation accuracy remains acceptable when the software system structure is complex.
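The following is a hypothetical sketch of the kind of mapping such a model learns: a one-hidden-layer network trained by plain gradient descent (back-propagation) to map component reliabilities, scaled by placeholder sensitivity values, to a system reliability estimate. The synthetic data, architecture, and hyper-parameters are illustrative only and are not the CBPRM described in the paper.

    # Hypothetical sketch of a back-propagation style reliability mapper.
    import numpy as np

    rng = np.random.default_rng(0)
    n_components, hidden = 5, 8

    # Synthetic training data: component reliabilities in [0.9, 1.0]; the
    # "true" system reliability is taken as their product (a serial system).
    X = rng.uniform(0.9, 1.0, size=(200, n_components))
    y = X.prod(axis=1)
    sensitivities = rng.uniform(0.5, 1.0, size=n_components)   # placeholder values
    X = X * sensitivities                      # scale inputs by sensitivity

    W1 = rng.normal(scale=0.1, size=(n_components, hidden))
    W2 = rng.normal(scale=0.1, size=hidden)

    for _ in range(2000):                      # plain batch gradient descent
        H = np.tanh(X @ W1)                    # hidden-layer activations
        pred = H @ W2                          # linear output neuron
        err = pred - y                         # derivative of squared error
        grad_W2 = H.T @ err / len(y)
        grad_W1 = X.T @ (np.outer(err, W2) * (1 - H ** 2)) / len(y)
        W2 -= 0.1 * grad_W2
        W1 -= 0.1 * grad_W1

    print("mean abs error on training data:", float(np.mean(np.abs(pred - y))))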
Cleanroom software engineering has been proven effective in improving software development quality while at the same time increasing reliability. To adapt to large software system development, this paper presents an extended Cleanroom model (ECM), which integrates an object-oriented method based on stimulus history, reverse-engineering ideas, automatic testing, and reliability assessment into software development. The paper discusses the architecture and implementation technology of the ECM.