To generate a test set for a given circuit (combinational or sequential), the choice of which of the many existing test generation algorithms to apply is bound to vary from circuit to circuit. In this paper, genetic algorithms are used to construct models of the existing test generation algorithms so that this choice can be made more easily. The testability parameters of a circuit can therefore be forecast before the real test generation algorithm is run. The results can also be used to evaluate the efficiency of the existing test generation algorithms. Experimental results are given to demonstrate the validity and usefulness of this approach.
To address the diagnosis of Wire-OR (W-O) interconnect faults on PCBs (Printed Circuit Boards), five modified boundary scan adaptive algorithms for interconnect testing are put forward. These algorithms apply the Global-diagnosis sequence algorithm in place of the equal-weight algorithm of the primary test, so the test time is shortened without changing the fault diagnostic capability. Descriptions of the five modified adaptive test algorithms are presented, and a capability comparison between the modified algorithms and the original algorithms is made to demonstrate their validity.
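For context, below is a minimal Python sketch of the classic counting-sequence style of parallel interconnect test generation that primary tests of this kind build on; the net count, code assignment, and diagnosis remark are our illustrative assumptions, not the paper's algorithms.

```python
import math

def counting_sequence(num_nets):
    """Assign each net a distinct binary code (1..num_nets, skipping the
    all-0s and all-1s codes) and emit one parallel test vector per bit.
    A wired-OR short drives the OR of the shorted nets' codes onto each
    of them, which is how the received responses diagnose faults."""
    width = math.ceil(math.log2(num_nets + 2))
    codes = [format(i, f"0{width}b") for i in range(1, num_nets + 1)]
    vectors = ["".join(code[bit] for code in codes) for bit in range(width)]
    return codes, vectors

codes, vectors = counting_sequence(6)
print("net codes:", codes)       # e.g. ['001', '010', '011', ...]
print("test vectors:", vectors)  # one parallel vector per test step
```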
Software testing has been attracting a lot of attention for effective software development. In model-driven development, the Unified Modelling Language (UML) is a conceptual modelling approach for capturing the obligations and other features of the system. Specialized tools interpret these models into other software artifacts such as code, test data, and documentation. The generation of test cases permits the appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams by using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams were converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). The two graphs were then joined to create a single graph known as the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG was optimized using the BGA to generate the test data. A case study involving a withdrawal from the automated teller machine (ATM) of a bank was employed to demonstrate the approach. The approach successfully identified defects in various ATM functions such as messaging and operation.
The main objective of software testing is to have the highest likelihood of finding the most faults with a minimum amount of time and effort. Genetic Algorithms (GAs) have been successfully used by researchers in software testing to automatically generate test data. In this paper, a GA is applied using the branch coverage criterion to generate the smallest possible set of test data for testing JSC applications. Results show that applying the GA achieves better performance in terms of average number of test data generations, execution time, and percentage of branch coverage.
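As a hedged illustration of the general approach (not the paper's implementation), the following Python sketch evolves integer inputs against a toy program with four branches; the program under test, the genetic operators, and all parameters are hypothetical.

```python
import random

def run_and_trace(candidate):
    """Hypothetical driver: execute the program under test on one input
    and return the set of branch IDs it covers."""
    x = candidate[0]
    covered = set()
    covered.add("b1_true" if x > 10 else "b1_false")      # if (x > 10)
    covered.add("b2_true" if x % 2 == 0 else "b2_false")  # if (x % 2 == 0)
    return covered

ALL_BRANCHES = {"b1_true", "b1_false", "b2_true", "b2_false"}

def fitness(candidate, uncovered):
    # Reward inputs that hit branches not yet covered by the suite.
    return len(run_and_trace(candidate) & uncovered)

def ga_generate(pop_size=20, generations=100):
    suite, uncovered = [], set(ALL_BRANCHES)
    pop = [[random.randint(-100, 100)] for _ in range(pop_size)]
    for _ in range(generations):
        if not uncovered:
            break
        pop.sort(key=lambda c: fitness(c, uncovered), reverse=True)
        newly = run_and_trace(pop[0]) & uncovered
        if newly:                       # keep inputs that add coverage
            suite.append(pop[0])
            uncovered -= newly
        parents = pop[:pop_size // 2]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(a[0] + b[0]) // 2]          # arithmetic crossover
            if random.random() < 0.1:             # mutation
                child[0] += random.randint(-10, 10)
            children.append(child)
        pop = parents + children
    return suite, uncovered

suite, missed = ga_generate()
print("test suite:", suite, "uncovered branches:", missed)
```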
A new efficient protocol-proving algorithm was proposed for verifying security protocols. This algorithm is based on the improved authentication tests model, which enhances the original model by formalizing the message replay attack. With exact causal dependency relations between messages in this model, the protocol-proving algorithm can avoid the state explosion caused by asynchrony. In order to obtain a direct proof of security protocols, three authentication theorems are exploited for evaluating the agreement and distinction properties. When the algorithm terminates, it outputs either the proof results or the potential flaws of the security protocol. Experiments show that the protocol-proving algorithm can detect the type-flaw attack on the Neuman-Stubblebine protocol, and prove the correctness of the NSL protocol by exploring only 10 states.
To solve the problem of time-aware test case prioritization, a hybrid algorithm composed of integer linear programming and the genetic algorithm (ILP-GA) is proposed. First, the test case suite which can maximize the number of covered program entities and satisfy the time constraints is selected by integer linear programming. Second, individuals are encoded according to the coverage matrices of the entities, the coverage rate of program entities is used as the fitness function, and the genetic algorithm is used to prioritize the selected test cases. Five typical open source projects are selected as benchmark programs. Branches and methods are selected as program entities, and the time constraint percentages are 25% and 75%. The experimental results show that ILP-GA converges faster and more stably than ILP-additional and ILP-total in most cases, which contributes to detecting software defects as early as possible and reduces software testing costs.
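A sketch of how the selection step can be formalized, in our own notation rather than the paper's: with binary variables x_j (test case j selected) and y_i (entity i counted as covered), coverage matrix entries a_{ij} (1 if test j covers entity i), execution times t_j, and time budget B, the ILP is:

```latex
\max \sum_{i=1}^{m} y_i
\quad \text{s.t.} \quad
\sum_{j=1}^{n} t_j\, x_j \le B, \qquad
y_i \le \sum_{j=1}^{n} a_{ij}\, x_j \quad (i = 1, \dots, m), \qquad
x_j,\, y_i \in \{0, 1\}.
```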
Technical debt (TD) happens when project teams carry out technical decisions in favor of short-term goals in their projects, whether deliberately or unknowingly. TD must be properly managed to guarantee that its negative implications do not outweigh its advantages. A lot of research has been conducted to show that TD has evolved into a common problem with considerable financial burden. Test technical debt is the technical debt aspect of testing (or test debt). Test debt is a relatively new concept that has piqued the curiosity of the software industry in recent years. In this article, we assume that the organization selects the testing artifacts at the start of every sprint. Implementing the latest features in consideration of expected business value and repaying technical debt are among the candidate tasks in terms of the testing process (test case increments). To gain the maximum benefit for the organization in terms of software testing optimization, there is a need to select the artifacts (i.e., test cases) with maximum feature coverage within the available resources. The management of testing optimization for large projects is complicated and can also be treated as a multi-objective problem that entails a trade-off between the agile software's short-term and long-term value. In this article, we implement a multi-objective indicator-based evolutionary algorithm (IBEA) for solving such optimization issues. The capability of the algorithm is evidenced by applying it to a real case study of a university registration process.
Two new regularization algorithms for solving the first-kind Volterra integral equation, which describes the pressure-rate deconvolution problem in well test data interpretation, are developed in this paper. The main features of the problem are the strongly nonuniform scale of the solution and large errors (up to 15%) in the input data. In both algorithms, the solution is represented as a decomposition on special basis functions, which satisfy given a priori information on the solution; this idea allows us to significantly improve the quality of the approximate solution and to simplify solving the minimization problem. The theoretical details of the algorithms, as well as the results of numerical experiments demonstrating the robustness of the algorithms, are presented.
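For orientation, assuming the standard Duhamel superposition form of pressure-rate deconvolution (our notation, not necessarily the paper's), the first-kind Volterra equation reads:

```latex
\Delta p(t) = \int_0^{t} q(\tau)\, g(t - \tau)\, d\tau, \qquad 0 \le t \le T,
```

where \Delta p is the measured pressure drop, q the measured flow rate, and g the unknown unit-rate response to be recovered. Since both \Delta p and q carry measurement errors, the inversion is ill-posed, which is what the regularization and the choice of basis functions address.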
PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (procedures, functions, and packages) to improve software reusability and hide the complexity of the execution of a specific operation behind a name. Also, it acts as an interface between the SQL database and DEVELOPER. Therefore, it is important to test these modules that consist of procedures and functions. In this paper, a new genetic algorithm (GA), as a search technique, is used in order to find the required test data according to a branch criterion to test stored PL/SQL program units. The experimental results show that this was not fully achieved: the test target in some branches is not reached, and the coverage percentage is 98%. A problem arises when the target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for the branch.
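One common fitness formulation for such branch-oriented GAs is the branch distance; the Python sketch below uses a hypothetical predicate and is our illustration, not the paper's PL/SQL harness.

```python
def branch_distance(x, target=50, k=1.0):
    """Distance to satisfying the hypothetical branch `IF x = 50`:
    0 when the branch is taken, growing with |x - target| otherwise.
    The GA minimizes this value to steer inputs toward the branch."""
    return 0.0 if x == target else abs(x - target) + k

# Candidates closer to taking the branch score better (lower).
print(branch_distance(49), branch_distance(10), branch_distance(50))
```

A branch whose predicate compares against values fetched from tables at run time offers no such distance gradient over the input space, which is consistent with the failure mode reported above.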
The methods and strategies used to screen for syphilis and to confirm initially reactive results can vary significantly across clinical laboratories. While the performance characteristics of these different approaches have been evaluated by multiple studies, there is not, as of yet, a single, universally recommended algorithm for syphilis testing. To clarify the currently available options for syphilis testing, this update will summarize the clinical challenges to diagnosis, review the specific performance characteristics of treponemal and non-treponemal tests, and finally, summarize select studies published over the past decade which have evaluated these approaches. Specifically, this review will discuss the traditional and reverse sequence syphilis screening algorithms commonly used in the United States, alongside a discussion of the European Centre for Disease Prevention and Control syphilis algorithm. Ultimately, in the United States, the decision of which algorithm to use is largely dependent on laboratory resources, the local incidence of syphilis, and patient demographics. Key words: Syphilis; Treponemal infection; Immunoassay; Reverse sequence screening; Rapid plasma reagin; Treponema pallidum particle agglutination test; Automation; Algorithm; Primary infection; Late latent infection
This paper deals with the target-fault-oriented test generation of sequential circuits using genetic algorithms. We adopt the concept of multiple phases and propose four sub-procedures, which consist of activation, propagation, and justification phases. The paper focuses on the design of genetic operators and the construction of fitness functions, which are based on the structural information of the circuits. Experimental results of the GA on the ISCAS89 benchmarks are given.
A novel multi-chip module (MCM) interconnect test generation scheme based on an ant algorithm (AA) with a mutation operator is presented. By combining the characteristics of MCM interconnect test generation, the pheromone updating rule and state transition rule of the AA are designed. Using the mutation operator, this scheme overcomes the ordinary AA's defects of slow convergence, a tendency to stagnate, and weak global search ability. The international standard MCM benchmark circuits provided by the MCNC group were used to verify the approach. The simulation results, compared with those of the standard ant algorithm, the genetic algorithm (GA), and other deterministic interconnect algorithms, show that the proposed scheme achieves high fault coverage, a compact test set, and short CPU time, and that it is a promising optimization method deserving further research.
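For reference, the standard ant-system pheromone update that such schemes customize has the form (generic textbook rule, not the paper's designed rule):

```latex
\tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
Q / L_k, & \text{if ant } k \text{ used component } (i, j),\\[2pt]
0, & \text{otherwise},
\end{cases}
```

where \rho is the evaporation rate, Q a constant, and L_k the cost of ant k's solution; the mutation operator described above perturbs constructed solutions on top of this update to escape stagnation.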
A novel interoperability test sequence optimization scheme is proposed in which the genetic algorithm (GA) is used to obtain minimal-length interoperability test sequences. In our work, the basic interoperability test sequences are generated based on the minimal-complete-coverage criterion, which removes redundancy from the conformance test sequences. The interoperability sequence minimization problem can then be considered an instance of the set covering problem, and the GA is applied to remove redundancy in interoperability transitions. The results show that, compared to the conventional algorithm, the proposed algorithm is more practical in avoiding the state space explosion problem, for it can reduce the length of the test sequences while maintaining the same transition coverage.
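A hedged Python sketch of the set-covering formulation: individuals are bit strings over candidate sequences, and the fitness penalizes uncovered transitions alongside suite length (the toy data and penalty weight are our assumptions).

```python
import random

# Toy instance: each candidate sequence covers some transitions.
SEQUENCES = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"},
             "s3": {"t3", "t4"}, "s4": {"t1", "t4"}}
ALL_TRANSITIONS = {"t1", "t2", "t3", "t4"}
NAMES = sorted(SEQUENCES)

def fitness(bits):
    chosen = [NAMES[i] for i, b in enumerate(bits) if b]
    covered = set().union(*(SEQUENCES[n] for n in chosen)) if chosen else set()
    missing = len(ALL_TRANSITIONS - covered)
    return len(chosen) + 10 * missing  # minimize: length + coverage penalty

def ga(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in NAMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        kids = []
        while len(kids) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(NAMES))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # bit-flip mutation
                child[random.randrange(len(NAMES))] ^= 1
            kids.append(child)
        pop = elite + kids
    return min(pop, key=fitness)

best = ga()
print("selected sequences:", [NAMES[i] for i, b in enumerate(best) if b])
```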
The dominant error source of mobile terminal location in wireless sensor networks (WSNs) is the non-line-of-sight (NLOS) propagation error. Among the algorithms proposed to mitigate the influence of NLOS propagation error, the residual test (RT) is an efficient one, but it has high computational complexity (CC). An improved algorithm that memorizes the line-of-sight (LOS) range measurements (RMs) already identified, the memorize-LOS-identified residual test (MLSI-RT), is presented in this paper to address this problem. The MLSI-RT is based on the assumption that when all RMs come from LOS propagation, the normalized residual follows the central Chi-square distribution, while for NLOS cases it is non-central. This approach can reduce the CC by more than 90%.
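The decision rule underlying such residual tests can be sketched as follows (our simplification in Python; the degrees of freedom and significance level depend on the measurement set):

```python
from scipy.stats import chi2

def residual_test(normalized_residual, dof, alpha=0.05):
    """Decide whether a set of range measurements is all-LOS: under the
    all-LOS hypothesis the normalized residual is (approximately) central
    chi-square with `dof` degrees of freedom, so exceeding the (1 - alpha)
    quantile suggests NLOS contamination."""
    threshold = chi2.ppf(1.0 - alpha, dof)
    return normalized_residual <= threshold  # True: accept all-LOS

print(residual_test(3.2, dof=2))   # plausible LOS residual
print(residual_test(25.0, dof=2))  # likely NLOS-contaminated
```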
Regression testing (RT) is an essential but expensive activity in software development. RT confirms that new faults/errors have not been introduced into the modified program. RT efficiency can be improved through an effective technique that selects only those modified test cases appropriate to the modifications, within the given time frame. Several test case selection approaches have been introduced earlier, but these techniques either were not sufficient according to the requirements of software testing experts or are ineffective and cannot be used with the available test suite specifications and architecture. To address these limitations, we recommend an improved and efficient test case selection (TCS) algorithm for RT. Our proposed technique decreases the execution time and the redundancy of duplicate test cases (TC) and detects only the modified changes appropriate to the modifications in test cases. To reduce the execution time for TCS, the evaluation of our proposed approach is based on fault detection, redundancy, and already-executed test cases. Results indicate that the proposed technique decreases the overall testing time of TCS for executing modified test cases, on average, relative to the Hybrid Whale Algorithm (HWOA), a progressive TCS approach in regression testing for a single product.
Test Case Prioritization (TCP) techniques perform better than other regression test optimization techniques, including Test Suite Reduction (TSR) and Test Case Selection (TCS). Many TCP techniques are available, and their performance is usually measured through the metric Average Percentage of Fault Detection (APFD). This metric is value-neutral: it only works well when all test cases have the same cost and all faults have the same severity. Using APFD to evaluate test case orders where test case costs or fault severities vary is prone to produce false results. Therefore, using the right metric for the performance evaluation of TCP techniques is very important for getting reliable and correct results. In this paper, two value-based TCP techniques are introduced using a Genetic Algorithm (GA): Value-Cognizant Fault Detection-Based TCP (VCFDB-TCP) and Value-Cognizant Requirements Coverage-Based TCP (VCRCB-TCP). Two novel value-based performance evaluation metrics are also introduced for value-based TCP: Average Percentage of Fault Detection per value (APFDv) and Average Percentage of Requirements Coverage per value (APRCv). Two case studies are performed to validate the proposed techniques and performance evaluation metrics. The proposed GA-based techniques outperformed the existing state-of-the-art TCP techniques, including Original Order (OO), Reverse Order (REV-O), Random Order (RO), and the Greedy algorithm.
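For reference, the standard value-neutral APFD that the proposed metrics generalize is defined as:

```latex
\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n\,m} + \frac{1}{2n},
```

where n is the number of test cases in the order, m the number of faults, and TF_i the position of the first test case that reveals fault i. The APFDv and APRCv metrics introduced in the paper extend this idea to account for varying test case costs and fault severities.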
Regression testing is a widely used approach to confirm the correct functionality of software in incremental development. The use of test cases makes it easier to test the ripple effect of changed requirements. Rigorous testing may help in meeting the quality criteria based on conformance to the requirements as given by the intended stakeholders. However, a minimized and prioritized set of test cases may reduce the effort and time required for testing while focusing on the timely delivery of the software application. In this research, a technique named TestReduce is presented to obtain a minimal set of high-priority test cases, to ensure that the web application meets the required quality criteria. TestReduce uses a blend of a genetic algorithm to find an optimized and minimal set of test cases. The ultimate objective of this study is to provide a technique that may solve the minimization problem of regression test cases in the case of linked requirements. In this research, the 100-Dollar prioritization approach is used to define the priority of the new requirements.
The automatic generation of test data is a key step in realizing automated testing. Most automated testing tools for unit testing only provide test case execution drivers and cannot generate test data that meets coverage requirements. This paper presents an improved Whale Genetic Algorithm for generating the test data required for MC/DC coverage in unit testing. The proposed algorithm introduces an elite retention strategy to prevent the genetic algorithm from falling into iterative degradation. At the same time, the mutation threshold of the whale algorithm is introduced to balance the global exploration and local search capabilities of the genetic algorithm. The threshold is dynamically adjusted according to the diversity and evolution stage of the current population, which positively guides the evolution of the population. Finally, an improved crossover strategy is proposed to accelerate the convergence of the algorithm. The improved whale genetic algorithm is compared with the genetic algorithm, the whale algorithm, and the particle swarm algorithm on two benchmark programs. The results show that the proposed algorithm generates test data faster than the comparison methods, provides better coverage with fewer evaluations, and has great advantages in generating test data.
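A minimal sketch of one way such a diversity-driven threshold could be adjusted; the diversity measure, bounds, and schedule below are our assumptions, not the paper's update rule.

```python
def update_mutation_threshold(population, generation, max_gen,
                              lo=0.1, hi=0.9):
    """Raise the whale-style mutation threshold when the population is
    diverse (favor local search), lower it when diversity collapses or
    evolution is late (favor global exploration)."""
    distinct = len({tuple(ind) for ind in population})
    diversity = distinct / len(population)  # in (0, 1]
    progress = generation / max_gen         # in [0, 1]
    return lo + (hi - lo) * diversity * (1.0 - progress)

pop = [[1, 2], [1, 2], [3, 4], [5, 6]]
print(update_mutation_threshold(pop, generation=10, max_gen=100))
```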
Much of our daily work has been computerized, with machines and sensors communicating with each other in real time. With many sensors producing large amounts of data, there is a reasonable risk that something could go wrong. Combinatorial testing (CT) can be used in this case to reduce risks and ensure conformance to specifications. Numerous existing metaheuristic-based solutions aim to assist test suite generation for combinatorial testing, also known as t-way testing (where t indicates the interaction strength), viewed as an optimization problem. Much previous research, while helpful, only investigated small interaction strengths, up to t = 6. For lightweight applications, research has demonstrated good fault-finding ability. However, the number of interaction strengths considered must be higher in the case of interactions that generate large amounts of data. Due to resource restrictions and the combinatorial explosion challenge, little work has been done to produce high-order interaction strengths. In this context, the Whale Optimization Algorithm (WOA) is proposed to generate high-order interaction strengths. To ensure that the WOA overcomes premature convergence and avoids local optima in large search spaces (owing to high-order interaction), three variants of the WOA have been developed: the Structurally Modified Whale Optimization Algorithm (SWOA), the Tolerance Whale Optimization Algorithm (TWOA), and the Tolerance Structurally Modified Whale Optimization Algorithm (TSWOA). Our experiments show that the third strategy gives the best performance and is comparable to existing state-of-the-art strategies.
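To ground the t-way notion, here is a small generic Python helper that measures how many t-way value combinations a suite covers (plain CT bookkeeping, unrelated to the WOA variants themselves):

```python
from itertools import combinations

def tway_coverage(tests, num_params, values, t):
    """Fraction of all t-way value combinations covered by `tests`,
    where each test is a tuple assigning one value per parameter."""
    covered = set()
    for test in tests:
        for cols in combinations(range(num_params), t):
            covered.add((cols, tuple(test[c] for c in cols)))
    total = len(list(combinations(range(num_params), t))) * len(values) ** t
    return len(covered) / total

# A classic pairwise (t = 2) covering array for 3 binary parameters.
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(tway_coverage(tests, num_params=3, values=(0, 1), t=2))  # -> 1.0
```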
Many search-based algorithms have been successfully applied in several software engineering activities. Genetic algorithms (GAs) are the most used by scholars in scientific domains to solve software testing problems. They imitate the theory of natural selection and evolution. The harmony search algorithm (HSA) is one of the most recent search algorithms; it imitates the behavior of a musician finding the best harmony. Scholars have estimated the similarities and differences between genetic algorithms and the harmony search algorithm in diverse research domains. The test data generation process represents a critical task in software validation. Unfortunately, no prior work compares the performance of genetic algorithms and the harmony search algorithm in the test data generation process. This paper studies the similarities and differences between genetic algorithms and the harmony search algorithm based on the ability and speed of finding the required test data. The current research performs an empirical comparison of the HSA and the GAs, and then the significance of the results is estimated using the t-test. The study investigates the efficiency of the harmony search algorithm and the genetic algorithms according to (1) the time performance, (2) the significance of the generated test data, and (3) the adequacy of the generated test data to satisfy a given testing criterion. The results showed that the harmony search algorithm is significantly faster than the genetic algorithms, because the t-test showed that the p-value of the time values is 0.026 < α (where α is the significance level, 0.05 at a 95% confidence level). In contrast, there is no significant difference between the two algorithms in generating adequate test data, because the t-test showed that the p-value of the fitness values is 0.25 > α.
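The significance testing described can be reproduced with a standard two-sample t-test; the sketch below uses made-up timing samples and SciPy's ttest_ind (equal variances assumed by default):

```python
from scipy.stats import ttest_ind

# Hypothetical run times (seconds) for generating adequate test data.
hsa_times = [1.2, 0.9, 1.1, 1.0, 1.3, 0.8]
ga_times = [1.6, 1.8, 1.5, 1.9, 1.4, 1.7]

stat, p_value = ttest_ind(hsa_times, ga_times)
alpha = 0.05
if p_value < alpha:
    print(f"significant difference (p = {p_value:.3f})")
else:
    print(f"no significant difference (p = {p_value:.3f})")
```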