Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to apply test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being tied to a specific compiler state and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) value by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared to the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
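A minimal sketch of the cluster-and-select step described in the abstract above, assuming each test case has already been mapped to a code-representation vector and a bug-likelihood score by the two downstream models; the clustering algorithm (KMeans), feature dimension, cluster count, and data are illustrative assumptions, not C-CORE's actual configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(embeddings, bug_scores, n_clusters=8, seed=0):
    """Cluster code-representation vectors and keep the highest-scoring
    test case from each cluster as its high-quality representative."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(embeddings)
    chosen = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if members.size:
            # keep the member with the highest predicted bug likelihood
            chosen.append(members[np.argmax(bug_scores[members])])
    # run the chosen cases first, most suspicious ones earliest
    return sorted(chosen, key=lambda i: -bug_scores[i])

# illustrative data: 200 fuzzer-generated test cases, 128-dimensional representations
rng = np.random.default_rng(0)
representations = rng.normal(size=(200, 128))
scores = rng.random(200)
print(select_representatives(representations, scores)[:10])
```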
Test case prioritization (TCP) is an efficient approach to improving regression testing activities. With the continuous growth of industrial testing requirements, traditional single-objective TCP is greatly limited, and multi-objective test case prioritization (MOTCP) has become one of the hot topics in software testing in recent years. Considering the problems of traditional genetic algorithms (GA) and swarm intelligence algorithms in solving MOTCP problems, such as quickly falling into local optima and weak algorithm stability, a MOTCP algorithm based on multi-population cooperative particle swarm optimization (MPPSO) is proposed in this paper. Empirical studies were conducted to study the influence of the number of iterations on the proposed MOTCP algorithm, and to compare MOTCP based on single-population particle swarm optimization (PSO) and MOTCP based on the non-dominated sorting genetic algorithm II (NSGA-II) with the proposed algorithm. The experimental results show that the TCP algorithm based on MPPSO has stronger global optimization ability, is less prone to falling into local optima, and solves the MOTCP problem better than TCP based on single-population PSO and NSGA-II.
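The paper's algorithm is multi-objective; as a hedged, single-objective simplification of the multi-population cooperation idea only, the sketch below evolves two swarms of priority vectors with different inertia weights and periodically migrates each swarm's best particle into its neighbor. All parameter values are assumptions.

```python
import numpy as np

def decode(position):
    """A particle is a vector of continuous priorities; the test order is
    obtained by sorting test indices by descending priority."""
    return np.argsort(-position)

def mppso_order(n_tests, fitness, n_swarms=2, swarm_size=20, iters=100,
                migrate_every=10, seed=0):
    """Maximize fitness(order), e.g. APFD or a coverage rate, with several
    cooperating swarms that periodically exchange their best particles."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_swarms, swarm_size, n_tests))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([[fitness(decode(p)) for p in swarm] for swarm in pos])
    for it in range(iters):
        for s in range(n_swarms):
            # each sub-population uses its own inertia weight to diversify the search
            w = 0.9 - 0.5 * it / iters if s == 0 else 0.4
            gbest = pbest[s][np.argmax(pbest_f[s])]
            r1, r2 = rng.random((2, swarm_size, n_tests))
            vel[s] = w * vel[s] + 1.5 * r1 * (pbest[s] - pos[s]) + 1.5 * r2 * (gbest - pos[s])
            pos[s] += vel[s]
            f = np.array([fitness(decode(p)) for p in pos[s]])
            better = f > pbest_f[s]
            pbest[s][better], pbest_f[s][better] = pos[s][better], f[better]
        if n_swarms > 1 and it % migrate_every == 0:
            # cooperation: each swarm's best particle replaces the next swarm's worst
            for s in range(n_swarms):
                t = (s + 1) % n_swarms
                worst, best = np.argmin(pbest_f[t]), np.argmax(pbest_f[s])
                pbest[t][worst], pbest_f[t][worst] = pbest[s][best], pbest_f[s][best]
    s, i = np.unravel_index(np.argmax(pbest_f), pbest_f.shape)
    return decode(pbest[s][i])
```

Here `fitness` is any callable mapping an ordering to a score to be maximized; the real MOTCP setting evaluates several objectives and keeps a Pareto front rather than a single best particle.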
Test Case Prioritization (TCP) techniques perform better than other regression test optimization techniques, including Test Suite Reduction (TSR) and Test Case Selection (TCS). Many TCP techniques are available, and their performance is usually measured through the Average Percentage of Fault Detection (APFD) metric. This metric is value-neutral: it only works well when all test cases have the same cost and all faults have the same severity. Using APFD to evaluate test case orders where test case costs or fault severities vary is prone to produce false results. Therefore, using the right metric for performance evaluation of TCP techniques is very important to obtain reliable and correct results. In this paper, two value-based TCP techniques are introduced using a Genetic Algorithm (GA): Value-Cognizant Fault Detection-Based TCP (VCFDB-TCP) and Value-Cognizant Requirements Coverage-Based TCP (VCRCB-TCP). Two novel value-based performance evaluation metrics are also introduced for value-based TCP: Average Percentage of Fault Detection per value (APFDv) and Average Percentage of Requirements Coverage per value (APRCv). Two case studies are performed to validate the proposed techniques and performance evaluation metrics. The proposed GA-based techniques outperformed existing state-of-the-art TCP techniques, including Original Order (OO), Reverse Order (REV-O), Random Order (RO), and the Greedy algorithm.
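For reference, a minimal sketch of the standard value-neutral APFD computation that the abstract above criticizes; the fault matrix and ordering are illustrative:

```python
def apfd(order, detects):
    """Standard (value-neutral) APFD of a test ordering.

    order   -- test ids in execution order
    detects -- detects[t]: set of fault ids revealed by test t
    Assumes every fault is revealed by at least one test in the order.
    """
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    tf = {}  # TF_i: 1-based position of the first test revealing fault i
    for pos, t in enumerate(order, start=1):
        for f in detects.get(t, ()):
            tf.setdefault(f, pos)
    return 1 - sum(tf[f] for f in faults) / (n * m) + 1 / (2 * n)

# illustrative example: 4 tests, 3 faults, all of equal cost and severity
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1", "f3"}}
print(apfd(["t2", "t4", "t1", "t3"], detects))  # about 0.79
```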
Software needs modifications and requires revisions regularly. Owing to these revisions, retesting the software becomes essential to ensure that the enhancements made have not affected its bug-free functioning. The time and cost incurred in this process need to be reduced through test case selection and prioritization. Many nature-inspired techniques have been applied in this area, and African Buffalo Optimization is one such approach, applied here to regression test selection and prioritization. This paper explains and demonstrates the applicability of the African Buffalo Optimization approach to test case selection and prioritization. The proposed algorithm converges in polynomial time (O(n^2)). The empirical evaluation of applying African Buffalo Optimization to test case prioritization is carried out on a sample data set with multiple iterations. An astounding 62.5% drop in size and a 48.57% drop in the runtime of the original test suite were recorded. The obtained results are compared with Ant Colony Optimization. The comparative analysis indicates that African Buffalo Optimization and Ant Colony Optimization exhibit similar fault detection capabilities (80%), along with a reduction in the overall execution time and size of the resultant test suite. The results and analysis therefore advocate and encourage the use of African Buffalo Optimization in the area of test case selection and prioritization.
Both unit and integration testing are crucial for almost any software application because each of them exercises a distinct process to examine the product. Due to resource constraints, when software is subjected to modifications, the drastic increase in the number of test cases forces testers to opt for a test optimization strategy. One such strategy is test case prioritization (TCP). Existing works have propounded various methodologies that re-order the system-level test cases with the intention of boosting either the fault detection capability or the coverage efficacy as early as possible. Nonetheless, singularity in objective functions and the lack of dissimilitude among the re-ordered test sequences have degraded the cogency of these approaches. Considering such gaps, and scenarios in which meteoric and continuous updates to the software make the intensive unit and integration testing process more fragile, this study introduces a memetics-inspired methodology for TCP. The proposed structure is first embedded with diverse parameters, and then the traditional steps of the shuffled frog leaping approach (SFLA) are followed to prioritize the test cases at the unit and integration levels. On 5 standard test functions, a comparative analysis is conducted between established algorithms and the proposed approach, where the latter enhances the coverage rate and fault detection of the re-ordered test sets. Investigation results for the mean average percentage of fault detection (APFD) confirm that the proposed approach exceeds the memetic, basic multi-walk, PSO, and optimized multi-walk approaches by 21.7%, 13.99%, 12.24%, and 11.51%, respectively.
Generally, software testing is considered a proficient technique to improve the quality and reliability of software. However, the quality of test cases has a considerable influence on the fault-revealing capability of the software testing activity. Test Case Prioritization (TCP) remains a challenging issue, since prioritizing test cases is often unsatisfactory in terms of the Average Percentage of Faults Detected (APFD) and the time spent on execution. TCP is mainly intended to design a collection of test cases that can accomplish early optimization using preferred characteristics. Earlier studies focused on prioritizing the available test cases to accelerate the fault detection rate during software testing. In this context, the current study designs a Modified Harris Hawks Optimization based TCP (MHHO-TCP) technique for software testing. The aim of the proposed MHHO-TCP technique is to maximize APFD and minimize the overall execution time. In addition, the MHHO algorithm is designed to boost the exploration and exploitation abilities of the conventional HHO algorithm. In order to validate the enhanced efficiency of the MHHO-TCP technique, a wide range of simulations was conducted on different benchmark programs and the results were examined from several aspects. The experimental outcomes highlight the improved efficiency of the MHHO-TCP technique over recent approaches under different measures.
To solve the problem of time-aware test case prioritization, a hybrid algorithm composed of integer linear programming and the genetic algorithm (ILP-GA) is proposed. First, the test case suite that maximizes the number of covered program entities while satisfying the time constraint is selected by integer linear programming. Secondly, individuals are encoded according to the entity coverage matrices, the coverage rate of program entities is used as the fitness function, and the genetic algorithm is used to prioritize the selected test cases. Five typical open source projects are selected as benchmark programs, branch and method are selected as program entities, and the time constraint percentages are 25% and 75%. The experimental results show that ILP-GA converges faster and more stably than ILP-additional and ILP-total in most cases, which contributes to detecting software defects as early as possible and reduces software testing costs.
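A hedged sketch of the first stage described above: selecting, within a time budget, the test cases that maximize the number of covered entities. The PuLP library is my choice of ILP solver here, and the variable names and toy data are assumptions, not the paper's setup:

```python
import pulp

def ilp_select(cover, cost, budget):
    """Select tests that maximize the number of covered entities within a time budget.

    cover[t] -- set of program entities (branches/methods) covered by test t
    cost[t]  -- execution time of test t
    """
    tests = list(cover)
    entities = sorted(set().union(*cover.values()))
    prob = pulp.LpProblem("time_aware_selection", pulp.LpMaximize)
    x = {t: pulp.LpVariable(f"x_{t}", cat="Binary") for t in tests}     # test selected?
    y = {e: pulp.LpVariable(f"y_{e}", cat="Binary") for e in entities}  # entity covered?
    prob += pulp.lpSum(y.values())                                      # objective: covered entities
    prob += pulp.lpSum(cost[t] * x[t] for t in tests) <= budget         # time constraint
    for e in entities:
        # an entity only counts as covered if some selected test covers it
        prob += y[e] <= pulp.lpSum(x[t] for t in tests if e in cover[t])
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [t for t in tests if x[t].value() == 1]

# toy example with a 25% time budget
cover = {"t1": {"b1", "b2"}, "t2": {"b2", "b3"}, "t3": {"b4"}, "t4": {"b1", "b3", "b4"}}
cost = {"t1": 3, "t2": 2, "t3": 1, "t4": 5}
print(ilp_select(cover, cost, budget=0.25 * sum(cost.values())))
```

The selected subset would then be handed to the GA stage for ordering.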
The supreme goal of automatic test case selection techniques is to guarantee systematic coverage, to recognize the usual error forms, and to lessen testing redundancy. It is unfeasible to execute all the test cases consistently; for this reason, the test cases are selected and prioritized. The major goal of test case prioritization is to order the test case sequence so that faults are found as early as possible, thereby improving efficiency. Regression testing is used to ensure the validity of the changed software and its enhancement part. In this paper, we propose a new path compression technique (PCUA) for both the old version and the new version of a BPEL dataset. In order to analyze the enhancement part of an application and to find errors in it, the center of the tree is calculated. Moreover, in the comparative analysis, our proposed PCUA-COT technique is compared with the existing XPFG technique in terms of time consumption and error detection in the paths of the enhancement part of the BPEL dataset. The experimental results show that our proposed work is better than the existing technique in terms of time consumption and error detection.
Regression testing is a widely used approach to confirm the correct functionality of software in incremental development. The use of test cases makes it easier to test the ripple effect of changed requirements. Rigorous testing may help in meeting the quality criteria based on conformance to the requirements given by the intended stakeholders. However, a minimized and prioritized set of test cases may reduce the effort and time required for testing while focusing on the timely delivery of the software application. In this research, a technique named TestReduce is presented to obtain a minimal set of high-priority test cases and ensure that the web application meets the required quality criteria. TestReduce blends in a genetic algorithm to find an optimized and minimal set of test cases. The ultimate objective of this study is to provide a technique that may solve the minimization problem of regression test cases in the case of linked requirements. In this research, the 100-Dollar prioritization approach is used to define the priority of the new requirements.
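As an illustration of how the 100-Dollar (cumulative voting) approach mentioned above can feed a test order: each stakeholder distributes 100 points across requirements, a requirement's priority is the summed points, and test cases are ranked by the total priority of the requirements they cover. The data, names, and ranking rule below are assumptions for illustration, not the paper's TestReduce algorithm:

```python
def requirement_priorities(allocations):
    """100-Dollar (cumulative voting) prioritization: each stakeholder
    distributes exactly 100 points over the requirements; a requirement's
    priority is the total it receives."""
    priority = {}
    for spent in allocations.values():
        assert abs(sum(spent.values()) - 100) < 1e-9, "each stakeholder spends exactly 100"
        for req, dollars in spent.items():
            priority[req] = priority.get(req, 0) + dollars
    return priority

def order_tests(test_covers, priority):
    """Rank test cases by the summed priority of the requirements they exercise."""
    return sorted(test_covers, key=lambda t: -sum(priority.get(r, 0) for r in test_covers[t]))

allocations = {"s1": {"R1": 60, "R2": 30, "R3": 10},
               "s2": {"R1": 20, "R2": 50, "R3": 30}}
test_covers = {"TC1": {"R3"}, "TC2": {"R1", "R2"}, "TC3": {"R2"}}
print(order_tests(test_covers, requirement_priorities(allocations)))  # TC2 first
```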
An approach to generating and optimizing test cases is proposed for Web application testing based on user sessions, using a genetic algorithm. A large volume of meaningful user sessions is obtained after purging irrelevant information by analyzing user logs on the Web server. Most of the redundant user sessions are also removed by a reduction process. For test reuse and test concurrency, the approach divides the obtained user sessions into different groups, each of which is called a test suite, and then prioritizes the test suites and the test cases within each suite. Thus, the initial test suites and test cases, and their initial execution sequences, are obtained. However, the test scheme generated by this elementary prioritization is not very close to the best one. Therefore, a genetic algorithm is employed to optimize the results of grouping and prioritization. Meanwhile, an approach to generating new test cases using crossover is presented. The new test cases can detect faults caused by the use of potentially conflicting data shared by different users.
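A hedged sketch of the crossover idea in the last sentence: treating each user session as a sequence of recorded requests, a single-point crossover splices two sessions so that the children mix data from different users. The session format and split rule are assumptions for illustration:

```python
import random

def session_crossover(session_a, session_b, rng=random):
    """Single-point crossover of two user sessions (lists of recorded requests).
    Children splice requests from different users, which can expose faults
    caused by conflicting data shared between those users."""
    cut_a = rng.randint(1, len(session_a) - 1)
    cut_b = rng.randint(1, len(session_b) - 1)
    return (session_a[:cut_a] + session_b[cut_b:],
            session_b[:cut_b] + session_a[cut_a:])

alice = ["GET /login?user=alice", "POST /cart/add?item=1", "POST /checkout"]
bob = ["GET /login?user=bob", "DELETE /cart/item/1", "GET /logout"]
random.seed(1)
print(session_crossover(alice, bob))
```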
By analyzing the average percentage of faults detected (APFD) metric and its variants, which are widely utilized to evaluate the fault detection efficiency of a test suite, this paper points out some limitations of the APFD series of metrics. These limitations include inaccurate physical interpretations and an inability to precisely describe the process of fault detection. To avoid the limitations of existing metrics, this paper proposes two improved metrics for evaluating the fault detection efficiency of a test suite: relative-APFD and relative-APFDc. The proposed metrics account for both the speed of fault detection and the constraint of testing resources. The case study shows that the two proposed metrics provide much more precise descriptions of the fault detection process and the fault detection efficiency of the test suite.
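As background for the relative-APFDc metric above, here is a hedged sketch of the cost-cognizant APFDc formula as commonly stated in the literature; the paper's relative variants are not reproduced here, and the data are illustrative:

```python
def apfd_c(order, detects, cost, severity):
    """Cost-cognizant APFDc as commonly stated in the literature.

    order    -- test ids in execution order
    detects  -- detects[t]: set of fault ids revealed by test t
    cost     -- cost[t]: execution cost of test t
    severity -- severity[f]: severity of fault f
    Assumes every fault is revealed by some test in the order.
    """
    tf = {}  # 1-based position of the first test revealing each fault
    for pos, t in enumerate(order, start=1):
        for f in detects.get(t, ()):
            tf.setdefault(f, pos)
    total_cost = sum(cost[t] for t in order)
    total_sev = sum(severity.values())
    num = 0.0
    for f, sev in severity.items():
        i = tf[f]
        tail = sum(cost[order[k]] for k in range(i - 1, len(order)))
        num += sev * (tail - 0.5 * cost[order[i - 1]])
    return num / (total_cost * total_sev)

detects = {"t1": {"f1"}, "t2": {"f2"}, "t3": set()}
print(apfd_c(["t2", "t1", "t3"], detects,
             cost={"t1": 2, "t2": 1, "t3": 4}, severity={"f1": 3, "f2": 1}))
```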
Test-case prioritization, proposed at the end of the last century, aims to schedule the execution order of test cases so as to improve test effectiveness. In the past years, test-case prioritization has gained much attention and has made significant achievements in five aspects: prioritization algorithms, coverage criteria, measurement, practical concerns involved, and application scenarios. In this article, we first review the achievements of test-case prioritization from these five aspects and then give our perspectives on its challenges.
Test case prioritization techniques schedule the execution order of test cases to attain a respective target, such as an enhanced rate of fault detection. Prioritization can be viewed as deriving an ordering relation on a given set of test cases resulting from regression testing. Alterations of programs between versions can cause test cases to respond differently in subsequent versions of the software. Here, a fixed approach to prioritizing test cases avoids the preceding drawbacks. The JUnit test case prioritization techniques operating in the absence of coverage information differ from existing dynamic coverage-based test case prioritization techniques. Further, the prioritization of test cases that relies on coverage information is projected from static structures rather than gathered through instrumentation and execution.
Software systems have become complex and challenging to develop and maintain because of the large size of test cases with increased scalability issues. Test case prioritization methods have been successfully utilized in test case management. However, the prohibitively exorbitant cost of large test cases is now mainstream in the software industry. The growth of agile test-driven development has increased the expectations for software quality. Yet, our knowledge of when to use various path testing criteria for cost-effectiveness is inadequate due to the inherent complexity of software testing. Existing research attempted to address the issue without effectively tackling the scalability of large test suites to reduce time in regression testing. In order to provide a more accurate way of fault detection in software projects, we introduce a novel coverage criterion, called Incremental Cluster-based test case Prioritization (ICP), and investigate its potential by making a comparative evaluation with three un-clustered traditional coverage-based criteria: Prime-Path Coverage (PPC), Edge-Pair Coverage (EPC), and Edge Coverage (EC), based on mutation analysis. By clustering test suites based on their dynamic run-time behavior, the number of pair-wise comparisons is reduced significantly. For comparison, we analyzed 20 functions from 25 C programs, instrumented faults into the programs, and used the Mull mutation tool to generate mutants and perform a statistical analysis of the results. The experimental results show that ICP can lead to cost-effective improvements in fault detection.
Mobile applications usually can access only a limited amount of memory. Improper use of memory can cause memory leaks, which may lead to performance slowdowns or even cause applications to be unexpectedly killed. Although a large body of research has been devoted to memory leak diagnosing techniques after leaks have been discovered, it is still challenging to discover the memory leak phenomena in the first place. Testing is the most widely used technique for failure discovery. However, traditional testing techniques are not directed at the discovery of memory leaks; they may spend a lot of time testing executions that are unlikely to leak and can therefore be inefficient. To address this problem, we propose a novel approach to prioritize the test cases in a given test suite according to their likelihood of causing memory leaks. It first builds a prediction model, using machine learning on selected code features, to determine whether each test can potentially lead to memory leaks. Then, for each input test case, we partly run it to obtain its code features and predict its likelihood of causing leaks. The most suspicious test cases are suggested to run first in order to reveal memory leak faults as soon as possible. Experimental evaluation on several Android applications shows that our approach is effective.
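A hedged sketch of the general workflow described above, not the paper's actual feature set or model: train a classifier on code features of tests with known leak/no-leak labels, then run the remaining tests in descending order of predicted leak probability:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def prioritize_by_leak_risk(train_features, train_leaked, candidate_features, seed=0):
    """Order candidate test cases by predicted likelihood of causing a memory leak."""
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(train_features, train_leaked)
    leak_prob = model.predict_proba(candidate_features)[:, 1]  # P(leak) per candidate
    return np.argsort(-leak_prob), leak_prob

# illustrative data: 6 code features per test (e.g. counts of bitmap, cursor, listener usages)
rng = np.random.default_rng(0)
X_train, y_train = rng.random((80, 6)), rng.integers(0, 2, 80)
X_new = rng.random((10, 6))
order, prob = prioritize_by_leak_risk(X_train, y_train, X_new)
print(order, prob[order])  # most leak-prone test cases first
```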
This study introduces a dual-population genetic algorithm into test case prioritization to address premature convergence and unstable final solution quality in single-population approaches. By constructing highly diverse initial solutions and co-evolving two populations with different control parameters, the solution search space is enlarged and the risk of the algorithm falling into local optima is reduced. At the same time, a weighted average method coverage is used as the fitness function, and Boltzmann selection is employed to adaptively vary the selection pressure across different evolutionary stages, with the aim of accelerating convergence in the later phase of the algorithm. Comparative experiments on Defects4J, a data set with real faults, show that the proposed algorithm outperforms the single-population genetic algorithm in terms of the average percentage of fault detection (APFD), and that this performance improvement is statistically significant.
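A small sketch of the Boltzmann selection step mentioned above: the selection probability of an individual is proportional to exp(fitness / T), and the temperature T is lowered over generations so that selection pressure rises late in the run. The cooling schedule and numbers below are assumptions for illustration:

```python
import numpy as np

def boltzmann_select(fitness, generation, max_generations, t0=2.0, t_min=0.2, rng=None):
    """Pick one parent index with probability proportional to exp(fitness / T);
    the temperature T is cooled over generations, raising selection pressure late on."""
    if rng is None:
        rng = np.random.default_rng()
    # linear cooling schedule (illustrative; other schedules work as well)
    temperature = t0 - (t0 - t_min) * generation / max_generations
    weights = np.exp((fitness - np.max(fitness)) / temperature)  # shift for numerical stability
    return rng.choice(len(fitness), p=weights / weights.sum())

fitness = np.array([0.61, 0.74, 0.58, 0.90])  # e.g. APFD of each ordering in the population
early = [boltzmann_select(fitness, 1, 100, rng=np.random.default_rng(0)) for _ in range(5)]
late = [boltzmann_select(fitness, 95, 100, rng=np.random.default_rng(0)) for _ in range(5)]
print(early, late)  # late-generation picks concentrate on the fittest individual
```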