Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by inputting random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to utilize test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as the high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities, and C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the Average Percentage of Faults Detected (APFD) value by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. When compared to the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
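The cluster-then-select step described in this abstract can be sketched as follows. This is a minimal illustration under assumptions: the code representations and bug-likelihood scores are taken as given (in the paper they come from the two downstream models), and KMeans with a fixed cluster count stands in for whatever clustering the authors actually use.

```python
# Minimal sketch of clustering test cases by code representation and keeping the
# highest-scoring case per cluster. The embeddings and bug scores are assumed inputs;
# KMeans and the cluster count are assumptions, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(embeddings: np.ndarray, bug_scores: np.ndarray, n_clusters: int = 50):
    """Cluster test cases by representation; keep the highest-scoring case per cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    selected = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if members.size:
            selected.append(members[np.argmax(bug_scores[members])])
    # Order the selected cases by descending predicted bug likelihood.
    return sorted(selected, key=lambda i: -bug_scores[i])

# Example: 500 fuzzer-generated test cases with 256-dimensional representations.
rng = np.random.default_rng(0)
prioritized = select_representatives(rng.normal(size=(500, 256)), rng.random(500))
```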
In software testing, the quality of test cases is crucial, but manual generation is time-consuming. Various automatic test case generation methods exist, requiring careful selection based on program features. Current evaluation methods compare a limited set of metrics, which does not support a larger number of metrics or consider the relative importance of each metric to the final assessment. To address this, we propose an evaluation tool, the Test Case Generation Evaluator (TCGE), based on the learning to rank (L2R) algorithm. Unlike previous approaches, our method comprehensively evaluates algorithms by considering multiple metrics, resulting in a more reasoned assessment. The main principle of the TCGE is the formation of feature vectors that are of concern to the tester. Through training, the feature vectors are sorted to generate a list, with the order of the methods on the list determined according to their effectiveness on the tested assembly. We implement TCGE using three L2R algorithms: Listnet, LambdaMART, and RFLambdaMART. Evaluation employs a dataset with features of classical test case generation algorithms and three metrics: Normalized Discounted Cumulative Gain (NDCG), Mean Average Precision (MAP), and Mean Reciprocal Rank (MRR). Results demonstrate the TCGE's superior effectiveness in evaluating test case generation algorithms compared to other methods. Among the three L2R algorithms, RFLambdaMART proves the most effective, achieving an accuracy above 96.5%, surpassing LambdaMART by 2% and Listnet by 1.5%. Consequently, the TCGE framework exhibits significant application value in the evaluation of test case generation algorithms.
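For readers unfamiliar with the ranking metrics named above, the sketch below shows how NDCG, one of the three evaluation metrics, is computed for a ranked list of methods. The relevance grades are invented for illustration; this is not the TCGE code or its dataset.

```python
# Illustrative NDCG computation for a ranked list; the relevance grades are invented.
import math

def dcg(relevances):
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# A ranker placed five methods in this order; graded effectiveness 3 (best) to 0.
print(round(ndcg([3, 2, 3, 0, 1]), 3))
```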
Software needs modifications and requires revisions regularly. Owing to these revisions, retesting software becomes essential to ensure that the enhancements made have not affected its bug-free functioning. The time and cost incurred in this process need to be reduced by the methods of test case selection and prioritization. It is observed that many nature-inspired techniques are applied in this area. African Buffalo Optimization is one such approach, applied to regression test selection and prioritization. In this paper, the proposed work explains and proves the applicability of the African Buffalo Optimization approach to test case selection and prioritization. The proposed algorithm converges in polynomial time (O(n^2)). In this paper, the empirical evaluation of applying African Buffalo Optimization for test case prioritization is done on a sample data set with multiple iterations. An astounding 62.5% drop in size and a 48.57% drop in the runtime of the original test suite were recorded. The obtained results are compared with Ant Colony Optimization. The comparative analysis indicates that African Buffalo Optimization and Ant Colony Optimization exhibit similar fault detection capabilities (80%), and a reduction in the overall execution time and size of the resultant test suite. The results and analysis, hence, advocate and encourage the use of African Buffalo Optimization in the area of test case selection and prioritization.
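As a rough illustration of how a herd-based optimizer in the style of African Buffalo Optimization can be applied to prioritization, the sketch below evolves continuous buffalo positions and decodes each position into a test order by sorting (random-key decoding). The fitness function, learning parameters, and decoding scheme are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch: buffalo positions are continuous vectors and sorting a position
# yields a test case order (random-key decoding). The fitness function, learning
# parameters lp1/lp2, and decoding are illustrative assumptions, not the paper's code.
import numpy as np

def decode(position):
    return list(np.argsort(position))  # test order implied by the position

def abo_prioritize(fitness, n_tests, herd=20, iters=100, lp1=0.6, lp2=0.4, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.random((herd, n_tests))   # exploitation ("waaa") positions
    m = np.zeros((herd, n_tests))     # exploration ("maaa") moves
    bp = w.copy()                     # each buffalo's personal best position
    bg = max(w, key=lambda x: fitness(decode(x))).copy()  # herd best position
    for _ in range(iters):
        m = m + lp1 * (bg - w) + lp2 * (bp - w)   # move toward herd and personal bests
        w = (w + m) / lam
        for i in range(herd):
            if fitness(decode(w[i])) > fitness(decode(bp[i])):
                bp[i] = w[i].copy()
        bg = max(bp, key=lambda x: fitness(decode(x))).copy()
    return decode(bg)

# Toy fitness: reward orders that run historically fault-revealing tests early.
fault_weight = np.array([3, 0, 1, 0, 2, 0, 0, 1])
print(abo_prioritize(lambda o: sum(fault_weight[t] / (r + 1) for r, t in enumerate(o)), 8))
```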
Test Case Prioritization (TCP) techniques perform better than other regression test optimization techniques, including Test Suite Reduction (TSR) and Test Case Selection (TCS). Many TCP techniques are available, and their performance is usually measured through the metric Average Percentage of Fault Detection (APFD). This metric is value-neutral because it only works well when all test cases have the same cost and all faults have the same severity. Using APFD for performance evaluation of test case orders where test case cost or fault severity varies is prone to produce false results. Therefore, using the right metric for performance evaluation of TCP techniques is very important to get reliable and correct results. In this paper, two value-based TCP techniques have been introduced using a Genetic Algorithm (GA): Value-Cognizant Fault Detection-Based TCP (VCFDB-TCP) and Value-Cognizant Requirements Coverage-Based TCP (VCRCB-TCP). Two novel value-based performance evaluation metrics are also introduced for value-based TCP: Average Percentage of Fault Detection per value (APFDv) and Average Percentage of Requirements Coverage per value (APRCv). Two case studies are performed to validate the proposed techniques and performance evaluation metrics. The proposed GA-based techniques outperformed the existing state-of-the-art TCP techniques, including Original Order (OO), Reverse Order (REV-O), Random Order (RO), and the Greedy algorithm.
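The standard value-neutral APFD referred to above can be computed as shown below; the value-cognizant variants (APFDv, APRCv) extend it with cost and severity weights, which are not reproduced here. The fault matrix in the example is invented for illustration.

```python
# Standard APFD: 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n), where TFi is the 1-based
# position of the first test case in the order that detects fault i.
def apfd(order, detects):
    """order: list of test ids; detects[t] = set of faults revealed by test t."""
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    first_pos = {f: next(i + 1 for i, t in enumerate(order) if f in detects[t]) for f in faults}
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

# Invented example: 4 tests, 3 faults.
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1"}}
print(round(apfd(["t2", "t1", "t3", "t4"], detects), 3))  # 0.792
```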
Regression testing is a widely used approach to confirm the correct functionality of the software in incremental development. The use of test cases makes it easier to test the ripple effect of changed requirements. Rigorous testing may help in meeting the quality criteria that are based on conformance to the requirements as given by the intended stakeholders. However, a minimized and prioritized set of test cases may reduce the effort and time required for testing while focusing on the timely delivery of the software application. In this research, a technique named TestReduce has been presented to get a minimal set of test cases based on high priority, to ensure that the web application meets the required quality criteria. The proposed TestReduce technique uses a genetic algorithm to find an optimized and minimal set of test cases. The ultimate objective associated with this study is to provide a technique that may solve the minimization problem of regression test cases in the case of linked requirements. In this research, the 100-Dollar prioritization approach is used to define the priority of the new requirements.
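A genetic algorithm for requirement-priority-aware test suite minimization of the kind described above might look roughly like the sketch below. The fitness function (reward covered requirement priority, penalize suite size), the bit-vector encoding, and all GA parameters are assumptions for illustration, not the TestReduce implementation.

```python
# Hypothetical GA sketch: individuals are bit vectors over the test suite; fitness
# rewards covering high-priority (100-Dollar weighted) requirements and penalizes size.
import random

def ga_minimize(covers, priority, pop=30, gens=200, seed=1):
    random.seed(seed)
    tests = list(covers)
    def fitness(bits):
        covered = set().union(*[covers[t] for t, b in zip(tests, bits) if b])
        return sum(priority[r] for r in covered) - 0.5 * sum(bits)
    population = [[random.randint(0, 1) for _ in tests] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(tests))  # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(len(tests))] ^= 1  # single-bit mutation
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return [t for t, b in zip(tests, best) if b]

# Requirement priorities from a (hypothetical) 100-Dollar allocation.
priority = {"R1": 40, "R2": 35, "R3": 15, "R4": 10}
covers = {"t1": {"R1"}, "t2": {"R2", "R3"}, "t3": {"R1", "R4"}, "t4": {"R3"}}
print(ga_minimize(covers, priority))
```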
Both unit and integration testing are incredibly crucial for almost any software application because each of them operates a distinct process to examine the product. Due to resource constraints, when software is subjected to modifications, the drastic increase in the count of test cases forces the testers to opt for a test optimization strategy. One such strategy is test case prioritization (TCP). Existing works have propounded various methodologies that re-order the system-level test cases intending to boost either the fault detection capabilities or the coverage efficacy at the earliest. Nonetheless, singularity in objective functions and the lack of dissimilitude among the re-ordered test sequences have degraded the cogency of their approaches. Considering such gaps, and scenarios in which rapid and continuous updates to the software make the intensive unit and integration testing process more fragile, this study introduces a memetics-inspired methodology for TCP. The proposed structure is first embedded with diverse parameters, and then the traditional steps of the shuffled frog-leaping approach (SFLA) are followed to prioritize the test cases at the unit and integration levels. On 5 standard test functions, a comparative analysis is conducted between the established algorithms and the proposed approach, where the latter enhances the coverage rate and fault detection of re-ordered test sets. Investigation results related to the mean Average Percentage of Fault Detection (APFD) confirmed that the proposed approach exceeds the memetic, basic multi-walk, PSO, and optimized multi-walk approaches by 21.7%, 13.99%, 12.24%, and 11.51%, respectively.
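The shuffled frog-leaping step mentioned above partitions a population of candidate solutions ("frogs") into memeplexes, improves the worst frog in each memeplex toward the local best, and then reshuffles. The sketch below shows that skeleton on a continuous encoding decoded to a test order by sorting; the fitness function and all parameters are illustrative assumptions, not the study's methodology.

```python
# Skeleton of shuffled frog-leaping (SFLA) for prioritization: frogs are continuous
# vectors decoded to test orders by argsort; fitness and parameters are assumptions.
import numpy as np

def sfla_prioritize(fitness, n_tests, frogs=20, memeplexes=4, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((frogs, n_tests))
    decode = lambda x: list(np.argsort(x))
    for _ in range(iters):
        # Rank frogs by fitness and deal them into memeplexes round-robin.
        ranked = sorted(range(frogs), key=lambda i: fitness(decode(pop[i])), reverse=True)
        for m in range(memeplexes):
            plex = ranked[m::memeplexes]
            best, worst = plex[0], plex[-1]
            # Move the worst frog toward the memeplex best; reset it if no improvement.
            candidate = pop[worst] + rng.random() * (pop[best] - pop[worst])
            if fitness(decode(candidate)) > fitness(decode(pop[worst])):
                pop[worst] = candidate
            else:
                pop[worst] = rng.random(n_tests)
    return decode(max(pop, key=lambda x: fitness(decode(x))))
```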
Despite the advances in automated vulnerability detection approaches, security vulnerabilities caused by design flaws in software systems continuously appear in real-world systems. Such security design flaws can bring unrestricted and misimplemented behaviors of a system and can lead to fatal vulnerabilities such as remote code execution or sensitive data leakage. Therefore, it is an essential task to discover unrestricted and misimplemented behaviors of a system. However, it is a daunting task for security experts to discover such vulnerabilities in advance because it is time-consuming and error-prone to analyze the whole code in detail. Also, most of the existing vulnerability detection approaches still focus on detecting memory corruption bugs because these bugs are the dominant root cause of software vulnerabilities. This paper proposes SMINER, a novel approach that discovers vulnerabilities caused by unrestricted and misimplemented behaviors. SMINER first collects unit test cases for the target system from the official repository and then preprocesses the collected code fragments. SMINER uses the pre-processed data to identify the security policies that can occur on the target system and creates test cases for security policy testing. To demonstrate the effectiveness of SMINER, this paper evaluates SMINER against the Robot Operating System (ROS), a real-world system used for intelligent robots at Amazon and for controlling satellites at the National Aeronautics and Space Administration (NASA). From the evaluation, we discovered two real-world vulnerabilities in ROS.
Intimate Partner Violence (IPV) is a form of Gender-Based Violence (GBV) in which an intimate partner perpetrates the violence. In the HIV care continuum, which aims to achieve epidemic control based on the goals defined by UNAIDS, 95% of people living with HIV (PLHIV) have to know their HIV status, 95% must have initiated ARV treatment, and 95% must be virally suppressed in order to achieve epidemic control. One of the evidence-based strategies used for achieving an optimal number of PLHIV who know their HIV status is the Index Case Testing (ICT) strategy. While the ICT strategy helps the achievement of epidemic control, its implementation increases the incidence of IPV among both serodiscordant and concordant couples. Handling information about IPV is very sensitive. A review of the literature on the management of HIV patient information has shown that shifting from paper-based management of HIV patient information to computerized Electronic Medical Records (EMR) systems, using software such as OpenMRS, has significantly improved the management of HIV patient information with high-level confidentiality of patient information. The reviews showed that the EMR systems put in place to manage HIV patient information need to integrate the stages used for the management of IPV among PLHIV.
BACKGROUND: For primary liver cancer, the key to conversion therapy depends on the effectiveness of drug treatment. Patient-derived tumor organoids have been demonstrated to improve the efficacy of conversion therapy by identifying individually targeted effective drugs, but their clinical effects in liver cancer remain unknown. CASE SUMMARY: We describe a patient with hepatocellular carcinoma (HCC) who achieved pathologic complete response (pCR) to conversion therapy guided by patient-derived organoid (PDO) drug sensitivity testing. Despite insufficiency of the remaining liver volume after hepatectomy, the patient obtained tumor reduction after treatment with the PDO-sensitive drugs and successfully underwent radical surgical resection. Postoperatively, pCR was observed. CONCLUSION: PDOs contribute to screening sensitive drugs for HCC patients to realize personalized treatment and improve conversion therapy efficacy.
BACKGROUND: Benign recurrent intrahepatic cholestasis (BRIC) is a rare autosomal recessive disorder characterized by episodes of intense pruritus, elevated serum levels of alkaline phosphatase and bilirubin, and near-normal γ-glutamyl transferase. These episodes may persist for weeks to months before spontaneously resolving, with patients typically remaining asymptomatic between occurrences. Diagnosis entails the evaluation of clinical symptoms and targeted genetic testing. Although BRIC is recognized as a benign genetic disorder, the triggers, particularly psychosocial factors, remain poorly understood. CASE SUMMARY: An 18-year-old Chinese man presented with recurrent jaundice and pruritus after a cold, which was exacerbated by self-medication involving vitamin B and paracetamol. Clinical and laboratory evaluations revealed elevated levels of bilirubin and liver enzymes in the absence of viral or autoimmune liver disease. Imaging excluded biliary and pancreatic abnormalities, and liver biopsy demonstrated centrilobular cholestasis, culminating in a BRIC diagnosis confirmed by the identification of a novel ATP8B1 gene mutation. Psychological assessment of the patient unveiled stress attributable to academic and familial pressures, regarded as potential triggers for BRIC. Initial relief was observed with ursodeoxycholic acid and cetirizine, followed by an adjustment of the treatment regimen in response to elevated liver enzymes. The patient's condition significantly improved following a stress-related episode, thanks to a comprehensive management approach that included psychosocial support and medical treatment. CONCLUSION: Our research highlights genetic and psychosocial influences on BRIC, emphasizing integrated diagnostic and management strategies.
BACKGROUND: Immunoglobulin G4-related disease (IgG4-RD) is a complex immune-mediated condition that causes fibrotic inflammation in several organs. A significant clinical feature of IgG4-RD is hypertrophic pachymeningitis, which manifests as inflammation of the dura mater in intracranial or spinal regions. Although IgG4-RD can affect multiple areas, the spine is a relatively rare site compared to the more frequent involvement of intracranial structures. CASE SUMMARY: A 70-year-old male presented to our hospital with a two-day history of fever, altered mental status, and generalized weakness. The initial brain magnetic resonance imaging (MRI) revealed multiple small infarcts across various cerebral regions. On the second day after admission, a physical examination revealed motor weakness in both lower extremities and diminished sensation in the right lower extremity. Electromyographic evaluation revealed findings consistent with acute motor sensory neuropathy. Despite initial management with intravenous immunoglobulin for presumed Guillain-Barré syndrome, the patient exhibited progressive worsening of motor deficits. On the 45th day of hospitalization, an enhanced MRI of the entire spine, focusing specifically on the thoracic 9 to lumbar 1 vertebral level, raised the suspicion of IgG4-related spinal pachymeningitis. Subsequently, the patient was administered oral prednisolone and participated in a comprehensive rehabilitation program that included gait training and lower extremity strengthening exercises. CONCLUSION: IgG4-related spinal pachymeningitis, diagnosed on MRI, was treated with corticosteroids and a structured rehabilitation regimen, leading to significant improvement.
In order to improve the efficiency of regression testing in web applications, the control flow graph and the greedy algorithm are adopted. This paper considers a web page as a basic unit and introduces a test case selection method for web application regression testing based on the control flow graph. This method is safe with respect to test case selection. Based on the features of request sequences in web applications, the minimization technique and the priority of test cases are taken into consideration in the process of executing test cases in regression testing for web applications. An improved greedy algorithm is also proposed, resulting in optimized execution of test cases. The experiments indicate that the number of test cases which need to be retested is reduced, and the efficiency of test case execution is also improved.
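The greedy step referred to above can be illustrated with the classic "additional coverage" strategy: repeatedly pick the test case that covers the most not-yet-covered items (here, changed pages or control-flow edges). The sketch below is a generic illustration of that idea, not the paper's improved algorithm, and the example data is invented.

```python
# Generic greedy "additional coverage" selection: repeatedly take the test case
# covering the most still-uncovered items (e.g., changed pages or CFG edges).
def greedy_select(coverage, targets):
    remaining, order = set(targets), []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining), default=None)
        if best is None or not coverage[best] & remaining:
            break  # no remaining test covers anything still uncovered
        order.append(best)
        remaining -= coverage.pop(best)
    return order

# Invented example: pages P1..P4 affected by a change; tests cover subsets of them.
coverage = {"t1": {"P1", "P2"}, "t2": {"P2"}, "t3": {"P3", "P4"}, "t4": {"P1"}}
print(greedy_select(coverage, {"P1", "P2", "P3", "P4"}))  # ['t1', 't3']
```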
This paper analyzed the reliability and put forward a reliability index of overload protection for moulded case circuit breakers. The success rate was adopted as the reliability index of overload protection. Based on the reliability index and the reliability level, the reliability examination plan was analyzed and a test device for the overload protection of moulded case circuit breakers was developed. In the reliability test of overload protection, two power sources were used, which reduced the time of conversion and regulation between the two different test currents in the overload protection test and made the characteristic test more accurate. The test device was designed on the basis of a Windows system, which made its operation simple and friendly.
Selection of test cases plays a key role in improving testing efficiency. Black-box testing is an important way of testing, and its validity lies, in some sense, in the selection of test cases. A reasonable and effective method for the selection and generation of test cases is urgently needed. This letter first introduces some usual methods of black-box test case generation, then proposes a new algorithm based on interface parameters and discusses its properties, and finally shows the effectiveness of the algorithm.
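The abstract does not detail the proposed algorithm, but interface-parameter-based generation is commonly illustrated by combining representative values of each parameter. The sketch below shows a simple greedy reduction of the full cartesian product down to a suite that still covers every pairwise value combination; it is purely illustrative and is not the letter's algorithm. The parameter names and values are hypothetical.

```python
# Purely illustrative: build candidate test cases from interface parameter values and
# greedily keep the case that covers the most still-uncovered pairwise combinations.
from itertools import combinations, product

def pairwise_suite(params):
    names = list(params)
    uncovered = {(a, b, va, vb) for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    def gain(case):
        return {(a, b, case[a], case[b]) for a, b in combinations(names, 2)} & uncovered
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(gain(c)))
        suite.append(best)
        uncovered -= gain(best)
    return suite

# Hypothetical interface parameters with representative (equivalence-class) values.
params = {"os": ["linux", "windows"], "browser": ["firefox", "chrome"], "lang": ["en", "zh"]}
print(len(pairwise_suite(params)), "cases cover all pairwise value combinations")
```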
This paper studies software scenario testing, which is commonly used in black-box testing at present. In the paper, the task-driven workflow model, which is very common in scenario testing, is analyzed. According to the test adequacy criteria in scenario testing, the model is mapped to test cases at the level of logic blocks (LBs). The final test cases that conform to the test adequacy criteria can be obtained through test case combination and test case reduction. In the last part of the paper, an example of an actual workflow is used to design efficient test cases, and the method is thereby shown to be effective.
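As a rough illustration of deriving scenario test cases from a task-driven workflow, the sketch below enumerates the paths of a small workflow graph (each path being a candidate scenario) and then drops paths whose blocks are already covered by others. The example workflow, the treatment of graph nodes as logic blocks, and the reduction rule are illustrative assumptions, not the paper's model.

```python
# Illustrative: enumerate workflow paths as candidate scenario test cases, then keep
# only paths that add uncovered blocks. Workflow and reduction rule are assumptions.
def enumerate_paths(workflow, node="start", path=None):
    path = (path or []) + [node]
    if not workflow.get(node):
        return [path]                       # leaf task ends the scenario
    return [p for nxt in workflow[node] for p in enumerate_paths(workflow, nxt, path)]

def reduce_paths(paths):
    covered, kept = set(), []
    for p in sorted(paths, key=len, reverse=True):  # prefer longer scenarios first
        if set(p) - covered:
            kept.append(p)
            covered |= set(p)
    return kept

# Hypothetical task-driven workflow: submit order -> (pay -> ship | cancel).
workflow = {"start": ["submit"], "submit": ["pay", "cancel"],
            "pay": ["ship"], "ship": [], "cancel": []}
for case in reduce_paths(enumerate_paths(workflow)):
    print(" -> ".join(case))
```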
Software operational profile (SOP) is used in software reliability prediction, software quality assessment, performance analysis of software, test case allocation, determination of "when to stop testing," etc. Due to the limited data resources and the large effort required to collect and convert the gathered data into point estimates, reluctance is observed among software professionals to develop the SOP. A framework is proposed to develop the SOP using fuzzy logic, which requires usage data in the form of linguistics. The resulting profile is named the fuzzy software operational profile (FSOP). Based on this work, this paper proposes a generalized approach for the allocation of test cases, in which the occurrence probabilities of operations obtained from FSOP are combined with the criticality of the operations using a fuzzy inference system (FIS). Traditional methods for the allocation of test cases do not consider the application in which the software operates. This is intuitively incorrect. To solve this problem, allocation of test cases with respect to the software application using the FIS model is also proposed in this paper.
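A minimal hand-rolled fuzzy inference of the kind described above might combine each operation's occurrence probability and criticality into an allocation weight and then split the test budget proportionally. The membership functions, rule base, and example operations below are invented for illustration and do not reproduce the paper's FIS.

```python
# Toy fuzzy inference: combine occurrence probability and criticality (both on [0, 1])
# into an allocation weight, then split a test budget proportionally. Membership
# functions, rules, and the example operations are invented for illustration.
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def weight(prob, crit):
    low = lambda v: tri(v, -0.5, 0.0, 0.5)
    med = lambda v: tri(v, 0.0, 0.5, 1.0)
    high = lambda v: tri(v, 0.5, 1.0, 1.5)
    # Rule base: (probability term, criticality term) -> output level.
    rules = [(low, low, 0.1), (low, high, 0.5), (med, med, 0.5), (high, low, 0.5),
             (med, high, 0.7), (high, med, 0.7), (high, high, 0.9)]
    strengths = [(min(p(prob), c(crit)), out) for p, c, out in rules]
    total = sum(s for s, _ in strengths)
    return sum(s * out for s, out in strengths) / total if total else 0.0

def allocate(operations, budget):
    weights = {op: weight(p, c) for op, (p, c) in operations.items()}
    total = sum(weights.values())
    return {op: round(budget * w / total) for op, w in weights.items()}

# Hypothetical operations: (occurrence probability, criticality).
ops = {"login": (0.9, 0.8), "report": (0.3, 0.4), "backup": (0.1, 0.9)}
print(allocate(ops, budget=200))
```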
Test complexity and test adequacy are frequently raised by software developers and testing agents. However, there is little research on this aspect of specification-based testing at the use case description level. Thus, this research proposes an automatic test case generator approach to reduce the test complexity and to enhance the percentage of test coverage. First, to support the infrastructure for performing automatic generation, the proposed approach refines the use cases using a use case description template and saves them in a text file. Then, the saved file is input to the Algorithm of Control Flow Diagram (ACFD) to convert use case details into a control flow diagram. After that, the Proposed Tool of Generating Test Paths (PTGTP) is used to generate test cases from the control flow diagram. Finally, a genetic algorithm associated with transition coverage is adapted to optimize and evaluate the adequacy of such test cases. A money withdrawal use case in an ATM system is used as an ongoing case study. Preliminary results show that the generated test cases achieve high coverage with an optimal test case. This automatic test case generation approach is effective and efficient and could therefore be extended to other test case coverage criteria.
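Transition coverage, which the abstract uses as the genetic algorithm's adequacy criterion, can be computed as the fraction of control-flow transitions exercised by a set of test paths. The sketch below shows that calculation for an invented ATM-withdrawal control flow diagram; the diagram and paths are illustrative, not the paper's case study artifacts.

```python
# Illustrative transition-coverage calculation for test paths over a control flow
# diagram; the ATM-withdrawal diagram and paths are invented for this example.
def transition_coverage(cfd_edges, test_paths):
    covered = {(p[i], p[i + 1]) for p in test_paths for i in range(len(p) - 1)}
    return len(covered & cfd_edges) / len(cfd_edges)

cfd_edges = {("insert_card", "enter_pin"), ("enter_pin", "enter_amount"),
             ("enter_pin", "reject"), ("enter_amount", "dispense"),
             ("enter_amount", "insufficient_funds")}
paths = [["insert_card", "enter_pin", "enter_amount", "dispense"],
         ["insert_card", "enter_pin", "reject"]]
print(f"transition coverage: {transition_coverage(cfd_edges, paths):.0%}")  # 80%
```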
Software testing is an important and cost-intensive activity in software development. The major contribution to cost is due to test case generation. Requirement-based testing is an approach in which test cases are derived from requirements without considering the implementation's internal structure. Requirement-based testing includes functional and nonfunctional requirements. The objective of this study is to explore the approaches that generate test cases from requirements. A systematic literature review based on two research questions and extensive quality assessment criteria includes the studies. The study identifies 30 primary studies from 410 studies spanning 2000 to 2018. The review's findings show that 53% of journal papers, 42% of conference papers, and 5% of book chapters address requirements-based testing. Most of the studies use UML, activity, and use case diagrams for test case generation from requirements. One of the significant lessons learned is that most software testing errors are traced back to errors in natural language requirements. A substantial amount of work focuses on UML diagrams for test case generation, which cannot capture all of the developed system's attributes. Furthermore, there is a lack of UML-based models that can generate test cases from natural language requirements by refining them in context. Coverage criteria indicate how efficiently the testing has been performed: 12.37% of studies use requirements coverage, 20% of studies cover path coverage, and 17% study basic coverage.
Generally, software testing is considered a proficient technique to achieve improvement in the quality and reliability of software. But the quality of test cases has a considerable influence on the fault-revealing capability of the software testing activity. Test Case Prioritization (TCP) remains a challenging issue since prioritizing test cases is unsatisfactory in terms of Average Percentage of Faults Detected (APFD) and time spent upon execution results. TCP is mainly intended to design a collection of test cases that can accomplish early optimization using preferred characteristics. The studies conducted earlier focused on prioritizing the available test cases to accelerate the fault detection rate during software testing. In this aspect, the current study designs a Modified Harris Hawks Optimization based TCP (MHHO-TCP) technique for software testing. The aim of the proposed MHHO-TCP technique is to maximize APFD and minimize the overall execution time. In addition, the MHHO algorithm is designed to boost the exploration and exploitation abilities of the conventional HHO algorithm. In order to validate the enhanced efficiency of the MHHO-TCP technique, a wide range of simulations was conducted on different benchmark programs and the results were examined under several aspects. The experimental outcomes highlight the improved efficiency of the MHHO-TCP technique over recent approaches under different measures.
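To give a flavor of how a Harris Hawks style optimizer can drive prioritization, the sketch below evolves continuous hawk positions (decoded to test orders by sorting) using the escaping-energy switch between exploration and a simplified besiege step. It is a greatly simplified illustration under stated assumptions, not the paper's MHHO: the full HHO besiege and rapid-dive strategies are omitted, and the fitness function and parameters are invented.

```python
# Greatly simplified Harris Hawks style search for a test order: positions are
# continuous vectors decoded by argsort; only a basic exploration/besiege switch is
# shown, and all parameters are illustrative assumptions (not the paper's MHHO).
import numpy as np

def hho_prioritize(fitness, n_tests, hawks=15, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((hawks, n_tests))
    decode = lambda x: list(np.argsort(x))
    rabbit = max(pop, key=lambda x: fitness(decode(x))).copy()  # best solution so far
    for t in range(iters):
        for i in range(hawks):
            energy = 2 * rng.uniform(-1, 1) * (1 - t / iters)   # escaping energy
            if abs(energy) >= 1:  # exploration: perch relative to a random hawk
                rand = pop[rng.integers(hawks)]
                pop[i] = rand - rng.random() * np.abs(rand - 2 * rng.random() * pop[i])
            else:                 # exploitation: simplified besiege around the rabbit
                pop[i] = rabbit - energy * np.abs(rabbit - pop[i])
            if fitness(decode(pop[i])) > fitness(decode(rabbit)):
                rabbit = pop[i].copy()
    return decode(rabbit)

# Toy fitness rewarding early execution of historically fault-prone tests.
weights = np.array([2, 0, 3, 1, 0, 0, 1, 0, 2, 0])
print(hho_prioritize(lambda o: sum(weights[t] / (r + 1) for r, t in enumerate(o)), 10))
```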