Plug-in Hybrid Electric Vehicles (PHEVs) represent an innovative breed of transportation, harnessing diverse power sources for enhanced performance. Energy management strategies (EMSs) that coordinate and control the different energy sources are a critical component of PHEV control technology, directly impacting overall vehicle performance. This study proposes an improved deep reinforcement learning (DRL)-based EMS that optimizes real-time energy allocation and coordinates the operation of multiple power sources. Conventional DRL algorithms struggle to effectively explore all possible state-action combinations within high-dimensional state and action spaces. They often fail to strike an optimal balance between exploration and exploitation, and their assumption of a static environment limits their ability to adapt to changing conditions. Moreover, these algorithms suffer from low sample efficiency. Collectively, these factors contribute to convergence difficulties, low learning efficiency, and instability. To address these challenges, the Deep Deterministic Policy Gradient (DDPG) algorithm is enhanced with entropy regularization and a summation tree-based Prioritized Experience Replay (PER) method, aiming to improve exploration performance and learning efficiency from experience samples. Additionally, the corresponding Markov Decision Process (MDP) is established. Finally, an EMS based on the improved DRL model is presented. Comparative simulation experiments are conducted against rule-based, optimization-based, and DRL-based EMSs. The proposed strategy exhibits minimal deviation from the optimal solution obtained by the dynamic programming (DP) strategy, which requires global information. In typical driving scenarios based on the Worldwide Harmonized Light Vehicles Test Cycle (WLTC) and the New European Driving Cycle (NEDC), the proposed method achieved a fuel consumption of 2698.65 g and an Equivalent Fuel Consumption (EFC) of 2696.77 g. Compared to the DP strategy baseline, the proposed method improved the fuel efficiency variances (FEV) by 18.13%, 15.1%, and 8.37% over the Deep Q-Network (DQN), Double DRL (DDRL), and original DDPG methods, respectively. The results demonstrate that the proposed EMS based on the improved DRL framework offers good real-time performance, stability, and reliability, effectively optimizing vehicle economy and fuel consumption.
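For reference, below is a minimal Python sketch of the summation-tree structure that underlies prioritized experience replay of the kind described above: leaves hold per-transition priorities, internal nodes hold partial sums, so proportional sampling and priority updates take O(log N). The class name, the capacity, and the |TD error| + ε priority rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a summation tree for Prioritized Experience Replay:
# leaves store per-sample priorities, each internal node stores the sum of
# its children, so proportional sampling runs in O(log N).
import random

class SumTree:
    def __init__(self, capacity):
        self.capacity = capacity                 # number of replay slots (leaves)
        self.tree = [0.0] * (2 * capacity)       # 1-indexed heap layout, leaves at [capacity, 2*capacity)
        self.next_slot = 0

    def update(self, slot, priority):
        """Set the priority of a leaf and propagate the change up to the root."""
        idx = slot + self.capacity
        delta = priority - self.tree[idx]
        while idx >= 1:
            self.tree[idx] += delta
            idx //= 2

    def add(self, priority):
        """Insert a new transition's priority, overwriting the oldest slot."""
        slot = self.next_slot
        self.update(slot, priority)
        self.next_slot = (self.next_slot + 1) % self.capacity
        return slot

    def sample(self):
        """Draw a slot with probability proportional to its priority."""
        target = random.uniform(0.0, self.tree[1])   # tree[1] holds the total priority mass
        idx = 1
        while idx < self.capacity:                   # descend until a leaf is reached
            left = 2 * idx
            if target <= self.tree[left]:
                idx = left
            else:
                target -= self.tree[left]
                idx = left + 1
        return idx - self.capacity

# Usage: priorities are typically |TD error| + eps; the sampled slot indexes the replay buffer.
tree = SumTree(capacity=8)
for td_error in [0.5, 2.0, 0.1, 1.2]:
    tree.add(abs(td_error) + 1e-6)
print(tree.sample())
```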
In real life, incomplete information, inaccurate data, and the preferences of decision-makers during qualitative judgment can all affect the decision-making process. As a technical instrument that can successfully handle uncertain information, Fermatean fuzzy sets have recently been used to solve multi-attribute decision-making (MADM) problems. This paper proposes a Fermatean hesitant fuzzy information aggregation method to address fusion problems in which membership, non-membership, and priority are considered simultaneously. Combining Fermatean hesitant fuzzy sets with Heronian mean operators, this paper proposes the Fermatean hesitant fuzzy Heronian mean (FHFHM) operator and the Fermatean hesitant fuzzy weighted Heronian mean (FHFWHM) operator. Then, considering that the priority relationship between attributes is often easier to obtain than attribute weights, this paper defines a new Fermatean hesitant fuzzy prioritized Heronian mean (FHFPHM) operator and discusses its properties, such as idempotency, boundedness, and monotonicity, in detail. Later, for problems with unknown weights and Fermatean hesitant fuzzy information, a MADM approach based on prioritized attributes is proposed, which can effectively depict the correlation between attributes and avoid the influence of subjective factors on the results. Finally, a numerical example of multi-sensor electronic surveillance is applied to verify the feasibility and validity of the proposed method.
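For context, the two standard building blocks these operators combine are the Fermatean fuzzy membership constraint and the classical Heronian mean; the paper's hesitant and prioritized extensions are built on top of these. The notation below is generic and may differ from the paper's.

```latex
% Fermatean fuzzy grade: membership \mu and non-membership \nu satisfy a cubic bound.
\[
0 \le \mu^{3} + \nu^{3} \le 1, \qquad \mu, \nu \in [0,1]
\]
% Classical Heronian mean of n nonnegative values with parameters p, q \ge 0.
\[
\mathrm{HM}^{p,q}(a_1,\dots,a_n)
  = \left( \frac{2}{n(n+1)} \sum_{i=1}^{n} \sum_{j=i}^{n} a_i^{\,p}\, a_j^{\,q} \right)^{\frac{1}{p+q}}
\]
```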
Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to apply test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) value by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared to the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
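As an illustration of the cluster-then-select step described above, here is a minimal Python sketch: code representations are clustered, and the test with the highest predicted bug likelihood in each cluster is run first. The function names, the use of k-means, and the toy data are assumptions for illustration only, not the C-CORE implementation.

```python
# Sketch of cluster-then-select prioritization over code representations.
import numpy as np
from sklearn.cluster import KMeans

def prioritize(embeddings: np.ndarray, bug_scores: np.ndarray, n_clusters: int):
    """embeddings: (n_tests, dim) code representations from a pre-trained model.
    bug_scores: (n_tests,) predicted probability that a test triggers a bug.
    Returns test indices in execution order."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    picked, remaining = [], []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        best = members[np.argmax(bug_scores[members])]   # cluster representative
        picked.append(best)
        remaining.extend(m for m in members if m != best)
    # Run one representative per cluster first (sorted by score), then the rest.
    picked.sort(key=lambda i: -bug_scores[i])
    remaining.sort(key=lambda i: -bug_scores[i])
    return picked + remaining

# Toy usage with random data standing in for real embeddings and scores.
rng = np.random.default_rng(0)
order = prioritize(rng.normal(size=(100, 16)), rng.random(100), n_clusters=10)
print(order[:10])
```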
Digital forensics aims to uncover evidence of cybercrimes within compromised systems. These cybercrimes are often perpetrated through the deployment of malware, which inevitably leaves discernible traces within the compromised systems. Forensic analysts are tasked with extracting and subsequently analyzing data, termed artifacts, from these systems to gather evidence. Therefore, forensic analysts must sift through extensive datasets to isolate pertinent evidence. However, manually identifying suspicious traces among numerous artifacts is time-consuming and labor-intensive. Previous studies addressed such inefficiencies by integrating artificial intelligence (AI) technologies into digital forensics. Despite these efforts, artifacts were analyzed without considering the nature of the data within them, and the studies failed to prove their efficiency through specific evaluations. In this study, we propose a system that prioritizes suspicious artifacts from compromised systems infected with malware to facilitate efficient digital forensics. Our system introduces a double-checking method that recognizes the nature of the data within target artifacts and employs algorithms suited to anomaly detection. The key ideas of this method are: (1) prioritize suspicious artifacts and filter the remaining artifacts using an autoencoder, and (2) further prioritize suspicious artifacts and filter the remaining artifacts using logarithmic entropy. Our evaluation demonstrates that the system can identify malicious artifacts with high accuracy and that its double-checking method is more efficient than alternative approaches. The system can significantly reduce the time required for forensic analysis and serve as a reference for future studies.
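To make the entropy-based screening stage concrete, the sketch below ranks artifact records by the Shannon entropy of their value distribution, under the illustrative assumption that higher entropy is treated as more anomalous. The paper's logarithmic entropy measure may differ; a plain Shannon entropy, a toy tokenization, and made-up artifact names stand in here.

```python
# Sketch: rank artifact records by the Shannon entropy of their token distribution.
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of one artifact record's token distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def prioritize_artifacts(artifacts):
    """artifacts: mapping of artifact name -> list of tokens (e.g., registry values,
    event-log fields). Returns names sorted from highest to lowest entropy."""
    return sorted(artifacts, key=lambda name: shannon_entropy(artifacts[name]), reverse=True)

# Toy usage: the more varied (e.g., packed or encoded) record floats to the top.
sample = {
    "prefetch/EXPLORER.EXE": ["open", "open", "read", "read", "read"],
    "registry/Run": ["cmd", "powershell", "b64blob", "rundll32", "regsvr32"],
}
print(prioritize_artifacts(sample))
```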
In view of environmental competencies, selecting the optimal green supplier is one of the crucial issues for enterprises, and multi-criteria decision-making (MCDM) methodologies can more easily solve this green supplier selection (GSS) problem. In addition, the prioritized aggregation (PA) operator can focus on the prioritization relationship over the criteria, the Choquet integral (CI) operator can fully take account of the importance of criteria and the interactions among them, and the Bonferroni mean (BM) operator can capture the interrelationships of criteria. However, most existing research cannot simultaneously consider the interactions, interrelationships, and prioritizations over the criteria that are involved in the GSS process. Moreover, the interval type-2 fuzzy set (IT2FS) is a more effective tool to represent fuzziness. Therefore, based on the advantages of PA, CI, BM, and IT2FS, this paper proposes interval type-2 fuzzy prioritized Choquet normalized weighted BM operators with a fuzzy measure and a generalized prioritized measure, and discusses some of their properties. Then, a novel MCDM approach for GSS based upon the presented operators is developed, and detailed decision steps are given. Finally, the applicability and practicability of the proposed methodology are demonstrated by its application to shared-bike GSS and by comparisons with other methods. The advantage of the proposed method is that it can consider interactions, interrelationships, and prioritizations over the criteria simultaneously.
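For reference, the crisp forms of the two aggregation tools the proposed operators extend are the Bonferroni mean and the discrete Choquet integral with respect to a fuzzy measure; the interval type-2 fuzzy and prioritized versions in the paper build on these. Notation here is the standard one and may differ from the paper's.

```latex
% Classical Bonferroni mean of n nonnegative values with parameters p, q \ge 0.
\[
\mathrm{BM}^{p,q}(a_1,\dots,a_n)
  = \left( \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} a_i^{\,p} a_j^{\,q} \right)^{\frac{1}{p+q}}
\]
% Discrete Choquet integral of f with respect to fuzzy measure \mu, with values
% sorted so that f(x_{(1)}) \le \dots \le f(x_{(n)}) and f(x_{(0)}) := 0.
\[
C_{\mu}(f) = \sum_{i=1}^{n} \bigl( f(x_{(i)}) - f(x_{(i-1)}) \bigr)\, \mu(A_{(i)}),
\qquad A_{(i)} = \{x_{(i)},\dots,x_{(n)}\}.
\]
```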
The Medical Internet of Things (MIoTs) is a collection of small, energy-efficient wireless sensor devices that monitor a patient's body. Healthcare networks transmit continuous monitoring data so that patients can live independently. There have been many improvements in MIoTs, but critical issues remain that can affect the Quality of Service (QoS) of a network. Congestion handling is one of the critical factors that directly affect QoS: congestion in an MIoT network can cause higher energy consumption, delay, and loss of important data. If a patient has an emergency, the life-critical signals must be transmitted with minimum latency. During emergencies, the MIoTs have to monitor patients continuously and transmit data (e.g., ECG, BP, heart rate) with minimum delay. Therefore, an efficient technique is required that can deliver the emergency data of high-risk patients to the medical staff on time and with maximum reliability. The main objective of this research is to monitor and transmit patients' real-time data efficiently and to prioritize emergency data. In this paper, an Emergency Prioritized and Congestion Handling Protocol for Medical IoTs (EPCP_MIoT) is proposed that efficiently monitors patients and overcomes congestion by enabling different monitoring modes, while emergency data transmissions are prioritized and sent after a SIFS interval. The proposed technique is implemented and compared with a previous technique; the comparison results show that it outperforms previous techniques in terms of network throughput, end-to-end delay, energy consumption, and packet loss ratio.
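As a toy illustration of the prioritization idea (not the EPCP_MIoT protocol itself), the Python sketch below queues emergency vitals ahead of routine readings and assigns them the shorter inter-frame wait; the SIFS/DIFS values and the packet fields are illustrative assumptions.

```python
# Sketch: emergency packets jump the queue and get the shorter inter-frame wait.
import heapq
import itertools

SIFS_US, DIFS_US = 10, 50           # assumed inter-frame spacings in microseconds
_counter = itertools.count()        # tie-breaker keeps FIFO order within a class

queue = []

def enqueue(packet, emergency=False):
    priority = 0 if emergency else 1
    heapq.heappush(queue, (priority, next(_counter), packet))

def next_transmission():
    priority, _, packet = heapq.heappop(queue)
    wait_us = SIFS_US if priority == 0 else DIFS_US
    return wait_us, packet

enqueue({"sensor": "temp", "value": 36.9})
enqueue({"sensor": "ECG", "value": "arrhythmia"}, emergency=True)
print(next_transmission())   # the ECG alert goes first, after only a SIFS wait
```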
The object-based scalable coding in MPEG-4 is investigated, and a prioritized transmission scheme for MPEG-4 audio-visual objects (AVOs) over the DiffServ network with QoS guarantees is proposed. MPEG-4 AVOs are extracted and classified into different groups according to their priority values and scalable layers (visual importance). These priority values are mapped to the IP DiffServ per-hop behaviors (PHBs). This scheme can selectively discard packets of low importance in order to avoid network congestion. Simulation results show that the quality of the received video gracefully adapts to the network state, compared with the 'best-effort' manner. Also, by allowing the content provider to define the prioritization of each audio-visual object, the adaptive transmission of object-based scalable video can be customized based on the content.
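A minimal sketch of the priority-to-PHB mapping idea is shown below. The DSCP code points are the standard values, but the priority thresholds and the choice of which PHB each object class receives are illustrative assumptions rather than the paper's mapping.

```python
# Sketch: map MPEG-4 object priorities onto standard DiffServ code points (DSCP),
# so the most important objects/layers ride higher-assurance per-hop behaviors.
DSCP = {"EF": 46, "AF21": 18, "AF11": 10, "BE": 0}   # standard DSCP values

def phb_for_object(priority: float) -> int:
    """priority in [0, 1]: e.g., base layers / important AVOs near 1.0."""
    if priority >= 0.9:
        return DSCP["EF"]      # expedited forwarding for the most important objects
    if priority >= 0.6:
        return DSCP["AF21"]    # assured forwarding, lower drop precedence
    if priority >= 0.3:
        return DSCP["AF11"]
    return DSCP["BE"]          # enhancement layers degrade first under congestion

for p in (0.95, 0.7, 0.2):
    print(p, phb_for_object(p))
```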
Test Case Prioritization (TCP) techniques perform better than other regression test optimization techniques, including Test Suite Reduction (TSR) and Test Case Selection (TCS). Many TCP techniques are available, and their performance is usually measured through the Average Percentage of Fault Detection (APFD) metric. This metric is value-neutral: it only works well when all test cases have the same cost and all faults have the same severity. Using APFD to evaluate test case orders in which test case costs or fault severities vary is prone to produce misleading results. Therefore, using the right metric for the performance evaluation of TCP techniques is very important to obtain reliable and correct results. In this paper, two value-based TCP techniques using a Genetic Algorithm (GA) are introduced: Value-Cognizant Fault Detection-Based TCP (VCFDB-TCP) and Value-Cognizant Requirements Coverage-Based TCP (VCRCB-TCP). Two novel value-based performance evaluation metrics are also introduced for value-based TCP: Average Percentage of Fault Detection per value (APFDv) and Average Percentage of Requirements Coverage per value (APRCv). Two case studies are performed to validate the proposed techniques and performance evaluation metrics. The proposed GA-based techniques outperformed existing state-of-the-art TCP techniques, including Original Order (OO), Reverse Order (REV-O), Random Order (RO), and the Greedy algorithm.
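For reference, the classical APFD metric that the value-based variants (APFDv, APRCv) generalize is given below for an order of n test cases revealing m faults, where TF_i is the position of the first test that exposes fault i; the exact definitions of APFDv and APRCv are given in the paper itself.

```latex
% Classical APFD: higher values mean faults are detected earlier in the order.
\[
\mathrm{APFD} \;=\; 1 \;-\; \frac{\mathrm{TF}_1 + \mathrm{TF}_2 + \cdots + \mathrm{TF}_m}{n\,m} \;+\; \frac{1}{2n}
\]
```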
Software needs modifications and requires revisions regularly. Owing to these revisions, retesting the software becomes essential to ensure that the enhancements have not affected its bug-free functioning. The time and cost incurred in this process need to be reduced through test case selection and prioritization. Many nature-inspired techniques have been applied in this area; African Buffalo Optimization is one such approach, applied here to regression test selection and prioritization. This paper explains and demonstrates the applicability of the African Buffalo Optimization approach to test case selection and prioritization. The proposed algorithm converges in polynomial time (O(n^2)). The empirical evaluation of applying African Buffalo Optimization to test case prioritization is performed on a sample data set with multiple iterations. An astounding 62.5% drop in size and a 48.57% drop in the runtime of the original test suite were recorded. The obtained results are compared with Ant Colony Optimization. The comparative analysis indicates that African Buffalo Optimization and Ant Colony Optimization exhibit similar fault detection capabilities (80%), and both reduce the overall execution time and size of the resultant test suite. The results and analysis hence advocate and encourage the use of African Buffalo Optimization in the area of test case selection and prioritization.
Automation software needs to be continuously updated by addressing the software bugs contained in its repositories. However, bugs have different levels of importance; hence, it is essential to prioritize bug reports based on their severity and importance. Manually managing the deluge of incoming bug reports is constrained by the development team's time and resources and delays the resolution of critical bugs. Therefore, bug report prioritization is vital. This study proposes a new model for bug prioritization based on the averaged one-dependence estimator (AODE); it prioritizes bug reports based on severity, which is determined by the number of attributes: the more attributes, the higher the severity. The proposed model is evaluated using precision, recall, F1-score, accuracy, G-measure, and the Matthews correlation coefficient. Results of the proposed model are compared with those of the support vector machine (SVM) and Naive Bayes (NB) models. The Eclipse and Mozilla datasets were used as the sources of bug reports. The proposed model improved bug repository management and outperformed the SVM and NB models. Additionally, the proposed model uses a weaker attribute independence assumption than the former models, thereby improving prediction accuracy with minimal computational cost.
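For context, the averaged one-dependence estimator underlying the proposed model scores a class y for a report with attribute values x_1, ..., x_d by averaging over one-dependence ("super-parent") models, which is the weaker independence assumption mentioned above. The notation below follows the standard AODE formulation rather than the paper's.

```latex
% AODE: each sufficiently frequent attribute value x_i acts as a super-parent,
% relaxing naive Bayes's full conditional-independence assumption.
\[
\hat{P}(y \mid x_1,\dots,x_d) \;\propto\;
  \sum_{i \,:\, F(x_i) \ge m} \hat{P}(y, x_i) \prod_{j=1}^{d} \hat{P}(x_j \mid y, x_i)
\]
% F(x_i) is the training frequency of value x_i and m a small cutoff (e.g., 1),
% so only well-supported super-parents contribute to the average.
```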
Both unit and integration testing are crucial for almost any software application because each operates a distinct process to examine the product. Due to resource constraints, when software is subjected to modifications, the drastic increase in the number of test cases forces testers to opt for a test optimization strategy. One such strategy is test case prioritization (TCP). Existing works have proposed various methodologies that re-order system-level test cases intending to boost either the fault detection capability or the coverage efficacy as early as possible. Nonetheless, reliance on a single objective function and the lack of dissimilarity among the re-ordered test sequences have degraded the cogency of these approaches. Considering such gaps, and the scenarios in which rapid and continuous updates to the software make the intensive unit and integration testing process more fragile, this study introduces a memetics-inspired methodology for TCP. The proposed structure is first embedded with diverse parameters, and then the traditional steps of the shuffled-frog-leaping approach (SFLA) are followed to prioritize the test cases at the unit and integration levels. On 5 standard test functions, a comparative analysis is conducted between established algorithms and the proposed approach, where the latter enhances the coverage rate and fault detection of the re-ordered test sets. Investigation results related to the mean average percentage of fault detection (APFD) confirmed that the proposed approach exceeds the memetic, basic multi-walk, PSO, and optimized multi-walk approaches by 21.7%, 13.99%, 12.24%, and 11.51%, respectively.
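For reference, below is a minimal Python sketch of the traditional shuffled-frog-leaping steps mentioned above, run on a continuous test function (the sphere function): the population is ranked, dealt into memeplexes, and each memeplex's worst frog leaps toward its local best, then toward the global best, and is respawned if neither helps. All parameter values are illustrative, and the paper's adaptation to test-case orderings is not shown.

```python
# Sketch of the basic shuffled-frog-leaping algorithm (SFLA) for minimization.
import numpy as np

def sfla(objective, dim=2, frogs=30, memeplexes=5, iters=100, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    pop = rng.uniform(bounds[0], bounds[1], size=(frogs, dim))
    for _ in range(iters):
        fitness = np.array([objective(x) for x in pop])
        order = np.argsort(fitness)                  # ascending: best frog first
        pop, fitness = pop[order], fitness[order]
        global_best = pop[0].copy()
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)    # deal ranked frogs into memeplexes
            best, worst = idx[0], idx[-1]
            # 1) leap toward the memeplex's best frog
            candidate = np.clip(pop[worst] + rng.random() * (pop[best] - pop[worst]),
                                bounds[0], bounds[1])
            if objective(candidate) < fitness[worst]:
                pop[worst] = candidate
                continue
            # 2) otherwise leap toward the global best
            candidate = np.clip(pop[worst] + rng.random() * (global_best - pop[worst]),
                                bounds[0], bounds[1])
            if objective(candidate) < fitness[worst]:
                pop[worst] = candidate
            else:
                # 3) otherwise replace the worst frog with a random one
                pop[worst] = rng.uniform(bounds[0], bounds[1], size=dim)
    return min(pop, key=objective)

# Sphere test function: the returned point should approach the origin.
print(sfla(lambda x: float(np.sum(x ** 2))))
```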