This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities in cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), Grant Number IMSIU-RG23151.
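As a hedged illustration of the kind of tuning the study describes (the paper's exact models, data, and search grids are not given in the abstract), the following scikit-learn sketch tunes an SVM with grid search and scores it on F1; the synthetic dataset and parameter grid are assumptions.

```python
# Illustrative hyperparameter tuning of an SVM with scikit-learn.
# The dataset, grid, and scoring choice are assumptions, not the study's setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
    scoring="f1",  # the study reports F1-score, recall, and precision
    cv=5,
)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```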
Real-time system timing analysis is crucial for estimating the worst-case execution time (WCET) of a program. To achieve this, static or dynamic analysis methods are used, along with targeted modeling of the actual hardware system. This literature review focuses on calculating WCET for multi-core processors, providing a survey of traditional methods used for static and dynamic analysis and highlighting the major challenges that arise from different program execution scenarios on multi-core platforms. This paper outlines the strengths and weaknesses of current methodologies and offers insights into prospective areas of research on multi-core analysis. By presenting a comprehensive analysis of the current state of research on multi-core processor analysis for WCET estimation, this review aims to serve as a valuable resource for researchers and practitioners in the field.
Funding: supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. 2022ZTE09.
As data volume grows, many enterprises are considering using MapReduce for its simplicity. However, how to evaluate the performance improvement before deployment is still an open issue. Current research on MapReduce performance is mainly based on monitoring and simulation and lacks mathematical models. In this paper, we present a simple but powerful performance model for predicting the execution time of a MapReduce program with limited resources. We study each component of the MapReduce framework and analyze the relation between the overall performance and the number of mappers and reducers based on our model. Two typical MapReduce programs are evaluated in a small cluster with 13 nodes. Experimental results show that the mathematical performance model can estimate the execution time of MapReduce programs reliably. According to our model, the number of mappers and reducers can be tuned to form a better execution pipeline and lead to better performance. The model also points out potential bottlenecks of the framework and directions for future improvement.
Funding: supported by CHB Project "Unstructured Data Management System" under Grant No. 2010ZX01042-002-003.
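The paper's exact equations are not reproduced in the abstract; as a rough sketch of how such a model can relate execution time to the number of mappers and reducers, one can estimate job time from task "waves" bounded by the available slots (all parameters below are illustrative assumptions):

```python
import math

def mapreduce_time(n_maps, n_reduces, map_slots, reduce_slots,
                   t_map, t_shuffle, t_reduce):
    """Wave-based estimate: tasks run in waves bounded by available slots.
    A generic sketch of this kind of model, not the paper's exact equations."""
    map_waves = math.ceil(n_maps / map_slots)
    reduce_waves = math.ceil(n_reduces / reduce_slots)
    return map_waves * t_map + t_shuffle + reduce_waves * t_reduce

# e.g. 100 map tasks and 13 reduce tasks on a 13-node cluster with
# 2 map slots and 1 reduce slot per node (illustrative numbers):
print(mapreduce_time(100, 13, 26, 13, t_map=30, t_shuffle=20, t_reduce=40))
```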
Emerging Internet of Things (IoT) applications require faster execution time and response time to achieve optimal performance. However, most IoT devices have limited or no computing capability to meet such stringent application requirements. To this end, computation offloading in edge computing has been used for IoT systems to achieve the desired performance. Nevertheless, randomly offloading applications to any available edge without considering their resource demands, inter-application dependencies, and edge resource availability may eventually result in execution delay and performance degradation. In this paper, we introduce Edge-IoT, a machine learning-enabled orchestration framework that utilizes the states of edge resources and application resource requirements to facilitate a resource-aware offloading scheme for minimizing average latency. We further propose a variant bin-packing optimization model that co-locates applications firmly on edge resources to fully utilize the available resources. Extensive experiments show the effectiveness and resource efficiency of the proposed approach.
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 61571401 and 61901416, in part by the China Postdoctoral Science Foundation under Grant No. 2021TQ0304, and by the Innovative Talent Colleges and University of Henan Province under Grant No. 18HASTIT021.
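The abstract does not spell out the bin-packing variant; as a generic sketch of the co-location idea, a first-fit-decreasing heuristic can pack application resource demands onto edge nodes (the single-dimensional demands and uniform capacity are assumptions):

```python
def first_fit_decreasing(demands, capacity):
    """Generic first-fit-decreasing bin packing: place application resource
    demands onto edge nodes of equal capacity, opening a new node only when
    needed. A sketch of co-location, not the paper's exact optimization model."""
    bins = []          # remaining capacity per opened edge node
    placement = {}     # app -> edge node index
    for app, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(bins):
            if demand <= free:
                bins[i] -= demand
                placement[app] = i
                break
        else:
            bins.append(capacity - demand)
            placement[app] = len(bins) - 1
    return placement

print(first_fit_decreasing({"a": 4, "b": 7, "c": 3, "d": 5}, capacity=10))
```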
Fog computing is an emerging paradigm of cloud computing that aims to meet the growing computation demands of mobile applications. It can help mobile devices overcome resource constraints by offloading computationally intensive tasks to cloud servers. The challenge for the cloud is to minimize the time of data transfer and task execution for the user, whose location changes owing to mobility, as well as the energy consumption of the mobile device. Providing satisfactory computation performance is particularly challenging in the fog computing environment. In this paper, we propose a novel fog computing model and offloading policy that can effectively bring the fog computing power closer to the mobile user. The fog computing model consists of remote cloud nodes and local cloud nodes, which are attached to the wireless access infrastructure. We also give a task offloading policy that takes execution time, energy consumption, and other expenses into account. We finally evaluate the performance of our method through experimental simulations. The experimental results show that this method has a significant effect on reducing the execution time of tasks and the energy consumption of mobile devices.
Funding: supported by the NSFC (61602126) and the scientific and technological project of Henan Province (162102210214).
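As a rough illustration of such an offloading policy (the abstract does not give the cost model; the weighted cost form, transmission-energy model, and parameters below are assumptions), a device could compare local and offloaded costs as follows:

```python
def should_offload(local_time, local_energy, data_bits, bandwidth_bps,
                   remote_exec_time, tx_power_w, w_time=0.5, w_energy=0.5):
    """Illustrative offloading rule: offload when the weighted cost of
    transfer + remote execution beats local execution. The weights and the
    cost form are assumptions, not the paper's exact policy."""
    tx_time = data_bits / bandwidth_bps
    offload_time = tx_time + remote_exec_time
    offload_energy = tx_power_w * tx_time   # device energy spent transmitting
    local_cost = w_time * local_time + w_energy * local_energy
    offload_cost = w_time * offload_time + w_energy * offload_energy
    return offload_cost < local_cost

# e.g. a 1 Mbit task over a 10 Mbit/s link to a nearby fog node:
print(should_offload(2.0, 1.5, 1e6, 1e7, remote_exec_time=0.4, tx_power_w=0.5))
```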
In this paper, an event-triggered sliding mode control approach for the trajectory tracking problem of a nonlinear input-affine system with disturbance is proposed. A second-order robotic manipulator system is modeled as a general nonlinear input-affine system. Initially, global asymptotic stability is ensured with the conventional periodic sampling approach for reference trajectory tracking. Then the proposed event-triggered sliding mode control approach is discussed, which guarantees semi-global uniform ultimate boundedness. The proposed control approach guarantees non-accumulation of control updates, ensuring lower bounds on inter-event triggering instants and avoiding Zeno behavior in the presence of disturbance. The system shows better performance in terms of reduced control updates and ensured stability, which further guarantees optimized resource usage and cost. Simulation results are provided to validate the proposed methodology on a tracking problem for a robotic manipulator. The number of aperiodic control updates is found to be approximately 44% and 61% in the presence of constant and time-varying disturbances, respectively.
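To make the event-triggering mechanism concrete, here is a minimal generic sketch (the triggering function and constants are illustrative assumptions, not the paper's condition):

```python
import numpy as np

def should_trigger(x, x_last_sent, c0=0.01, c1=0.05):
    """Generic relative event-triggering rule: recompute and transmit a new
    control update only when the state has drifted far enough from the state
    used at the last update. Illustrative only, not the paper's condition."""
    error = np.linalg.norm(x - x_last_sent)
    return error >= c0 + c1 * np.linalg.norm(x)

# In simulation, the sliding-mode input is recomputed only at instants where
# should_trigger(...) is True and held constant otherwise, which is what
# reduces the number of control updates.
```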
Generally, software testing is considered a proficient technique to achieve improvement in the quality and reliability of software. However, the quality of test cases has a considerable influence on the fault-revealing capability of the software testing activity. Test Case Prioritization (TCP) remains a challenging issue, since prioritizing test cases is unsatisfactory in terms of Average Percentage of Faults Detected (APFD) and the time spent on execution. TCP is mainly intended to design a collection of test cases that can accomplish early optimization using preferred characteristics. Earlier studies focused on prioritizing the available test cases to accelerate the fault detection rate during software testing. In this respect, the current study designs a Modified Harris Hawks Optimization based TCP (MHHO-TCP) technique for software testing. The aim of the proposed MHHO-TCP technique is to maximize APFD and minimize the overall execution time. In addition, the MHHO algorithm is designed to boost the exploration and exploitation abilities of the conventional HHO algorithm. In order to validate the enhanced efficiency of the MHHO-TCP technique, a wide range of simulations was conducted on different benchmark programs and the results were examined under several aspects. The experimental outcomes highlight the improved efficiency of the MHHO-TCP technique over recent approaches under different measures.
Funding: the authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number RGP.1/127/42, and to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2022R237, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
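APFD, the objective being maximized, has a standard closed form: for n tests and m faults, APFD = 1 − (TF₁ + … + TFₘ)/(n·m) + 1/(2n), where TFᵢ is the position of the first test that reveals fault i. A small sketch (the fault matrix is made-up illustrative data; it assumes every fault is detected by some test):

```python
def apfd(order, detects, n_faults):
    """Standard APFD for a prioritized test order: 1 - sum(TF_i)/(n*m) + 1/(2n).
    `detects[test]` is the set of faults that test reveals (assumes all
    faults are detected by at least one test in the order)."""
    n = len(order)
    tf = {}  # fault -> position of first revealing test (1-indexed)
    for pos, test in enumerate(order, start=1):
        for fault in detects[test]:
            tf.setdefault(fault, pos)
    return 1 - sum(tf.values()) / (n * n_faults) + 1 / (2 * n)

order = ["t3", "t1", "t2"]
detects = {"t3": {1, 2}, "t1": {3}, "t2": set()}
print(apfd(order, detects, n_faults=3))  # 1 - 4/9 + 1/6 = 0.7222...
```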
There has been an explosion of cloud services as organizations take advantage of their continuity, predictability, and quality of service, and this raises concerns about latency, energy efficiency, and security. This increase in demand requires new configurations of networks, products, and service operators. For this purpose, the software-defined network is an efficient technology that enables support for future network functions along with intelligent applications and packet optimization. This work analyzes the offline cloud scenario in which machines are efficiently deployed and scheduled for user processing requests. Performance is evaluated in terms of reducing bandwidth, task execution times, and latencies, and increasing throughput. A minimum execution time algorithm is used to compute the completion time over all the available resources allocated to the virtual machine, and a lion optimization algorithm is applied to packets in the cloud environment. The proposed work is shown to improve throughput and latency.
Funding: this research was supported by the Sejong University Research Fund, Korea, and the University of Shaqra, Saudi Arabia.
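The minimum execution time (MET) heuristic mentioned above has a well-known generic form: assign each task to the machine with the smallest expected execution time for it, ignoring current load. A minimal sketch (the expected-time matrix is an illustrative assumption):

```python
def minimum_execution_time(etc):
    """Classic MET heuristic: each task goes to the machine with the smallest
    expected execution time for it, without considering machine load.
    etc[t][m] = expected time of task t on machine m (illustrative data)."""
    return {t: min(range(len(times)), key=times.__getitem__)
            for t, times in enumerate(etc)}

# 3 tasks x 3 machines -> {0: 2, 1: 1, 2: 0}
print(minimum_execution_time([[3, 5, 2], [4, 1, 6], [2, 2, 3]]))
```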
Capacitor voltage imbalance is a significant problem for three-level inverters. Due to the mid-point modulation of these inverter topologies, the neutral point potential moves up or down depending on the neutral point current direction, creating imbalanced voltages across the two capacitors. This imbalanced capacitor voltage causes imbalanced voltage stress among the semiconductor devices and increases output voltage and current harmonics. This paper introduces a modified voltage balancing strategy using two-level space vector modulation. By decomposing the three-level space vector diagram into two-level space vector diagrams and redistributing the dwell times of the two-level zero space vectors, the modified voltage balancing method ensures minimal NP (neutral point) voltage ripple. Compared to the commonly used NP voltage control method (using 3L SVM [9]), the proposed modified NP voltage control method incurs slightly higher neutral-point voltage ripple and output voltage harmonics, but it has much lower switching loss, code size, and execution time.
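For context, the two-level dwell times being redistributed follow the textbook space vector modulation calculation within one sector; splitting the zero-vector time T0 unequally between the two zero vectors is the degree of freedom such balancing methods exploit. A sketch of the standard calculation (not the paper's full strategy):

```python
import math

def two_level_svm_dwell_times(m, theta, Ts):
    """Textbook two-level SVM dwell times within a sector (0 <= theta < pi/3):
    T1, T2 for the two active vectors and T0 for the zero vectors, over one
    switching period Ts with modulation index m. Illustrative, standard math."""
    T1 = Ts * m * math.sin(math.pi / 3 - theta) / math.sin(math.pi / 3)
    T2 = Ts * m * math.sin(theta) / math.sin(math.pi / 3)
    T0 = Ts - T1 - T2   # total zero-vector time; its split is the balancing lever
    return T1, T2, T0

print(two_level_svm_dwell_times(m=0.8, theta=math.pi / 6, Ts=1e-4))
```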
Worst-case execution time (WCET) analysis of multi-threaded software is still a challenge. This comes mainly from the fact that synchronization has to be taken into account. In this paper, we focus on this issue and on automatically calculating and incorporating stalling times (e.g., caused by lock contention) in a generic graph model. The idea that thread interleavings can be studied with a matrix calculus is novel in this research area. Our sparse matrix representations of the program are manipulated using an extended Kronecker algebra. The resulting graph represents multi-threaded programs much as CFGs do sequential programs. With this graph model, we are able to calculate the WCET of multi-threaded concurrent programs, including stalling times due to synchronization. We employ a generating-function-based approach for setting up data flow equations, which are solved by well-known elimination-based data flow analysis methods or an off-the-shelf equation solver. The WCET of multi-threaded programs can finally be calculated with a non-linear function solver.
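To make the matrix-calculus idea concrete: for two thread adjacency matrices A and B, the Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B yields the adjacency matrix of all interleavings over the joint state space. The sketch below shows only this plain construction; the paper's extended Kronecker algebra additionally models synchronization (e.g., locks), which is not shown here.

```python
# Kronecker-sum sketch of thread interleaving on two tiny thread CFGs.
import numpy as np
from scipy.sparse import csr_matrix, kron

A = csr_matrix([[0, 1], [0, 0]])   # thread 1: node 0 -> node 1
B = csr_matrix([[0, 1], [0, 0]])   # thread 2: node 0 -> node 1
I = np.eye(2)
interleavings = kron(A, I) + kron(I, B)   # 4x4 graph over joint states
print(interleavings.toarray())
```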
This work explores the feasibility of a novel predictive control strategy on a power factor correction system. The proposed control strategy allows a significant reduction of the power losses with respect to a classical predictive control strategy working with a fixed execution time Ts. The proposed control strategy operates with a variable execution time Tv, and it has been implemented using a low-cost hardware platform based on the TI TMS320F2812 DSP. The chosen platform is capable of executing control strategy code with a variable execution time Tv. This operation can be performed by properly setting the timer registers of one of the two event manager (A/B) blocks present on the mentioned DSP (digital signal processor).
To overcome the disadvantages of traditional worst-case execution time (WCET) analysis approaches, we propose a new WCET analysis approach based on independent paths for ARM programs. Based on the results of program flow analysis, it reduces and partitions the control flow graph of the program and obtains a directed graph. Using linear combinations of independent paths of the directed graph, a set of feasible paths can be generated that gives complete coverage in terms of the program paths considered. Their timing measurements and the execution counts of program segments are derived from a limited number of measurements of an instrumented version of the program. After the timing measurements of the feasible paths are linearly expressed in terms of the execution times of program segments, a system of equations is derived as a constraint problem, from which we can obtain the execution times of the program segments. By assigning the execution times of program segments to weights of edges in the directed graph, the WCET estimate can be calculated on the basis of graph-theoretical techniques. Comparing our WCET estimate with the WCET measurement obtained by exhaustive measurement, the maximum error ratio is only 8.2593%. It is shown that the proposed approach is an effective way to obtain a safe and tight WCET estimate for ARM programs.
Funding: supported by the National High Technology Research and Development Program of China (863 Program, 2009AA011705) and the National Natural Science Foundation of China (60903033).
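A toy version of the measurement step described above: measured path times are linear combinations of unknown segment times, so solving the resulting system recovers the per-segment times that then serve as edge weights for the graph-theoretical WCET search. The counts and timings below are made-up illustrative data.

```python
import numpy as np

counts = np.array([[1, 1, 0],    # rows: measured feasible paths,
                   [1, 0, 1],    # cols: execution counts of 3 segments
                   [0, 1, 1]])
path_times = np.array([12.0, 15.0, 17.0])    # measured times of those paths
seg_times, *_ = np.linalg.lstsq(counts, path_times, rcond=None)
print(seg_times)   # -> [ 5.  7. 10.]; assigned as edge weights, the WCET
                   # estimate is then the heaviest path in the directed graph
```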