Journal articles
12 articles found
1. Improving Prediction Efficiency of Machine Learning Models for Cardiovascular Disease in IoST-Based Systems through Hyperparameter Optimization
Authors: Tajim Md. Niamat Ullah Akhund, Waleed M. Al-Nuwaiser. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3485-3506 (22 pages)
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-Score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities in cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
Keywords: Internet of Sensing Things (IoST); machine learning; hyperparameter optimization; cardiovascular disease prediction; execution time analysis; performance analysis; Wilcoxon signed-rank test
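A minimal sketch of the before/after comparison workflow this abstract describes, using scikit-learn and SciPy on synthetic data; the parameter grid, dataset, and scoring choices are illustrative assumptions, not the paper's setup:

```python
# Hedged sketch: compare an SVM before and after hyperparameter tuning and test
# whether per-fold F1 scores differ significantly (Wilcoxon signed-rank test).
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the IoST sensor dataset used in the paper.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)

baseline = SVC()                                    # default hyperparameters
grid = GridSearchCV(                                # tuned model over a small illustrative grid
    SVC(),
    param_grid={"C": [0.1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    scoring="f1",
    cv=5,
)
grid.fit(X, y)

f1_before = cross_val_score(baseline, X, y, cv=10, scoring="f1")
f1_after = cross_val_score(grid.best_estimator_, X, y, cv=10, scoring="f1")

if np.allclose(f1_before, f1_after):
    print("tuning did not change the per-fold scores on this toy data")
else:
    stat, p = wilcoxon(f1_before, f1_after)         # paired, non-parametric comparison
    print(f"mean F1 before={f1_before.mean():.3f} after={f1_after.mean():.3f} p={p:.4f}")
```

Timing the two cross-validation runs in the same script would also reproduce the execution-time trade-off the abstract reports.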
2. Research on Multi-Core Processor Analysis for WCET Estimation
Authors: LUO Haoran, HU Shuisong, WANG Wenyong, TANG Yuke, ZHOU Junwei. ZTE Communications, 2024, Issue 1, pp. 87-94 (8 pages)
Real-time system timing analysis is crucial for estimating the worst-case execution time (WCET) of a program. To achieve this, static or dynamic analysis methods are used, along with targeted modeling of the actual hardware system. This literature review focuses on calculating WCET for multi-core processors, providing a survey of traditional methods used for static and dynamic analysis and highlighting the major challenges that arise from different program execution scenarios on multi-core platforms. This paper outlines the strengths and weaknesses of current methodologies and offers insights into prospective areas of research on multi-core analysis. By presenting a comprehensive analysis of the current state of research on multi-core processor analysis for WCET estimation, this review aims to serve as a valuable resource for researchers and practitioners in the field.
Keywords: real-time system; worst-case execution time (WCET); multi-core analysis
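As a toy illustration of the dynamic (measurement-based) side of WCET analysis that this review surveys, the sketch below times a stand-in workload repeatedly and keeps the maximum observed value plus a margin. This is only an estimate under assumed conditions, not a safe WCET bound, which is precisely the limitation such surveys discuss, especially on multi-core hardware:

```python
# Hedged sketch of a measurement-based (dynamic) timing estimate: the maximum
# observed time plus a margin is only an estimate, not a guaranteed WCET bound.
import time

def workload(n: int) -> int:
    """Stand-in task whose execution time depends on its input."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def observed_max_time(runs: int = 1000) -> float:
    worst = 0.0
    for r in range(runs):
        start = time.perf_counter()
        workload(1000 + (r % 7) * 500)     # vary the input to exercise different costs
        worst = max(worst, time.perf_counter() - start)
    return worst

if __name__ == "__main__":
    hwm = observed_max_time()
    print(f"max observed: {hwm * 1e6:.1f} us, padded estimate: {hwm * 1.2 * 1e6:.1f} us")
```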
3. Reliable Estimation of Execution Time of MapReduce Program (Cited by 1)
Authors: 杨肖, 孙建伶. China Communications (SCIE, CSCD), 2011, Issue 6, pp. 11-18 (8 pages)
As data volume grows, many enterprises are considering using MapReduce for its simplicity. However, how to evaluate the performance improvement before deployment is still an issue. Current research on MapReduce performance is mainly based on monitoring and simulation, and lacks mathematical models. In this paper, we present a simple but powerful performance model for the prediction of the execution time of a MapReduce program with limited resources. We study each component of the MapReduce framework, and analyze the relation between the overall performance and the number of mappers and reducers based on our model. Two typical MapReduce programs are evaluated in a small cluster with 13 nodes. Experimental results show that the mathematical performance model can estimate the execution time of MapReduce programs reliably. According to our model, the number of mappers and reducers can be tuned to form a better execution pipeline and lead to better performance. The model also points out potential bottlenecks of the framework and future improvements.
Keywords: performance model; MapReduce; execution time
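A minimal sketch of the kind of analytic model the abstract describes, relating job time to the number of mappers and reducers scheduled in waves over limited slots; the cost structure and constants are assumptions, not the paper's model:

```python
# Hedged sketch: job time modeled as map waves + shuffle + reduce waves, assuming
# one map task per input split and tasks scheduled in waves over limited slots.
import math

def estimate_job_time(num_splits: int, num_reducers: int,
                      map_slots: int, reduce_slots: int,
                      t_map: float, t_shuffle: float, t_reduce: float) -> float:
    map_waves = math.ceil(num_splits / map_slots)          # rounds of map tasks
    reduce_waves = math.ceil(num_reducers / reduce_slots)  # rounds of reduce tasks
    return map_waves * t_map + t_shuffle + reduce_waves * t_reduce

# Example: 120 splits on a 13-node cluster with 2 map slots and 1 reduce slot per node.
print(estimate_job_time(num_splits=120, num_reducers=13,
                        map_slots=26, reduce_slots=13,
                        t_map=12.0, t_shuffle=8.0, t_reduce=20.0))
```

Sweeping the mapper and reducer counts in a model like this is how the configuration could be tuned before deployment, in the spirit of the paper.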
4. Machine Learning Driven Latency Optimization for Internet of Things Applications in Edge Computing (Cited by 1)
Authors: Uchechukwu AWADA, ZHANG Jiankang, CHEN Sheng, LI Shuangzhi, YANG Shouyi. ZTE Communications, 2023, Issue 2, pp. 40-52 (13 pages)
Emerging Internet of Things (IoT) applications require faster execution time and response time to achieve optimal performance. However, most IoT devices have limited or no computing capability to achieve such stringent application requirements. To this end, computation offloading in edge computing has been used for IoT systems to achieve the desired performance. Nevertheless, randomly offloading applications to any available edge without considering their resource demands, inter-application dependencies and edge resource availability may eventually result in execution delay and performance degradation. In this paper, we introduce Edge-IoT, a machine learning-enabled orchestration framework, which utilizes the states of edge resources and application resource requirements to facilitate a resource-aware offloading scheme for minimizing the average latency. We further propose a variant bin-packing optimization model that co-locates applications firmly on edge resources to fully utilize available resources. Extensive experiments show the effectiveness and resource efficiency of the proposed approach.
Keywords: edge computing; execution time; IoT; machine learning; resource efficiency
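A minimal sketch of the co-location idea behind the variant bin-packing model mentioned above, using a plain first-fit-decreasing heuristic over a single CPU dimension; node capacities, demands and the heuristic itself are assumptions, not Edge-IoT's actual optimization model:

```python
# Hedged sketch: first-fit-decreasing placement of applications onto edge nodes
# so that co-located applications use each node's capacity as fully as possible.
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    capacity: float                 # e.g. CPU cores available
    placed: list = field(default_factory=list)

    def used(self) -> float:
        return sum(demand for _, demand in self.placed)

def first_fit_decreasing(apps: dict, nodes: list) -> dict:
    """Place each app (name -> CPU demand) on the first node with enough headroom."""
    placement = {}
    for app, demand in sorted(apps.items(), key=lambda kv: kv[1], reverse=True):
        for node in nodes:
            if node.capacity - node.used() >= demand:
                node.placed.append((app, demand))
                placement[app] = node.name
                break
        else:
            placement[app] = "unplaced (rejected or queued)"
    return placement

nodes = [EdgeNode("edge-1", 4.0), EdgeNode("edge-2", 2.0)]
apps = {"camera-analytics": 2.5, "anomaly-detector": 1.5, "telemetry": 0.5, "dashboard": 1.0}
print(first_fit_decreasing(apps, nodes))
```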
5. Task Offloading Decision in Fog Computing System (Cited by 6)
Authors: Qiliang Zhu, Baojiang Si, Feifan Yang, You Ma. China Communications (SCIE, CSCD), 2017, Issue 11, pp. 59-68 (10 pages)
Fog computing is an emerging paradigm of cloud computing which aims to meet the growing computation demand of mobile applications. It can help mobile devices to overcome resource constraints by offloading computationally intensive tasks to cloud servers. The challenge of the cloud is to minimize the time of data transfer and task execution to the user, whose location changes owing to mobility, and the energy consumption of the mobile device. Providing satisfactory computation performance is particularly challenging in the fog computing environment. In this paper, we propose a novel fog computing model and offloading policy which can effectively bring the fog computing power closer to the mobile user. The fog computing model consists of remote cloud nodes and local cloud nodes attached to the wireless access infrastructure. We give a task offloading policy taking into account execution, energy consumption and other expenses. We finally evaluate the performance of our method through experimental simulations. The experimental results show that this method has a significant effect on reducing the execution time of tasks and the energy consumption of mobile devices.
Keywords: fog computing; task offloading; energy consumption; execution time
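A minimal sketch of an offloading decision of the kind described above: compare a weighted time-plus-energy cost of local execution against transfer plus remote execution. The linear cost form and all parameter values are assumptions, not the paper's policy:

```python
# Hedged sketch: offload a task to a fog/cloud node only when the weighted
# time + energy cost of offloading beats executing locally on the mobile device.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    data_bits: float     # input data to transfer if offloaded

def local_cost(task: Task, f_local: float, p_compute: float, w_t: float, w_e: float) -> float:
    t = task.cycles / f_local                  # local execution time (s)
    e = p_compute * t                          # device energy while computing (J)
    return w_t * t + w_e * e

def offload_cost(task: Task, rate: float, f_remote: float, p_tx: float,
                 w_t: float, w_e: float) -> float:
    t_tx = task.data_bits / rate               # uplink transfer time (s)
    t_exec = task.cycles / f_remote            # remote execution time (s)
    e = p_tx * t_tx                            # device spends energy only while transmitting (J)
    return w_t * (t_tx + t_exec) + w_e * e

task = Task(cycles=2e9, data_bits=8e6)
c_off = offload_cost(task, rate=20e6, f_remote=10e9, p_tx=1.2, w_t=1.0, w_e=0.5)
c_loc = local_cost(task, f_local=1e9, p_compute=0.9, w_t=1.0, w_e=0.5)
print("offload to fog node" if c_off < c_loc else "execute locally")
```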
6. Event-Triggered Sliding Mode Control for Trajectory Tracking of Nonlinear Systems (Cited by 6)
Authors: Aquib Mustafa, Narendra K. Dhar, Nishchal K. Verma. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2020, Issue 1, pp. 307-314 (8 pages)
In this paper, an event-triggered sliding mode control approach for the trajectory tracking problem of a nonlinear input affine system with disturbance is proposed. A second order robotic manipulator system is modeled as a general nonlinear input affine system. Initially, global asymptotic stability is ensured with a conventional periodic sampling approach for reference trajectory tracking. Then the proposed event-triggered sliding mode control approach is discussed, which guarantees semi-global uniform ultimate boundedness. The proposed control approach guarantees non-accumulation of control updates, ensuring lower bounds on inter-event triggering instants and avoiding Zeno behavior in the presence of the disturbance. The system shows better performance in terms of reduced control updates and ensures system stability, which further guarantees optimization of resource usage and cost. Simulation results are provided to validate the proposed methodology for the tracking problem of a robotic manipulator. The number of aperiodic control updates is found to be approximately 44% and 61% in the presence of constant and time-varying disturbances, respectively.
Keywords: event-trigger; inter-execution time; stability; sliding mode control; trajectory tracking
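A minimal simulation sketch of the event-triggering idea for a scalar first-order system: the sliding-mode-style control input is recomputed only when the state has drifted beyond a threshold since the last update; the plant, gains and threshold are assumptions, not the paper's manipulator model:

```python
# Hedged sketch: event-triggered sliding-mode-style control of a scalar system
# x_dot = u + d tracking a sinusoidal reference. The control input is updated
# only when the state has moved more than `eps` since the last update.
import math

dt, T = 0.001, 5.0
k, eps = 4.0, 0.05                 # switching gain and trigger threshold (assumed)
x, u = 0.0, 0.0
x_at_update = x
updates = 0

t = 0.0
while t < T:
    ref = math.sin(t)
    d = 0.2 * math.sin(5 * t)      # bounded disturbance
    # Event condition: measurement drifted too far since the last control update.
    if abs(x - x_at_update) > eps or t == 0.0:
        s = x - ref                                   # sliding variable (tracking error)
        u = math.cos(t) - k * math.tanh(s / 0.02)     # feedforward + smoothed switching term
        x_at_update = x
        updates += 1
    x += (u + d) * dt              # Euler integration of the plant
    t += dt

print(f"control updates: {updates} of {round(T / dt)} steps, final |error| = {abs(x - math.sin(T)):.3f}")
```

The printed update count shows the aperiodic-update saving that the abstract quantifies (roughly 44% and 61% in the paper's own experiments).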
7. Modified Harris Hawks Optimization Based Test Case Prioritization for Software Testing (Cited by 1)
Authors: Manar Ahmed Hamza, Abdelzahir Abdelmaboud, Souad Larabi-Marie-Sainte, Haya Mesfer Alshahrani, Mesfer Al Duhayyim, Hamza Awad Ibrahim, Mohammed Rizwanullah, Ishfaq Yaseen. Computers, Materials & Continua (SCIE, EI), 2022, Issue 7, pp. 1951-1965 (15 pages)
Generally, software testing is considered a proficient technique to achieve improvement in the quality and reliability of software. But the quality of test cases has a considerable influence on the fault-revealing capability of the software testing activity. Test Case Prioritization (TCP) remains a challenging issue, since prioritizing test cases is unsatisfactory in terms of Average Percentage of Faults Detected (APFD) and time spent upon execution results. TCP is mainly intended to design a collection of test cases that can accomplish early optimization using preferred characteristics. Studies conducted earlier focused on prioritizing the available test cases to accelerate the fault detection rate during software testing. In this aspect, the current study designs a Modified Harris Hawks Optimization based TCP (MHHO-TCP) technique for software testing. The aim of the proposed MHHO-TCP technique is to maximize APFD and minimize the overall execution time. In addition, the MHHO algorithm is designed to boost the exploration and exploitation abilities of the conventional HHO algorithm. In order to validate the enhanced efficiency of the MHHO-TCP technique, a wide range of simulations was conducted on different benchmark programs and the results were examined under several aspects. The experimental outcomes highlight the improved efficiency of the MHHO-TCP technique over recent approaches under different measures.
Keywords: software testing; Harris Hawks optimization; test case prioritization; APFD; execution time; metaheuristics
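The APFD objective named above has a standard closed form. A small sketch of computing it for a candidate test ordering follows; the fault matrix is invented and the MHHO search itself is not reproduced:

```python
# Hedged sketch: Average Percentage of Faults Detected (APFD) for a test ordering.
# APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is the 1-based
# position of the first test in the ordering that reveals fault i.
def apfd(order, fault_matrix):
    n, m = len(order), len(next(iter(fault_matrix.values())))
    total = 0
    for fault in range(m):
        for pos, test in enumerate(order, start=1):
            if fault_matrix[test][fault]:
                total += pos
                break
        else:
            total += n + 1                      # undetected fault: penalize past the end
    return 1 - total / (n * m) + 1 / (2 * n)

# Made-up fault matrix: test id -> which of 4 faults it reveals.
faults = {
    "t1": [1, 0, 0, 0],
    "t2": [0, 1, 1, 0],
    "t3": [0, 0, 0, 1],
    "t4": [1, 1, 0, 0],
}
print(apfd(["t2", "t3", "t1", "t4"], faults))   # a candidate prioritization (0.6875)
print(apfd(["t1", "t4", "t3", "t2"], faults))   # a weaker ordering for comparison (0.5)
```

A metaheuristic such as MHHO would search over orderings using a score like this as (part of) its fitness function.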
8. Packet Optimization of Software Defined Network Using Lion Optimization (Cited by 1)
Authors: Jagmeet Kaur, Shakeel Ahmed, Yogesh Kumar, A. Alaboudi, N. Z. Jhanjhi, Muhammad Fazal Ijaz. Computers, Materials & Continua (SCIE, EI), 2021, Issue 11, pp. 2617-2633 (17 pages)
There has been an explosion of cloud services as organizations take advantage of their continuity, predictability, and quality of service, and this raises concerns about latency, energy efficiency, and security. This increase in demand requires new configurations of networks, products, and service operators. For this purpose, the software-defined network is an efficient technology that can support future network functions along with intelligent applications and packet optimization. This work analyzes the offline cloud scenario in which machines are efficiently deployed and scheduled for user processing requests. Performance is evaluated in terms of reducing bandwidth, task execution times and latencies, and increasing throughput. A minimum execution time algorithm is used to compute the completion time of all the available resources allocated to the virtual machines, and a lion optimization algorithm is applied to packets in a cloud environment. The proposed work is shown to improve the throughput and latency rate.
Keywords: software-defined network; cloud computing; packet optimization; energy efficiency; lion optimization; minimum execution time
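A minimal sketch of a Minimum Execution Time (MET) style assignment as named in the abstract: each task is mapped to the machine with the smallest estimated execution time for it. The execution-time matrix is invented and the lion-optimization stage is not reproduced:

```python
# Hedged sketch: Minimum Execution Time (MET) assignment. Each task goes to the
# machine on which its estimated execution time is smallest, ignoring current load.
def met_schedule(etc):
    """etc[task][machine] = estimated time; returns task -> machine and per-machine load."""
    assignment, load = {}, {}
    for task, row in etc.items():
        machine = min(row, key=row.get)          # machine with the minimum execution time
        assignment[task] = machine
        load[machine] = load.get(machine, 0.0) + row[machine]
    return assignment, load

etc = {   # invented expected-time-to-compute matrix (seconds)
    "pkt-batch-1": {"vm1": 3.0, "vm2": 5.0, "vm3": 4.0},
    "pkt-batch-2": {"vm1": 2.5, "vm2": 2.0, "vm3": 6.0},
    "pkt-batch-3": {"vm1": 1.0, "vm2": 4.0, "vm3": 3.5},
}
assignment, load = met_schedule(etc)
print(assignment)                                # note MET can overload a fast machine
print("makespan estimate:", max(load.values()))
```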
9. A Modified Neutral-point Voltage Control Strategy for Three-level Inverters Based on Decomposition of Space Vector Diagram (Cited by 1)
Authors: Dereje Woldegiorgis, H. Alan Mantooth. CES Transactions on Electrical Machines and Systems (CSCD), 2022, Issue 2, pp. 124-134 (11 pages)
Capacitor voltage imbalance is a significant problem for three-level inverters. Due to the mid-point modulation of these inverter topologies, the neutral point potential moves up or down depending on the neutral point current direction, creating imbalanced voltages across the two capacitors. This imbalanced capacitor voltage causes imbalanced voltage stress among the semiconductor devices and increases output voltage and current harmonics. This paper introduces a modified voltage balancing strategy using two-level space vector modulation. By decomposing the three-level space vector diagram into a two-level space vector diagram and redistributing the dwell times of the two-level zero space vectors, the modified voltage balancing method ensures minimal NP voltage ripple. Compared to the commonly used NP voltage control method (using 3L SVM [9]), the proposed modified NP voltage control method offers slightly higher neutral-point voltage ripple and output voltage harmonics, but it has much lower switching loss, code size and execution time.
Keywords: three-level inverter; neutral-point voltage control; execution time
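A rough conceptual sketch of the dwell-time redistribution idea above: the redundant zero-state time of the decomposed two-level modulator is split unevenly according to the capacitor voltage error and the neutral-point current direction. The proportional split rule and all names are assumptions, not the paper's algorithm:

```python
# Hedged sketch: redistribute the redundant zero-state dwell time of a two-level
# (decomposed) SVM stage to counteract neutral-point (NP) voltage drift.
def split_zero_dwell(t_zero: float, v_c1: float, v_c2: float,
                     i_np: float, k: float = 0.8) -> tuple:
    """Return (t_upper, t_lower) dwell times for the two redundant zero/small states.

    The capacitor voltage error and the NP current direction decide which redundant
    state receives more time, nudging the NP back toward balance; k limits how far
    the split can move away from 50/50.
    """
    error = v_c1 - v_c2                          # capacitor voltage imbalance
    # Normalized correction in [-1, 1]; sign depends on error and NP current direction.
    correction = max(-1.0, min(1.0, k * error / max(abs(v_c1) + abs(v_c2), 1e-9)))
    if i_np < 0:
        correction = -correction                 # current direction flips the effect
    t_upper = 0.5 * t_zero * (1.0 + correction)
    t_lower = t_zero - t_upper
    return t_upper, t_lower

# Example: 20 us of zero-state time, upper capacitor 10 V above the lower one.
print(split_zero_dwell(t_zero=20e-6, v_c1=405.0, v_c2=395.0, i_np=+2.0))
```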
10. A Generic Graph Model for WCET Analysis of Multi-Core Concurrent Applications
Authors: Robert Mittermayr, Johann Blieberger. Journal of Software Engineering and Applications, 2016, Issue 5, pp. 182-198 (17 pages)
Worst-case execution time (WCET) analysis of multi-threaded software is still a challenge. This comes mainly from the fact that synchronization has to be taken into account. In this paper, we focus on this issue and on automatically calculating and incorporating stalling times (e.g. caused by lock contention) in a generic graph model. The idea that thread interleavings can be studied with a matrix calculus is novel in this research area. Our sparse matrix representations of the program are manipulated using an extended Kronecker algebra. The resulting graph represents multi-threaded programs much as CFGs do for sequential programs. With this graph model, we are able to calculate the WCET of multi-threaded concurrent programs including stalling times which are due to synchronization. We employ a generating function-based approach for setting up data flow equations which are solved by well-known elimination-based dataflow analysis methods or an off-the-shelf equation solver. The WCET of multi-threaded programs can finally be calculated with a non-linear function solver.
Keywords: worst-case execution time analysis; program analysis; concurrency; multi-threaded programs; Kronecker algebra
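A tiny sketch of the matrix view of interleavings mentioned above: for two independent toy thread CFGs, interleaved execution corresponds to the Kronecker sum of their adjacency matrices. The paper's extended Kronecker algebra additionally models synchronization, which this sketch does not capture:

```python
# Hedged sketch: the Kronecker sum A (+) B = A (x) I + I (x) B of two thread CFG
# adjacency matrices yields a graph over paired program points in which exactly
# one thread advances per edge, i.e. a simple interleaving model.
import numpy as np

thread_a = np.array([[0, 1, 0],      # toy CFG of thread A (3 nodes)
                     [0, 0, 1],
                     [0, 0, 0]])
thread_b = np.array([[0, 1],         # toy CFG of thread B (2 nodes)
                     [0, 0]])

interleaved = (np.kron(thread_a, np.eye(2, dtype=int))
               + np.kron(np.eye(3, dtype=int), thread_b))
print("interleaved graph over", interleaved.shape[0], "paired nodes")
print(interleaved)
```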
11. High Efficiency Predictive Control Strategy Applied to a Power Factor Correction System
Author: Fernando Parillo. Journal of Energy and Power Engineering, 2012, Issue 11, pp. 1809-1815 (7 pages)
This work explores the feasibility of a novel predictive control strategy on a power factor correction system. The proposed control strategy allows a significant reduction of the power losses with respect to a classical predictive control strategy working with a fixed execution time Ts. The proposed control strategy operates with a variable execution time, and it has been implemented using a low-cost hardware platform based on a TI TMS320F2812 DSP. The chosen platform is capable of executing control strategy code with a variable execution time. This operation is performed by properly setting the timer registers of one of the two event manager (A/B) blocks present on the mentioned DSP (digital signal processor).
Keywords: DSP; event manager A/B; PFC; variable execution time; system efficiency
12. A Worst-Case Execution Time Analysis Approach Based on Independent Paths for ARM Programs
Authors: KONG Liangliang, JIANG Jianhui. Wuhan University Journal of Natural Sciences (CAS), 2012, Issue 5, pp. 391-399 (9 pages)
To overcome the disadvantages of traditional worst-case execution time (WCET) analysis approaches, we propose a new WCET analysis approach based on independent paths for ARM programs. Based on the results of program flow analysis, it reduces and partitions the control flow graph of the program and obtains a directed graph. Using linear combinations of independent paths of the directed graph, a set of feasible paths can be generated that gives complete coverage in terms of the program paths considered. Their timing measurements and the execution counts of program segments are derived from a limited number of measurements of an instrumented version of the program. After the timing measurements of the feasible paths are linearly expressed by the execution times of program segments, a system of equations is derived as a constraint problem, from which we can obtain the execution times of program segments. By assigning the execution times of program segments to weights of edges in the directed graph, the WCET estimate can be calculated on the basis of graph-theoretical techniques. Comparing our WCET estimate with the WCET measurement obtained by exhaustive measurement, the maximum error ratio is only 8.2593%. It is shown that the proposed approach is an effective way to obtain a safe and tight WCET estimate for ARM programs.
Keywords: worst-case execution time; independent path; real-time system; least squares; ARM microprocessor
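A minimal numerical sketch of the step described above: recover per-segment execution times from measured path times by least squares, then price a candidate path by its segment counts. The count matrix and timings are invented:

```python
# Hedged sketch: measured path times ~= C @ x, where C[i][j] is how many times path i
# executes program segment j and x holds the unknown per-segment execution times.
import numpy as np

# Invented data: 4 measured feasible paths over 3 program segments.
C = np.array([[1, 2, 0],
              [1, 0, 3],
              [2, 1, 1],
              [1, 1, 1]], dtype=float)
t_measured = np.array([52.0, 95.0, 76.0, 63.0])   # microseconds per measured path

x, *_ = np.linalg.lstsq(C, t_measured, rcond=None)
print("estimated per-segment times (us):", np.round(x, 2))

# WCET-style estimate for a candidate worst-case path expressed by its segment counts.
worst_path_counts = np.array([2.0, 2.0, 3.0])
print("estimated path time (us):", round(float(worst_path_counts @ x), 2))
```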