Journal Articles
329,639 articles found
1. MCWOA Scheduler: Modified Chimp-Whale Optimization Algorithm for Task Scheduling in Cloud Computing (cited: 1)
Authors: Chirag Chandrashekar, Pradeep Krishnadoss, Vijayakumar Kedalu Poornachary, Balasundaram Ananthakrishnan. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2593-2616 (24 pages).
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional MCWOA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO).
Keywords: Cloud computing, scheduling, chimp optimization algorithm, whale optimization algorithm
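The abstract names two reusable building blocks: Sobol-sequence population initialization and a WOA-style bubble-net/spiral position update. A minimal sketch of both is shown below; it is not the authors' MCWOA, and the objective function, bounds, and parameter schedule are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc  # Sobol sampler (SciPy >= 1.7)

def sobol_init(n_agents, dim, lb, ub, seed=0):
    """Spread the initial population evenly over [lb, ub]^dim with a Sobol sequence."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    return lb + (ub - lb) * sampler.random(n_agents)

def woa_style_update(pos, best, a, rng):
    """One WOA-style move: shrinking encirclement / random search, or a log-spiral step."""
    new = np.empty_like(pos)
    for i, x in enumerate(pos):
        if rng.random() < 0.5:                      # encircling or random search
            A = a * (2 * rng.random(x.size) - 1)
            C = 2 * rng.random(x.size)
            ref = best if np.all(np.abs(A) < 1) else pos[rng.integers(len(pos))]
            new[i] = ref - A * np.abs(C * ref - x)
        else:                                       # spiral "bubble-net" move
            l = rng.uniform(-1, 1)
            new[i] = np.abs(best - x) * np.exp(l) * np.cos(2 * np.pi * l) + best
    return new

rng = np.random.default_rng(1)
lb, ub, dim = 0.0, 1.0, 8
fitness = lambda x: float(np.sum(x ** 2))           # placeholder objective, not the paper's
pop = sobol_init(16, dim, lb, ub)
best_f = np.inf
for t in range(50):
    fits = np.array([fitness(p) for p in pop])
    best_f = min(best_f, float(fits.min()))
    pop = np.clip(woa_style_update(pop, pop[fits.argmin()], a=2 - 2 * t / 50, rng=rng), lb, ub)
print("best fitness found:", round(best_f, 6))
```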
2. Underwater four-quadrant dual-beam circumferential scanning laser fuze using nonlinear adaptive backscatter filter based on pauseable SAF-LMS algorithm (cited: 1)
Authors: Guangbo Xu, Bingting Zha, Hailu Yuan, Zhen Zheng, He Zhang. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 1-13 (13 pages).
The phenomenon of a target echo peak overlapping with the backscattered echo peak significantly undermines the detection range and precision of underwater laser fuzes. To overcome this issue, we propose a four-quadrant dual-beam circumferential scanning laser fuze to distinguish various interference signals and provide more real-time data for the backscatter filtering algorithm. This enhances the algorithm loading capability of the fuze. To address the problem of insufficient filtering capacity in existing linear backscatter filtering algorithms, we develop a nonlinear backscatter adaptive filter based on the spline adaptive filter least mean square (SAF-LMS) algorithm. We also design an algorithm pause module to retain the original trend of the target echo peak, improving the time discrimination accuracy and anti-interference capability of the fuze. Finally, experiments are conducted with varying signal-to-noise ratios of the original underwater target echo signals. The experimental results show that the average signal-to-noise ratio before and after filtering can be improved by more than 31 dB, with an increase of up to 76% in extreme detection distance.
Keywords: Laser fuze, underwater laser detection, backscatter adaptive filter, spline least mean square algorithm, nonlinear filtering algorithm
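The filter described above is an adaptive canceller driven by an LMS-type update; the paper's SAF-LMS additionally inserts an adaptive spline nonlinearity and pauses adaptation around the target echo, neither of which is reproduced here. The sketch below shows only a normalized LMS core on synthetic data, with the signal model, tap count, and step size as assumptions.

```python
import numpy as np

def nlms_filter(d, x, taps=16, mu=0.5):
    """Normalized LMS: adapt w so w.x[n] tracks d[n]; return the residual (cleaned) signal."""
    w = np.zeros(taps)
    e = np.zeros_like(d)
    for n in range(taps, len(d)):
        x_vec = x[n - taps:n][::-1]                    # most recent reference samples first
        y = w @ x_vec                                  # estimate of the backscatter component
        e[n] = d[n] - y                                # residual keeps the target echo
        w += mu * e[n] * x_vec / (x_vec @ x_vec + 1e-8)  # normalized stochastic-gradient step
    return e

# Synthetic example: decaying backscatter plus a narrow target echo near sample 600.
rng = np.random.default_rng(0)
n = np.arange(1000)
backscatter = 2.0 * np.exp(-n / 300.0)
target = 1.5 * np.exp(-0.5 * ((n - 600) / 5.0) ** 2)
received = backscatter + target + 0.05 * rng.standard_normal(n.size)
reference = backscatter + 0.05 * rng.standard_normal(n.size)   # backscatter-correlated channel
cleaned = nlms_filter(received, reference)
print("echo peak location after filtering:", int(np.argmax(cleaned)))
```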
3. Enhancing Cancer Classification through a Hybrid Bio-Inspired Evolutionary Algorithm for Biomarker Gene Selection (cited: 1)
Authors: Hala AlShamlan, Halah AlMazrua. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 675-694 (20 pages).
In this study, our aim is to address the problem of gene selection by proposing a hybrid bio-inspired evolutionary algorithm that combines Grey Wolf Optimization (GWO) with Harris Hawks Optimization (HHO) for feature selection. The motivation for utilizing GWO and HHO stems from their bio-inspired nature and their demonstrated success in optimization problems. We aim to leverage the strengths of these algorithms to enhance the effectiveness of feature selection in microarray-based cancer classification. We selected leave-one-out cross-validation (LOOCV) to evaluate the performance of two widely used classifiers, k-nearest neighbors (KNN) and support vector machine (SVM), on high-dimensional cancer microarray data. The proposed method is extensively tested on six publicly available cancer microarray datasets, and a comprehensive comparison with recently published methods is conducted. Our hybrid algorithm demonstrates its effectiveness in improving classification performance, surpassing alternative approaches in terms of precision. The outcomes confirm the capability of our method to substantially improve both the precision and efficiency of cancer classification, thereby advancing the development of more efficient treatment strategies. The proposed hybrid method offers a promising solution to the gene selection problem in microarray-based cancer classification. It improves the accuracy and efficiency of cancer diagnosis and treatment, and its superior performance compared to other methods highlights its potential applicability in real-world cancer classification tasks. By harnessing the complementary search mechanisms of GWO and HHO, we leverage their bio-inspired behavior to identify informative genes relevant to cancer diagnosis and treatment.
Keywords: Bio-inspired algorithms, bioinformatics, cancer classification, evolutionary algorithm, feature selection, gene expression, grey wolf optimizer, Harris hawks optimization, k-nearest neighbor, support vector machine
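The wrapper part of such a method scores a candidate gene subset by classifier accuracy under leave-one-out cross-validation. A small sketch of that fitness evaluation with scikit-learn on synthetic data is shown below; the binary-mask encoding, KNN settings, and size penalty are assumptions, and the GWO/HHO search that proposes masks is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_fitness(mask, X, y, alpha=0.99):
    """Score a binary gene mask: LOOCV accuracy of KNN, lightly penalized by subset size."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X[:, mask], y, cv=LeaveOneOut()).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

# Toy stand-in for a microarray matrix: 60 samples x 200 genes, few informative genes.
X, y = make_classification(n_samples=60, n_features=200, n_informative=8,
                           n_redundant=0, random_state=0)
rng = np.random.default_rng(0)
mask = rng.random(200) < 0.1          # one random candidate subset (~20 genes)
print("fitness of random subset:", round(subset_fitness(mask, X, y), 3))
```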
4. Rao Algorithms-Based Structure Optimization for Heterogeneous Wireless Sensor Networks (cited: 1)
Authors: Shereen K. Refaay, Samia A. Ali, Moumen T. El-Melegy, Louai A. Maghrabi, Hamdy H. El-Sayed. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 873-897 (25 pages).
The structural optimization of wireless sensor networks is a critical issue because it impacts energy consumption and hence the network's lifetime. Many studies have been conducted for homogeneous networks, but few have been performed for heterogeneous wireless sensor networks. This paper utilizes Rao algorithms to optimize the structure of heterogeneous wireless sensor networks according to node locations and their initial energies. The proposed algorithms lack algorithm-specific parameters and metaphorical connotations. The proposed algorithms examine the search space based on the relations of the population with the best, worst, and randomly assigned solutions. The proposed algorithms can be evaluated using any routing protocol; however, we have chosen well-known routing protocols from the literature: Low Energy Adaptive Clustering Hierarchy (LEACH), Power-Efficient Gathering in Sensor Information Systems (PEAGSIS), Partitioned-based Energy-efficient LEACH (PE-LEACH), and the recent Power-Efficient Gathering in Sensor Information Systems Neural Network (PEAGSIS-NN) routing protocol. We compare our optimized method with the Jaya algorithm, the Particle Swarm Optimization-based Energy Efficient Clustering (PSO-EEC) protocol, and the hybrid Harmony Search Algorithm and PSO (HSA-PSO) algorithms. The efficiencies of our proposed algorithms are evaluated by conducting experiments in terms of the network lifetime (first dead node, half dead nodes, and last dead node), energy consumption, packets to cluster head, and packets to the base station. The experimental results were compared with those obtained using the Jaya optimization algorithm. The proposed algorithms exhibited the best performance. The proposed approach successfully prolongs the network lifetime by 71% for the PEAGSIS protocol, 51% for the LEACH protocol, 10% for the PE-LEACH protocol, and 73% for the PEAGSIS-NN protocol; moreover, it enhances other criteria such as energy conservation, fitness convergence, packets to cluster head, and packets to the base station.
Keywords: Wireless sensor networks, Rao algorithms, optimization, LEACH, PEAGSIS
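Rao algorithms are parameter-free: in the Rao-1 variant each candidate moves using only the current best and worst solutions. A minimal Rao-1 iteration on a generic objective is sketched below; the objective, bounds, and population size are illustrative, and the WSN-specific solution encoding (node positions, cluster structure) is not modeled.

```python
import numpy as np

def rao1_minimize(objective, lb, ub, dim, pop_size=20, iters=200, seed=0):
    """Rao-1 rule: X_new = X + r * (X_best - X_worst); keep the move only if it improves."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
        for i in range(pop_size):
            r = rng.random(dim)
            cand = np.clip(pop[i] + r * (best - worst), lb, ub)
            f = objective(cand)
            if f < fit[i]:                       # greedy acceptance
                pop[i], fit[i] = cand, f
    return pop[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))         # placeholder objective
x_best, f_best = rao1_minimize(sphere, -5.0, 5.0, dim=10)
print("best objective value:", round(f_best, 6))
```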
5. Falcon Optimization Algorithm-Based Energy Efficient Communication Protocol for Cluster-Based Vehicular Networks (cited: 1)
Authors: Youseef Alotaibi, B. Rajasekar, R. Jayalakshmi, Surendran Rajendran. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4243-4262 (20 pages).
Rapid development in Information Technology (IT) has enabled several novel application areas, such as large outdoor vehicular networks for Vehicle-to-Vehicle (V2V) transmission. Vehicular networks provide a safer and more effective driving experience by presenting time-sensitive and location-aware data. Communication occurs directly between V2V and Base Station (BS) units such as the Road Side Unit (RSU), referred to as Vehicle-to-Infrastructure (V2I). However, the frequent topology alterations in VANETs generate several problems with data transmission, as vehicle velocity differs with time. Therefore, the design of an effectual routing protocol for reliable and stable communications is significant. Current research demonstrates that clustering is an intelligent method for effectual routing in a mobile environment. Therefore, this article presents a Falcon Optimization Algorithm-based Energy Efficient Communication Protocol for Cluster-based Routing (FOA-EECPCR) technique in VANETs. The FOA-EECPCR technique intends to group the vehicles and determine the shortest route in the VANET. To accomplish this, the FOA-EECPCR technique initially clusters the vehicles using FOA with fitness functions comprising energy, distance, and trust level. For the routing process, the Sparrow Search Algorithm (SSA) is derived with a fitness function that encompasses two variables, namely energy and distance. A series of experiments have been conducted to exhibit the enhanced performance of the FOA-EECPCR method. The experimental outcomes demonstrate the enhanced performance of the FOA-EECPCR approach over other current methods.
Keywords: Vehicular networks, communication protocol, clustering, falcon optimization algorithm, routing
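The clustering stage scores candidate cluster heads with a fitness built from energy, distance, and trust level. A toy weighted-fitness sketch follows; the weights, normalization, and vehicle attributes are assumptions, and the FOA search that optimizes the selection is omitted entirely.

```python
import numpy as np

def ch_fitness(energy, dist_to_members, trust, w=(0.5, 0.3, 0.2)):
    """Higher residual energy and trust, shorter intra-cluster distance => better cluster head."""
    e = energy / energy.max()                          # normalize each criterion to [0, 1]
    d = 1.0 - dist_to_members / dist_to_members.max()
    t = trust                                          # assumed already in [0, 1]
    return w[0] * e + w[1] * d + w[2] * t

rng = np.random.default_rng(3)
n_vehicles = 12
energy = rng.uniform(10, 100, n_vehicles)              # residual energy (assumed units)
dist = rng.uniform(5, 300, n_vehicles)                 # mean distance to cluster members (m)
trust = rng.uniform(0.4, 1.0, n_vehicles)              # trust score
scores = ch_fitness(energy, dist, trust)
print("selected cluster head (vehicle index):", int(np.argmax(scores)))
```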
6. RRT Autonomous Detection Algorithm Based on Multiple Pilot Point Bias Strategy and Karto SLAM Algorithm
Authors: Lieping Zhang, Xiaoxu Shi, Liu Tang, Yilin Wang, Jiansheng Peng, Jianchu Zou. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2111-2136 (26 pages).
A Rapid-exploration Random Tree (RRT) autonomous detection algorithm based on the multi-guide-node deflection strategy and the Karto Simultaneous Localization and Mapping (SLAM) algorithm was proposed to solve the problems of low efficiency in detecting frontier boundary points and drift distortion during map building that affect the traditional RRT algorithm in autonomous detection strategies for mobile robots. Firstly, an RRT global frontier boundary point detection algorithm based on the multi-guide-node deflection strategy was put forward, which introduces the deflection probability of guide nodes into the random sampling function so that the global search tree can detect frontier boundary points towards the guide nodes according to random probability. After that, a new autonomous detection algorithm for mobile robots was proposed by combining the graph-optimization-based Karto SLAM algorithm with the improved RRT algorithm. An algorithm simulation platform based on Gazebo was built. The simulation results show that, compared with the traditional RRT algorithm, the proposed RRT autonomous detection algorithm can effectively reduce the autonomous detection time, shorten the detection trajectory while maintaining high average detection coverage, and complete the autonomous detection and mapping task more efficiently. Finally, with the help of a ROS-based mobile robot experimental platform, the performance of the proposed algorithm was verified in real environments with different obstacles. The experimental results show that, in actual environments with both simple and complex obstacles, the proposed RRT autonomous detection algorithm was superior to the traditional RRT autonomous detection algorithm in detection time, detection trajectory length, and average coverage, thus improving the efficiency and accuracy of autonomous detection.
Keywords: Autonomous detection, RRT algorithm, mobile robot, ROS, Karto SLAM algorithm
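The key change to RRT described above is a biased sampling function: with some probability the random sample is drawn toward a guide node rather than uniformly over the map. A compact sketch of that sampler plus the standard RRT extend step follows; the map bounds, deflection probability, and step size are assumptions, and the Karto SLAM and frontier-detection components are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
MAP_MIN, MAP_MAX, STEP = 0.0, 10.0, 0.5

def biased_sample(guide_nodes, p_deflect=0.3):
    """With probability p_deflect, sample near a randomly chosen guide node."""
    if rng.random() < p_deflect:
        g = guide_nodes[rng.integers(len(guide_nodes))]
        return np.clip(g + rng.normal(scale=0.5, size=2), MAP_MIN, MAP_MAX)
    return rng.uniform(MAP_MIN, MAP_MAX, size=2)         # ordinary uniform sample

def extend(tree, sample):
    """Grow the tree by one bounded step from its nearest node toward the sample."""
    nearest = min(tree, key=lambda q: np.linalg.norm(q - sample))
    direction = sample - nearest
    dist = np.linalg.norm(direction)
    new = nearest + (STEP / dist) * direction if dist > STEP else sample
    tree.append(new)
    return new

tree = [np.array([5.0, 5.0])]                             # start pose
guides = [np.array([9.0, 9.0]), np.array([1.0, 8.0])]     # hypothetical guide nodes
for _ in range(200):
    extend(tree, biased_sample(guides))
print("tree size:", len(tree), "last node:", np.round(tree[-1], 2))
```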
7. Maximizing Resource Efficiency in Cloud Data Centers through Knowledge-Based Flower Pollination Algorithm (KB-FPA)
Authors: Nidhika Chauhan, Navneet Kaur, Kamaljit Singh Saini, Sahil Verma, Kavita, Ruba Abu Khurma, Pedro A. Castillo. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 3757-3782 (26 pages).
Cloud computing is a dynamic and rapidly evolving field, where the demand for resources fluctuates continuously. This paper delves into the imperative need for adaptability in the allocation of resources to applications and services within cloud computing environments. The motivation stems from the pressing issue of accommodating fluctuating levels of user demand efficiently. By adhering to the proposed resource allocation method, we aim to achieve a substantial reduction in energy consumption. This reduction hinges on the precise and efficient allocation of resources to the tasks that require them most, aligning with the broader goal of sustainable and eco-friendly cloud computing systems. To enhance the resource allocation process, we introduce a novel knowledge-based optimization algorithm. In this study, we rigorously evaluate its efficacy by comparing it to existing algorithms, including the Flower Pollination Algorithm (FPA), Spark Lion Whale Optimization (SLWO), and the Firefly Algorithm. Our findings reveal that our proposed algorithm, the Knowledge-Based Flower Pollination Algorithm (KB-FPA), consistently outperforms these conventional methods in both resource allocation efficiency and energy consumption reduction. This paper underscores the profound significance of resource allocation in the realm of cloud computing. By addressing the critical issue of adaptability and energy efficiency, it lays the groundwork for a more sustainable future in cloud computing systems. Our contribution to the field lies in the introduction of a new resource allocation strategy, offering the potential for significantly improved efficiency and sustainability within cloud computing infrastructures.
Keywords: Cloud computing, resource allocation, energy consumption, optimization algorithm, flower pollination algorithm
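The optimizer underlying KB-FPA is the flower pollination algorithm, whose global step moves a solution toward the current best with a Lévy-distributed step and whose local step mixes two random solutions. A minimal FPA loop is sketched below; the Lévy exponent, switch probability, and objective are assumptions, and the paper's knowledge-base component is not modeled.

```python
import numpy as np
from math import gamma

def levy(dim, rng, beta=1.5):
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def fpa_minimize(f, lb, ub, dim, n=20, iters=300, p_switch=0.8, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(n):
            if rng.random() < p_switch:                  # global (biotic) pollination
                cand = pop[i] + 0.1 * levy(dim, rng) * (best - pop[i])
            else:                                        # local (abiotic) pollination
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
    return float(fit.min())

print("best value:", round(fpa_minimize(lambda x: float(np.sum(x ** 2)), -5, 5, dim=10), 6))
```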
8. Hybrid Prairie Dog and Beluga Whale Optimization Algorithm for Multi-Objective Load Balanced-Task Scheduling in Cloud Computing Environments
Authors: K. Ramya, Senthilselvi Ayothi. China Communications (SCIE, CSCD), 2024, No. 7, pp. 307-324 (18 pages).
Cloud computing technology is utilized to achieve resource utilization of remote-based virtual computers and to facilitate consumers with rapid and accurate massive data services. It utilizes on-demand resource provisioning, but the necessitated constraints of rapid turnaround time, minimal execution cost, high rate of resource utilization and limited makespan transform the Load Balancing (LB) process-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precise mapping of tasks to virtual machines, with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps in decreasing SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy which utilizes the merits of PDOA and BWOA for attaining reactive decision making with respect to the process of assigning tasks to virtual resources by taking their priorities into account. It addresses the problem of premature convergence with well-balanced exploration and exploitation to attain the necessitated Quality of Service (QoS) for minimizing the waiting time incurred during the TS process. It further balances exploration and exploitation rates for reducing the makespan during task allocation with complete awareness of the VM state. The results of the proposed HPDBWOA confirmed a 32.18% reduction in energy utilization and a 28.94% reduction in cost, better than the approaches used for investigation. The statistical investigation of the proposed HPDBWOA conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system time, and response time.
Keywords: Beluga Whale Optimization Algorithm (BWOA), cloud computing, improved Hopcroft-Karp algorithm, Infrastructure as a Service (IaaS), Prairie Dog Optimization Algorithm (PDOA), Virtual Machine (VM)
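Whatever metaheuristic proposes the task-to-VM mapping, a scheduler of this kind ultimately scores it on quantities such as makespan and resource utilization. A small evaluation sketch for an arbitrary assignment follows; the task lengths, VM speeds, and random assignment are illustrative stand-ins for an HPDBWOA-produced schedule.

```python
import numpy as np

def evaluate_schedule(assignment, task_len, vm_mips):
    """Return (makespan, mean VM utilization) for a task -> VM index vector."""
    finish = np.zeros(len(vm_mips))
    for t, vm in enumerate(assignment):
        finish[vm] += task_len[t] / vm_mips[vm]      # sequential execution per VM
    makespan = finish.max()
    utilization = finish.mean() / makespan           # 1.0 means perfectly balanced load
    return makespan, utilization

rng = np.random.default_rng(0)
task_len = rng.integers(1_000, 10_000, size=40)      # task sizes in MI (assumed)
vm_mips = np.array([500, 750, 1_000, 1_500])         # VM speeds in MIPS (assumed)
assignment = rng.integers(0, len(vm_mips), size=40)  # stand-in for an optimizer's output
ms, util = evaluate_schedule(assignment, task_len, vm_mips)
print(f"makespan = {ms:.1f} s, mean utilization = {util:.2%}")
```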
9. Research on Total Electric Field Prediction Method of Ultra-High Voltage Direct Current Transmission Line Based on Stacking Algorithm
Authors: Yinkong Wei, Mucong Wu, Wei Wei, Paulo R. F. Rocha, Ziyi Cheng, Weifang Yao. Computer Systems Science & Engineering, 2024, No. 3, pp. 723-738 (16 pages).
Ultra-high voltage (UHV) transmission lines are an important part of China's power grid and are often surrounded by a complex electromagnetic environment. The ground total electric field is considered a main electromagnetic environment indicator of UHV transmission lines and is currently employed for reliable long-term operation of the power grid. Yet, accurate prediction of the ground total electric field remains a technical challenge. In this work, we collected total electric field data from the Ling-Shao line of the Ningdong-Zhejiang ±800 kV UHVDC transmission project and performed an outlier analysis of the data. We show that the Local Outlier Factor (LOF) elimination algorithm has a small average difference and outperforms the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Isolation Forest elimination algorithms. Moreover, the Stacking algorithm is found to have superior prediction accuracy compared with a variety of similar prediction algorithms, including the traditional finite element method. The low prediction error of the Stacking algorithm highlights its superior ability to accurately forecast the ground total electric field of UHVDC transmission lines.
Keywords: DC transmission line, total electric field, effective data, multivariable outliers, LOF algorithm, Stacking algorithm
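Both stages named in the abstract have direct scikit-learn counterparts: LocalOutlierFactor for screening measurement outliers and StackingRegressor for the ensemble predictor. The sketch below chains the two on synthetic data; the base learners, LOF neighborhood size, and features are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import SVR

# Synthetic stand-in for (line geometry, weather, ...) -> total electric field samples.
X, y = make_regression(n_samples=400, n_features=6, noise=5.0, random_state=0)
y[::50] += 300.0                                   # inject a few gross outliers

# Stage 1: LOF screening on the joint feature/target space.
keep = LocalOutlierFactor(n_neighbors=20).fit_predict(np.c_[X, y]) == 1
X, y = X[keep], y[keep]

# Stage 2: stacking ensemble trained on the cleaned data.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("svr", SVR(C=10.0))],
    final_estimator=Ridge())
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
stack.fit(X_tr, y_tr)
print("kept samples:", int(keep.sum()), " test R^2:", round(stack.score(X_te, y_te), 3))
```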
10. Hybrid Seagull and Whale Optimization Algorithm-Based Dynamic Clustering Protocol for Improving Network Longevity in Wireless Sensor Networks
Authors: P. Vinoth Kumar, K. Venkatesh. China Communications (SCIE, CSCD), 2024, No. 10, pp. 113-131 (19 pages).
Energy efficiency is the prime concern in Wireless Sensor Networks (WSNs), as maximized energy consumption essentially limits the energy stability and network lifetime. Clustering is the significant approach essential for minimizing unnecessary transmission energy consumption with a sustained network lifetime. This clustering process is identified as a Non-deterministic Polynomial (NP)-hard optimization problem, which has a maximized probability of being solved through metaheuristic algorithms. This adoption of a hybrid metaheuristic algorithm concentrates on the identification of optimal or near-optimal solutions, which aids in better energy stability during Cluster Head (CH) selection. In this paper, the Hybrid Seagull and Whale Optimization Algorithm-based Dynamic Clustering Protocol (HSWOA-DCP) is proposed, combining the exploitation benefits of WOA and the exploration merits of SEOA for optimal CH selection, maintaining energy stability with prolonged network lifetime. HSWOA-DCP adopts a modified version of the SEagull Optimization Algorithm (SEOA) to handle the problems of premature convergence and limited computational accuracy that are maximally possible during CH selection. The inclusion of SEOA into WOA improves the global searching capability during CH selection and prevents worst-fitness nodes from being selected as CH, since the spiral attacking behavior of SEOA is similar to the bubble-net characteristics of WOA. This CH selection integrates the spiral attacking principles of SEOA and the contraction surrounding mechanism of WOA to improve computational accuracy and prevent frequent election processes. It also incorporates a Lévy flight strategy into SEOA to potentially avoid premature convergence and attain a better trade-off between the rates of exploration and exploitation in a more effective manner. The simulation results of the proposed HSWOA-DCP confirmed better network survivability rate, network residual energy and overall network throughput on par with the competitive CH selection schemes under different numbers of data transmission rounds. The statistical analysis of the proposed HSWOA-DCP scheme also confirmed its energy stability with respect to an ANOVA test.
Keywords: Clustering, energy stability, network lifetime, seagull optimization algorithm (SEOA), whale optimization algorithm (WOA), wireless sensor networks (WSNs)
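Protocols of this kind are ultimately judged on network survivability: the rounds at which the first, half, and last node die as energy drains round by round. A toy round-based energy model computing those milestones is sketched below; the energy costs and the random CH choice are placeholders for the HSWOA-DCP selection, not the paper's radio model.

```python
import numpy as np

def simulate_lifetime(n_nodes=100, e0=0.5, e_ch=0.02, e_member=0.005, seed=0):
    """Drain energy per round (CHs pay more) and report FND/HND/LND rounds."""
    rng = np.random.default_rng(seed)
    energy = np.full(n_nodes, e0)                 # initial energy per node (assumed, J)
    fnd = hnd = lnd = None
    for rnd in range(1, 10_000):
        alive = energy > 0
        if not alive.any():
            lnd = rnd
            break
        # Placeholder CH choice: ~10% of nodes; a real run would call HSWOA-DCP here.
        ch = rng.random(n_nodes) < 0.1
        energy[alive & ch] -= e_ch
        energy[alive & ~ch] -= e_member
        dead = int((energy <= 0).sum())
        if fnd is None and dead >= 1:
            fnd = rnd
        if hnd is None and dead >= n_nodes // 2:
            hnd = rnd
    return fnd, hnd, lnd

print("FND, HND, LND rounds:", simulate_lifetime())
```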
11. A Sharding Scheme Based on Graph Partitioning Algorithm for Public Blockchain
Authors: Shujiang Xu, Ziye Wang, Lianhai Wang, Miodrag J. Mihaljević, Shuhui Zhang, Wei Shao, Qizheng Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3311-3327 (17 pages).
Blockchain technology, with its attributes of decentralization, immutability, and traceability, has emerged as a powerful catalyst for enhancing traditional industries in terms of optimizing business processes. However, transaction performance and scalability have become the main challenges hindering the widespread adoption of blockchain. Due to its inability to meet the demands of high-frequency trading, blockchain cannot be adopted in many scenarios. To improve the transaction capacity, researchers have proposed several on-chain scaling technologies, including lightning networks, directed acyclic graph technology, state channels, and sharding mechanisms, among which sharding emerges as a potential scaling technology. Nevertheless, excessive cross-shard transactions and uneven shard workloads prevent the sharding mechanism from achieving the expected aim. This paper proposes a graph-based sharding scheme for public blockchain to efficiently balance the transaction distribution. By mitigating cross-shard transactions and evening out workloads among shards, the scheme reduces transaction confirmation latency and enhances the transaction capacity of the blockchain. Therefore, the scheme can achieve high-frequency transactions as well as better blockchain scalability. Experimental results show that the scheme effectively reduces the cross-shard transaction ratio to a range of 35%-56% and significantly decreases the transaction confirmation latency to 6 s in a blockchain with no more than 25 shards.
Keywords: Blockchain, sharding, graph partitioning algorithm
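A graph-based sharding scheme treats accounts as vertices and transactions as edges, then partitions the graph so that few edges cross shards. The sketch below uses networkx's Kernighan-Lin bisection (applied recursively to obtain four shards) and reports the resulting cross-shard transaction ratio on a random graph; the paper's own partitioning objective and workload-balancing terms are not reproduced.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def partition_into_shards(g, depth=2):
    """Recursively bisect the transaction graph into 2**depth shards."""
    parts = [set(g.nodes)]
    for _ in range(depth):
        nxt = []
        for p in parts:
            a, b = kernighan_lin_bisection(g.subgraph(p))
            nxt.extend([a, b])
        parts = nxt
    return parts

def cross_shard_ratio(g, shards):
    owner = {v: i for i, s in enumerate(shards) for v in s}
    cross = sum(1 for u, v in g.edges if owner[u] != owner[v])
    return cross / g.number_of_edges()

# Random stand-in for an account/transaction graph.
g = nx.gnm_random_graph(200, 1200, seed=42)
shards = partition_into_shards(g, depth=2)          # 4 shards
print("shard sizes:", [len(s) for s in shards])
print("cross-shard tx ratio:", round(cross_shard_ratio(g, shards), 3))
```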
12. Strongly Convergent Inertial Forward-Backward-Forward Algorithm without On-Line Rule for Variational Inequalities
Authors: Yonghong Yao (姚永红), Abubakar Adamu, Yekini Shehu. Acta Mathematica Scientia (SCIE, CSCD), 2024, No. 2, pp. 551-566 (16 pages).
This paper studies a strongly convergent inertial forward-backward-forward algorithm for the variational inequality problem in Hilbert spaces. In our convergence analysis, we do not assume the on-line rule of the inertial parameters and the iterates, which has been assumed by several authors whenever a strongly convergent algorithm with an inertial extrapolation step is proposed for a variational inequality problem. Consequently, our proof arguments are different from what is obtainable in the relevant literature. Finally, we give numerical tests to confirm the theoretical analysis and show that our proposed algorithm is superior to related ones in the literature.
Keywords: Forward-backward-forward algorithm, inertial extrapolation, variational inequality, on-line rule
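For readers who want the shape of the iteration: Tseng's forward-backward-forward method takes a forward step, projects, then corrects with a second forward evaluation, and an inertial variant first extrapolates along the previous displacement. The sketch below applies a plain (non-anchored) version of this iteration to a small affine monotone variational inequality over a box; the operator, step size, and inertial factor are illustrative choices and the paper's strong-convergence modifications are not included.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def inertial_fbf(F, x0, lam, theta=0.3, iters=500):
    """Inertial forward-backward-forward (Tseng-type) iteration for VI(F, C), C a box."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        w = x + theta * (x - x_prev)            # inertial extrapolation
        y = project_box(w - lam * F(w))         # forward step + backward (projection) step
        x_next = y - lam * (F(y) - F(w))        # correcting forward step
        x_prev, x = x, x_next
    return x

# Affine monotone operator F(x) = M x + q with M positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + np.eye(5)
q = rng.standard_normal(5)
F = lambda x: M @ x + q
x_star = inertial_fbf(F, x0=np.zeros(5), lam=0.9 / np.linalg.norm(M, 2))
res = x_star - project_box(x_star - F(x_star))   # natural residual of the VI
print("natural residual norm:", round(float(np.linalg.norm(res)), 8))
```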
13. Stochastic Ranking Improved Teaching-Learning and Adaptive Grasshopper Optimization Algorithm-Based Clustering Scheme for Augmenting Network Lifetime in WSNs
Authors: N. Tamilarasan, S. B. Lenin, P. Mukunthan, N. C. Sendhilkumar. China Communications (SCIE, CSCD), 2024, No. 9, pp. 159-178 (20 pages).
In Wireless Sensor Networks (WSNs), the clustering process is widely utilized for increasing the lifespan with sustained energy stability during data transmission. Several clustering protocols have been devised for extending network lifetime, but most of them fail to handle the problems of fixed clustering, static rounds, and inadequate Cluster Head (CH) selection criteria, which consume more energy. In this paper, a Stochastic Ranking Improved Teaching-Learning and Adaptive Grasshopper Optimization Algorithm (SRITL-AGOA)-based clustering scheme is proposed for energy stabilization and extending network lifespan. This SRITL-AGOA selects the CH depending on the weightage of factors such as node mobility degree, neighbour density, distance to sink, single-hop or multi-hop communication, and Residual Energy (RE), which directly influence the energy consumption of sensor nodes. In particular, the Grasshopper Optimization Algorithm (GOA) is improved through a tangent-based nonlinear strategy to enhance its global optimization ability. On the other hand, stochastic ranking and violation constraint handling strategies are embedded into the Teaching-Learning-based Optimization Algorithm (TLOA) to improve its exploitation tendencies. Then, the SR- and VCH-improved TLOA is embedded into the exploitation phase of AGOA for selecting a better CH by maintaining a better balance between exploration and exploitation. Simulation results confirmed that the proposed SRITL-AGOA improved throughput by 21.86%, network stability by 18.94%, and load balancing by 16.14%, with energy depletion minimized by 19.21%, compared to the competitive CH selection approaches.
Keywords: Adaptive Grasshopper Optimization Algorithm (AGOA), Cluster Head (CH), network lifetime, Teaching-Learning-based Optimization Algorithm (TLOA), Wireless Sensor Networks (WSNs)
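Stochastic ranking, mentioned in the abstract, orders a population by a bubble-sort-like pass that compares either by objective value or by constraint violation, chosen at random with probability Pf. A compact sketch of that ranking procedure (in the style of Runarsson and Yao) follows; the objective values, violation values, and Pf are illustrative, and the TLBO/GOA machinery around it is omitted.

```python
import numpy as np

def stochastic_ranking(fitness, violation, p_f=0.45, sweeps=None, seed=0):
    """Return candidate indices ordered by stochastic ranking (minimization)."""
    rng = np.random.default_rng(seed)
    n = len(fitness)
    idx = list(range(n))
    sweeps = sweeps or n
    for _ in range(sweeps):
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            both_feasible = violation[a] == 0 and violation[b] == 0
            use_objective = both_feasible or rng.random() < p_f
            key = fitness if use_objective else violation
            if key[a] > key[b]:                 # smaller is better
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx

fitness = np.array([3.2, 1.1, 2.5, 0.9, 4.0])
violation = np.array([0.0, 2.0, 0.0, 0.5, 0.0])   # constraint violation per candidate
print("ranked indices:", stochastic_ranking(fitness, violation))
```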
14. SFGA-CPA: A Novel Screening Correlation Power Analysis Framework Based on Genetic Algorithm
Authors: Jiahui Liu, Lang Li, Di Li, Yu Ou. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4641-4657 (17 pages).
Correlation power analysis (CPA) combined with genetic algorithms (GA) now achieves greater attack efficiency and can recover all subkeys simultaneously. However, two issues in GA-based CPA still need to be addressed: key degeneration and slow evolution within populations. These challenges significantly hinder key recovery efforts. This paper proposes a screening correlation power analysis framework combined with a genetic algorithm, named SFGA-CPA, to address these issues. SFGA-CPA introduces three operations designed to exploit CPA characteristics: a propagative operation, constrained crossover, and constrained mutation. Firstly, the propagative operation accelerates population evolution by maximizing the number of correct bytes in each individual. Secondly, the constrained crossover and mutation operations effectively address key degeneration by preventing the compromise of correct bytes. Finally, an intelligent search method is proposed to identify optimal parameters, further improving attack efficiency. Experiments were conducted in both simulated environments and on real power traces collected from the SAKURA-G platform. In the simulations, SFGA-CPA reduces the number of traces by 27.3% and 60% compared to CPA based on multiple screening methods (MS-CPA) and CPA based on a simple GA method (SGA-CPA) when the success rate reaches 90%. Moreover, real experimental results on the SAKURA-G platform demonstrate that our approach outperforms other methods.
Keywords: Side-channel analysis, correlation power analysis, genetic algorithm, crossover, mutation
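The CPA core that the genetic search wraps is a Pearson correlation between measured traces and a leakage hypothesis (commonly the Hamming weight of an S-box output) for each key-byte guess. A self-contained sketch on simulated traces follows; the random "toy" S-box, noise level, and single leaking sample point are simplifying assumptions, and the SFGA screening, crossover, and mutation operators are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
SBOX = rng.permutation(256)                     # toy stand-in for a real cipher S-box
hw = np.array([bin(v).count("1") for v in range(256)])

def simulate_traces(n, true_key, noise=1.0):
    """One leaking sample per trace: HW(SBOX[plaintext ^ key]) + Gaussian noise."""
    pt = rng.integers(0, 256, n)
    leak = hw[SBOX[pt ^ true_key]] + rng.normal(0, noise, n)
    return pt, leak

def cpa_rank_keys(pt, leak):
    """Pearson correlation of the HW hypothesis with the traces, for every key guess."""
    corr = np.empty(256)
    for guess in range(256):
        hyp = hw[SBOX[pt ^ guess]]
        corr[guess] = np.corrcoef(hyp, leak)[0, 1]
    return np.argsort(-np.abs(corr))            # best guess first

pt, leak = simulate_traces(n=2000, true_key=0x3C)
ranking = cpa_rank_keys(pt, leak)
print("recovered key byte:", hex(int(ranking[0])))
```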
15. An Online Fake Review Detection Approach Using Famous Machine Learning Algorithms
Author: Asma Hassan Alshehri. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2767-2786 (20 pages).
Online review platforms are becoming increasingly popular, encouraging dishonest merchants and service providers to deceive customers by creating fake reviews for their goods or services. Using Sybil accounts, bot farms, and real account purchases, immoral actors demonize rivals and advertise their goods. Most academic and industry efforts have been aimed at detecting fake/fraudulent product or service evaluations for years. The primary hurdle to identifying fraudulent reviews is the lack of a reliable means to distinguish fraudulent reviews from real ones. This paper adopts a semi-supervised machine learning method to detect fake reviews on any website, among other things. Online reviews are classified using a semi-supervised approach (PU-learning), since there is a shortage of labeled data and the reviews are dynamic. Then, classification is performed using the machine learning techniques Support Vector Machine (SVM) and Naive Bayes. The performance of the suggested system has been compared with standard works, and experimental findings are assessed using several assessment metrics.
Keywords: Security, fake review, semi-supervised learning, ML algorithms, review detection
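PU-learning has to build a classifier from positive (known fake) and unlabeled reviews only. A common two-step sketch is shown below: treat the unlabeled set as tentative negatives, train Naive Bayes, take the lowest-scoring unlabeled items as reliable negatives, then train the final SVM. The synthetic features, thresholds, and labeling rate are assumptions, and the paper's exact PU procedure may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for review feature vectors (1 = fake, 0 = genuine).
X, y_true = make_classification(n_samples=1500, n_features=20, n_informative=10,
                                weights=[0.7, 0.3], random_state=0)
rng = np.random.default_rng(0)
is_positive = (y_true == 1) & (rng.random(len(y_true)) < 0.4)   # only some fakes are labeled
X_pos, X_unl = X[is_positive], X[~is_positive]

# Step 1: Naive Bayes with unlabeled data treated as tentative negatives.
nb = GaussianNB().fit(np.vstack([X_pos, X_unl]),
                      np.r_[np.ones(len(X_pos)), np.zeros(len(X_unl))])
p_fake = nb.predict_proba(X_unl)[:, 1]
reliable_neg = X_unl[p_fake < np.quantile(p_fake, 0.3)]          # most "genuine-looking" items

# Step 2: final SVM trained on labeled positives vs. reliable negatives.
svm = SVC().fit(np.vstack([X_pos, reliable_neg]),
                np.r_[np.ones(len(X_pos)), np.zeros(len(reliable_neg))])
acc = (svm.predict(X) == y_true).mean()
print("accuracy against hidden ground truth:", round(float(acc), 3))
```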
16. Efficient 2-D MUSIC algorithm for super-resolution moving target tracking based on an FMCW radar
Authors: Xuchong Yi, Shuangxi Zhang, Yuxuan Zhou. Geodesy and Geodynamics (EI, CSCD), 2024, No. 5, pp. 504-515 (12 pages).
Frequency modulated continuous wave (FMCW) radar is an advantageous sensor scheme for target estimation and environmental perception. However, existing algorithms based on the discrete Fourier transform (DFT), multiple signal classification (MUSIC), compressed sensing, etc., cannot achieve both low complexity and high resolution simultaneously. This paper proposes an efficient 2-D MUSIC algorithm for super-resolution target estimation/tracking based on FMCW radar. Firstly, we enhance the efficiency of 2-D MUSIC azimuth-range spectrum estimation by incorporating the 2-D DFT and a multi-level resolution searching strategy. Secondly, we apply the gradient descent method to tightly integrate the spatial continuity of object motion into spectrum estimation when processing multi-epoch radar data, which improves the efficiency of continuous target tracking. These two approaches improve the algorithm efficiency by nearly 2-4 orders of magnitude without losing accuracy or resolution. Simulation experiments are conducted to validate the effectiveness of the algorithm in both single-epoch estimation and multi-epoch tracking scenarios.
Keywords: 2D-MUSIC, FMCW radar, moving target tracking, super-resolution, algorithm optimization
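The core MUSIC computation behind the 2-D azimuth-range search is an eigendecomposition of the snapshot covariance followed by a scan of steering vectors against the noise subspace. A 1-D uniform-linear-array sketch of that core is shown below; extending the scan to a second dimension and adding the paper's DFT pre-search and multi-level refinement is the efficiency contribution and is not reproduced, and the array size, spacing, and SNR are assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """1-D MUSIC: project steering vectors onto the noise subspace of the covariance."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                      # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]                      # noise subspace
    k = np.arange(m)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.radians(angles_deg)))  # steering matrix
    return 1.0 / (np.linalg.norm(En.conj().T @ A, axis=0) ** 2)

# Two sources at -20 deg and 35 deg, 8-element half-wavelength ULA, 200 snapshots.
rng = np.random.default_rng(0)
m, snaps, true_angles = 8, 200, np.array([-20.0, 35.0])
k = np.arange(m)[:, None]
A = np.exp(-2j * np.pi * 0.5 * k * np.sin(np.radians(true_angles)))
S = (rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))

grid = np.arange(-90.0, 90.0, 0.5)
spec = music_spectrum(X, n_sources=2, angles_deg=grid)
local_max = (spec > np.roll(spec, 1)) & (spec > np.roll(spec, -1))
peaks = grid[local_max][np.argsort(spec[local_max])[-2:]]
print("estimated angles (deg):", np.sort(np.round(peaks, 1)))
```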
17. Uniaxial Compressive Strength Prediction for Rock Material in Deep Mine Using Boosting-Based Machine Learning Methods and Optimization Algorithms
Authors: Junjie Zhao, Diyuan Li, Jingtai Jiang, Pingkuang Luo. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 275-304 (30 pages).
Traditional laboratory tests for measuring rock uniaxial compressive strength (UCS) are tedious and time-consuming. There is a pressing need for more effective methods to determine rock UCS, especially in deep mining environments under high in-situ stress. Thus, this study aims to develop an advanced model for predicting the UCS of rock material in deep mining environments by combining three boosting-based machine learning methods with four optimization algorithms. For this purpose, a lead-zinc mine in Southwest China is considered as the case study. Rock density, P-wave velocity, and point load strength index are used as input variables, and UCS is regarded as the output. Subsequently, twelve hybrid predictive models are obtained. Root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and the proportion of predictions with an absolute percentage error of less than 20% (A-20) are selected as the evaluation metrics. Experimental results showed that the hybrid model consisting of the extreme gradient boosting method and the artificial bee colony algorithm (XGBoost-ABC) achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset. The values of R2, A-20, RMSE, and MAE on the training dataset are 0.98, 1.0, 3.11 MPa, and 2.23 MPa, respectively. The highest values of R2 and A-20 (0.93 and 0.96), and the smallest RMSE and MAE values of 4.78 MPa and 3.76 MPa, are observed on the testing dataset. The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.
Keywords: Uniaxial compression strength, strength prediction, machine learning, optimization algorithm
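The evaluation pipeline is straightforward to reproduce in outline: fit a boosting regressor on (density, P-wave velocity, point-load index) and score it with RMSE, MAE, R2, and A-20. The sketch below uses scikit-learn's GradientBoostingRegressor on synthetic data as a stand-in for XGBoost, omits the ABC hyperparameter search, and assumes the usual A-20 definition (share of predictions within ±20% of the measured UCS).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def a20_index(y_true, y_pred):
    """Share of samples whose prediction lies within +/-20% of the measured value."""
    ratio = y_pred / y_true
    return float(np.mean((ratio >= 0.8) & (ratio <= 1.2)))

# Synthetic stand-in: UCS loosely grows with density, P-wave velocity, and point-load index.
rng = np.random.default_rng(0)
n = 300
density = rng.uniform(2.4, 3.1, n)           # g/cm^3
vp = rng.uniform(2500, 6500, n)              # m/s
is50 = rng.uniform(1.0, 8.0, n)              # MPa
ucs = 20 * density + 0.01 * vp + 12 * is50 + rng.normal(0, 8, n)   # MPa

X = np.c_[density, vp, is50]
X_tr, X_te, y_tr, y_te = train_test_split(X, ucs, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 2),
      "MAE:", round(mean_absolute_error(y_te, pred), 2),
      "R2:", round(r2_score(y_te, pred), 3),
      "A-20:", round(a20_index(y_te, pred), 3))
```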
18. Application of DSAPSO Algorithm in Distribution Network Reconfiguration with Distributed Generation
Authors: Caixia Tao, Shize Yang, Taiguo Li. Energy Engineering (EI), 2024, No. 1, pp. 187-201 (15 pages).
With the current integration of distributed energy resources into the grid, the structure of distribution networks is becoming more complex. This complexity significantly expands the solution space in the optimization process for network reconfiguration using intelligent algorithms. Consequently, traditional intelligent algorithms frequently encounter insufficient search accuracy and become trapped in local optima. To tackle this issue, a more advanced particle swarm optimization algorithm is proposed. To address the varying emphases at different stages of the optimization process, a dynamic strategy is implemented to regulate the social and self-learning factors. The Metropolis criterion is introduced from the simulated annealing algorithm to occasionally accept suboptimal solutions, thereby mitigating premature convergence in the population optimization process. The inertia weight is adjusted using the logistic mapping technique to maintain a balance between the algorithm's global and local search abilities. The incorporation of the Pareto principle involves considering network losses and voltage deviations as objective functions, and a fuzzy membership function is employed for selecting the results. Simulation analysis is carried out on the reconfiguration of the distribution network, using the IEEE 33-node system and the IEEE 69-node system as examples, in conjunction with the integration of distributed energy resources. The findings demonstrate that, in comparison with other intelligent optimization algorithms, the proposed enhanced algorithm achieves a shorter convergence time and effectively reduces active power losses within the network. Furthermore, it enhances the amplitude of node voltages, thereby improving the stability of distribution network operations and power supply quality. Additionally, the algorithm exhibits a high level of generality and applicability.
Keywords: Reconfiguration of distribution network, distributed generation, particle swarm optimization algorithm, simulated annealing algorithm, active network loss
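Three of the modifications described above can be illustrated in a few lines: dynamically scheduled learning factors, a logistic-map-driven inertia weight, and Metropolis acceptance of occasionally worse positions. The continuous-variable sketch below shows them on a generic objective; the schedules, temperature, and objective are assumptions, and the actual reconfiguration problem (radial topology constraints, power-flow losses, IEEE 33/69-bus data) is not encoded.

```python
import numpy as np

def dsapso_style_minimize(f, lb, ub, dim, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    z, temp = 0.7, 1.0                                   # logistic-map state, SA temperature
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)                          # logistic map drives the inertia weight
        w = 0.4 + 0.5 * z
        c1 = 2.5 - 1.5 * t / iters                       # self-learning factor decays
        c2 = 0.5 + 1.5 * t / iters                       # social factor grows
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        for i in range(n):
            fi = f(x[i])
            # Metropolis criterion: sometimes accept a worse personal best.
            if fi < pbest_f[i] or rng.random() < np.exp(-(fi - pbest_f[i]) / temp):
                pbest[i], pbest_f[i] = x[i], fi
        gbest = pbest[np.argmin(pbest_f)]
        temp *= 0.97                                     # cooling schedule
    return float(f(gbest))

print("best value:", round(dsapso_style_minimize(lambda x: float(np.sum(x ** 2)), -10, 10, 12), 4))
```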
19. Heterogeneous Task Allocation Model and Algorithm for Intelligent Connected Vehicles
Authors: Neng Wan, Guangping Zeng, Xianwei Zhou. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4281-4302 (22 pages).
With the development of vehicles towards intelligence and connectivity, vehicular data is diversifying and growing dramatically. A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle (ICV) applications are proposed for a dispersed computing network composed of heterogeneous task vehicles and Network Computing Points (NCPs). Considering the amount of task data and the idle resources of NCPs, a computing resource scheduling model for NCPs is established. Taking the heterogeneous task execution delay threshold as a constraint, the optimization problem is described as the problem of maximizing the utilization of computing resources by NCPs. The proposed problem is proven to be NP-hard by reduction to a 0-1 knapsack problem. A many-to-many matching algorithm based on resource preferences is proposed. The algorithm first establishes mutual preference lists based on the adaptability of the task requirements and the resources provided by NCPs. This enables the filtering out of unschedulable NCPs in the initial stage of matching, reducing the dimension of the solution space. To solve the matching problem between ICVs and NCPs, a new many-to-many matching algorithm is proposed to obtain a unique and stable optimal matching result. The simulation results demonstrate that the proposed scheme can improve the resource utilization of NCPs by an average of 9.6% compared to the reference scheme, and the total performance can be improved by up to 15.9%.
Keywords: Task allocation, intelligent connected vehicles, dispersed computing, matching algorithm
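The allocation step can be pictured as a deferred-acceptance style matching: each task vehicle proposes to NCPs in order of preference, and an NCP tentatively keeps proposals while it has capacity, possibly evicting a less-preferred task. A compact sketch with capacity-limited NCPs follows; the preference scores, capacities, and demands are illustrative, and the paper's stability and uniqueness refinements are not reproduced.

```python
import numpy as np

def many_to_many_match(task_pref, ncp_pref, ncp_capacity, task_demand):
    """Task-proposing deferred acceptance with capacity-limited NCPs (simplified)."""
    n_tasks, n_ncps = task_pref.shape
    proposals = [list(map(int, np.argsort(-task_pref[t]))) for t in range(n_tasks)]
    assigned = {t: None for t in range(n_tasks)}
    held = {j: [] for j in range(n_ncps)}
    load = np.zeros(n_ncps)
    free = list(range(n_tasks))
    while free:
        t = free.pop()
        while proposals[t]:
            j = proposals[t].pop(0)                       # most-preferred NCP not yet tried
            if load[j] + task_demand[t] <= ncp_capacity[j]:
                held[j].append(t)
                load[j] += task_demand[t]
                assigned[t] = j
                break
            if held[j]:
                worst = min(held[j], key=lambda k: ncp_pref[j, k])
                fits = load[j] - task_demand[worst] + task_demand[t] <= ncp_capacity[j]
                if ncp_pref[j, t] > ncp_pref[j, worst] and fits:
                    held[j].remove(worst)                 # evict the least-preferred held task
                    load[j] -= task_demand[worst]
                    assigned[worst] = None
                    free.append(worst)                    # evicted task proposes again later
                    held[j].append(t)
                    load[j] += task_demand[t]
                    assigned[t] = j
                    break
    return assigned

rng = np.random.default_rng(0)
tasks, ncps = 8, 3
assignment = many_to_many_match(task_pref=rng.random((tasks, ncps)),
                                ncp_pref=rng.random((ncps, tasks)),
                                ncp_capacity=np.array([3.0, 2.0, 2.5]),
                                task_demand=rng.uniform(0.5, 1.5, tasks))
print("task -> NCP assignment:", assignment)
```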
20. Data-Driven Learning Control Algorithms for Unachievable Tracking Problems
Authors: Zeyi Zhang, Hao Jiang, Dong Shen, Samer S. Saab. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 205-218 (14 pages).
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study aims to investigate solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Keywords: Data-driven algorithms, incomplete information, iterative learning control, gradient information, unachievable problems
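A P-type iterative learning controller updates the whole input trajectory between trials using only the measured tracking error, u_{k+1}(t) = u_k(t) + L e_k(t+1). The sketch below runs that update on a simple first-order discrete plant; the plant, learning gain, and reference are illustrative and the reference here is achievable, unlike the paper's unachievable-tracking setting where only a least-squares approximation can be reached.

```python
import numpy as np

# Scalar plant: x(t+1) = 0.5 x(t) + u(t), y(t) = x(t); the first Markov parameter is 1.
def run_trial(u, x0=0.0):
    x, y = x0, []
    for ut in u:
        x = 0.5 * x + ut
        y.append(x)
    return np.array(y)

N, trials, L = 50, 30, 0.8              # horizon, ILC iterations, learning gain
t = np.arange(1, N + 1)
ref = np.sin(2 * np.pi * t / N)         # reference trajectory for y(1..N)
u = np.zeros(N)
for k in range(trials):                 # iteration (trial) axis
    e = ref - run_trial(u)              # tracking error of the k-th trial
    u = u + L * e                       # P-type update: u_{k+1} = u_k + L * e_k
print("tracking error norm after", trials, "trials:",
      round(float(np.linalg.norm(ref - run_trial(u))), 6))
```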