The Cross-domain Heuristic Search Challenge(CHeSC)is a competition focused on creating efficient search algorithms adaptable to diverse problem domains.Selection hyper-heuristics are a class of algorithms that dynamic...The Cross-domain Heuristic Search Challenge(CHeSC)is a competition focused on creating efficient search algorithms adaptable to diverse problem domains.Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process.Numerous selection hyper-heuristics have different imple-mentation strategies.However,comparisons between them are lacking in the literature,and previous works have not highlighted the beneficial and detrimental implementation methods of different components.The question is how to effectively employ them to produce an efficient search heuristic.Furthermore,the algorithms that competed in the inaugural CHeSC have not been collectively reviewed.This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance.A summary of the main characteristics and classification of the algorithms is presented.The analysis underlines efficient and inefficient methods in eight key components,including search points,search phases,heuristic selection,move acceptance,feedback,Tabu mechanism,restart mechanism,and low-level heuristic parameter control.This review analyzes the components referencing the competition’s final leaderboard and discusses future research directions for these components.The effective approaches,identified as having the highest quality index,are mixed search point,iterated search phases,relay hybridization selection,threshold acceptance,mixed learning,Tabu heuristics,stochastic restart,and dynamic parameters.Findings are also compared with recent trends in hyper-heuristics.This work enhances the understanding of selection hyper-heuristics,offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.展开更多
We consider the problem of restoring images corrupted by Poisson noise. Under the framework of maximum a posteriori estimator, the problem can be converted into a minimization problem where the objective function is c...We consider the problem of restoring images corrupted by Poisson noise. Under the framework of maximum a posteriori estimator, the problem can be converted into a minimization problem where the objective function is composed of a Kullback-Leibler(KL)-divergence term for the Poisson noise and a total variation(TV) regularization term. Due to the logarithm function in the KL-divergence term, the non-differentiability of TV term and the positivity constraint on the images, it is not easy to design stable and efficiency algorithm for the problem. Recently, many researchers proposed to solve the problem by alternating direction method of multipliers(ADMM). Since the approach introduces some auxiliary variables and requires the solution of some linear systems, the iterative procedure can be complicated. Here we formulate the problem as two new constrained minimax problems and solve them by Chambolle-Pock's first order primal-dual approach. The convergence of our approach is guaranteed by their theory. Comparing with ADMM approaches, our approach requires about half of the auxiliary variables and is matrix-inversion free. Numerical results show that our proposed algorithms are efficient and outperform the ADMM approach.展开更多
In the present paper we present a class of polynomial primal-dual interior-point algorithms for semidefmite optimization based on a kernel function. This kernel function is not a so-called self-regular function due to...In the present paper we present a class of polynomial primal-dual interior-point algorithms for semidefmite optimization based on a kernel function. This kernel function is not a so-called self-regular function due to its growth term increasing linearly. Some new analysis tools were developed which can be used to deal with complexity "analysis of the algorithms which use analogous strategy in [5] to design the search directions for the Newton system. The complexity bounds for the algorithms with large- and small-update methodswere obtained, namely,O(qn^(p+q/q(P+1)log n/ε and O(q^2√n)log n/ε,respectlvely.展开更多
In this paper, primal-dual interior-point algorithm with dynamic step size is implemented for linear programming (LP) problems. The algorithms are based on a few kernel functions, including both serf-regular functio...In this paper, primal-dual interior-point algorithm with dynamic step size is implemented for linear programming (LP) problems. The algorithms are based on a few kernel functions, including both serf-regular functions and non-serf-regular ones. The dynamic step size is compared with fixed step size for the algorithms in inner iteration of Newton step. Numerical tests show that the algorithms with dynaraic step size are more efficient than those with fixed step size.展开更多
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered.This problem is an important component of...The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered.This problem is an important component of many machine learning techniques with data parallelism,such as deep learning and federated learning.We propose a distributed primal-dual stochastic gradient descent(SGD)algorithm,suitable for arbitrarily connected communication networks and any smooth(possibly nonconvex)cost functions.We show that the proposed algorithm achieves the linear speedup convergence rate O(1/(√nT))for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Lojasiewicz(P-L)condition,where T is the total number of iterations.We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum.We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.展开更多
Neutron computed tomography(NCT)is widely used as a noninvasive measurement technique in nuclear engineering,thermal hydraulics,and cultural heritage.The neutron source intensity of NCT is usually low and the scan tim...Neutron computed tomography(NCT)is widely used as a noninvasive measurement technique in nuclear engineering,thermal hydraulics,and cultural heritage.The neutron source intensity of NCT is usually low and the scan time is long,resulting in a projection image containing severe noise.To reduce the scanning time and increase the image reconstruction quality,an effective reconstruction algorithm must be selected.In CT image reconstruction,the reconstruction algorithms can be divided into three categories:analytical algorithms,iterative algorithms,and deep learning.Because the analytical algorithm requires complete projection data,it is not suitable for reconstruction in harsh environments,such as strong radia-tion,high temperature,and high pressure.Deep learning requires large amounts of data and complex models,which cannot be easily deployed,as well as has a high computational complexity and poor interpretability.Therefore,this paper proposes the OS-SART-PDTV iterative algorithm,which uses the ordered subset simultaneous algebraic reconstruction technique(OS-SART)algorithm to reconstruct the image and the first-order primal–dual algorithm to solve the total variation(PDTV),for sparse-view NCT three-dimensional reconstruction.The novel algorithm was compared with other algorithms(FBP,OS-SART-TV,OS-SART-AwTV,and OS-SART-FGPTV)by simulating the experimental data and actual neutron projection experiments.The reconstruction results demonstrate that the proposed algorithm outperforms the FBP,OS-SART-TV,OS-SART-AwTV,and OS-SART-FGPTV algorithms in terms of preserving edge structure,denoising,and suppressing artifacts.展开更多
The structural optimization of wireless sensor networks is a critical issue because it impacts energy consumption and hence the network’s lifetime.Many studies have been conducted for homogeneous networks,but few hav...The structural optimization of wireless sensor networks is a critical issue because it impacts energy consumption and hence the network’s lifetime.Many studies have been conducted for homogeneous networks,but few have been performed for heterogeneouswireless sensor networks.This paper utilizes Rao algorithms to optimize the structure of heterogeneous wireless sensor networks according to node locations and their initial energies.The proposed algorithms lack algorithm-specific parameters and metaphorical connotations.The proposed algorithms examine the search space based on the relations of the population with the best,worst,and randomly assigned solutions.The proposed algorithms can be evaluated using any routing protocol,however,we have chosen the well-known routing protocols in the literature:Low Energy Adaptive Clustering Hierarchy(LEACH),Power-Efficient Gathering in Sensor Information Systems(PEAGSIS),Partitioned-based Energy-efficient LEACH(PE-LEACH),and the Power-Efficient Gathering in Sensor Information Systems Neural Network(PEAGSIS-NN)recent routing protocol.We compare our optimized method with the Jaya,the Particle Swarm Optimization-based Energy Efficient Clustering(PSO-EEC)protocol,and the hybrid Harmony Search Algorithm and PSO(HSA-PSO)algorithms.The efficiencies of our proposed algorithms are evaluated by conducting experiments in terms of the network lifetime(first dead node,half dead nodes,and last dead node),energy consumption,packets to cluster head,and packets to the base station.The experimental results were compared with those obtained using the Jaya optimization algorithm.The proposed algorithms exhibited the best performance.The proposed approach successfully prolongs the network lifetime by 71% for the PEAGSIS protocol,51% for the LEACH protocol,10% for the PE-LEACH protocol,and 73% for the PEGSIS-NN protocol;Moreover,it enhances other criteria such as energy conservation,fitness convergence,packets to cluster head,and packets to the base station.展开更多
Two existing methods for solving a class of fuzzy linear programming (FLP) problems involving symmetric trapezoidal fuzzy numbers without converting them to crisp linear programming problems are the fuzzy primal simpl...Two existing methods for solving a class of fuzzy linear programming (FLP) problems involving symmetric trapezoidal fuzzy numbers without converting them to crisp linear programming problems are the fuzzy primal simplex method proposed by Ganesan and Veeramani [1] and the fuzzy dual simplex method proposed by Ebrahimnejad and Nasseri [2]. The former method is not applicable when a primal basic feasible solution is not easily at hand and the later method needs to an initial dual basic feasible solution. In this paper, we develop a novel approach namely the primal-dual simplex algorithm to overcome mentioned shortcomings. A numerical example is given to illustrate the proposed approach.展开更多
This study is trying to address the critical need for efficient routing in Mobile Ad Hoc Networks(MANETs)from dynamic topologies that pose great challenges because of the mobility of nodes.Themain objective was to del...This study is trying to address the critical need for efficient routing in Mobile Ad Hoc Networks(MANETs)from dynamic topologies that pose great challenges because of the mobility of nodes.Themain objective was to delve into and refine the application of the Dijkstra’s algorithm in this context,a method conventionally esteemed for its efficiency in static networks.Thus,this paper has carried out a comparative theoretical analysis with the Bellman-Ford algorithm,considering adaptation to the dynamic network conditions that are typical for MANETs.This paper has shown through detailed algorithmic analysis that Dijkstra’s algorithm,when adapted for dynamic updates,yields a very workable solution to the problem of real-time routing in MANETs.The results indicate that with these changes,Dijkstra’s algorithm performs much better computationally and 30%better in routing optimization than Bellman-Ford when working with configurations of sparse networks.The theoretical framework adapted,with the adaptation of the Dijkstra’s algorithm for dynamically changing network topologies,is novel in this work and quite different from any traditional application.The adaptation should offer more efficient routing and less computational overhead,most apt in the limited resource environment of MANETs.Thus,from these findings,one may derive a conclusion that the proposed version of Dijkstra’s algorithm is the best and most feasible choice of the routing protocol for MANETs given all pertinent key performance and resource consumption indicators and further that the proposed method offers a marked improvement over traditional methods.This paper,therefore,operationalizes the theoretical model into practical scenarios and also further research with empirical simulations to understand more about its operational effectiveness.展开更多
In the cloud environment,ensuring a high level of data security is in high demand.Data planning storage optimization is part of the whole security process in the cloud environment.It enables data security by avoiding ...In the cloud environment,ensuring a high level of data security is in high demand.Data planning storage optimization is part of the whole security process in the cloud environment.It enables data security by avoiding the risk of data loss and data overlapping.The development of data flow scheduling approaches in the cloud environment taking security parameters into account is insufficient.In our work,we propose a data scheduling model for the cloud environment.Themodel is made up of three parts that together help dispatch user data flow to the appropriate cloudVMs.The first component is the Collector Agent whichmust periodically collect information on the state of the network links.The second one is the monitoring agent which must then analyze,classify,and make a decision on the state of the link and finally transmit this information to the scheduler.The third one is the scheduler who must consider previous information to transfer user data,including fair distribution and reliable paths.It should be noted that each part of the proposedmodel requires the development of its algorithms.In this article,we are interested in the development of data transfer algorithms,including fairness distribution with the consideration of a stable link state.These algorithms are based on the grouping of transmitted files and the iterative method.The proposed algorithms showthe performances to obtain an approximate solution to the studied problem which is an NP-hard(Non-Polynomial solution)problem.The experimental results show that the best algorithm is the half-grouped minimum excluding(HME),with a percentage of 91.3%,an average deviation of 0.042,and an execution time of 0.001 s.展开更多
Reducing the vulnerability of a platform,i.e.,the risk of being affected by hostile objects,is of paramount importance in the design process of vehicles,especially aircraft.A simple and effective way to decrease vulne...Reducing the vulnerability of a platform,i.e.,the risk of being affected by hostile objects,is of paramount importance in the design process of vehicles,especially aircraft.A simple and effective way to decrease vulnerability is to introduce protective structures to intercept and possibly stop threats.However,this type of solution can lead to a significant increase in weight,affecting the performance of the aircraft.For this reason,it is crucial to study possible solutions that allow reducing the vulnerability of the aircraft while containing the increase in structural weight.One possible strategy is to optimize the topology of protective solutions to find the optimal balance between vulnerability and the weight of the added structures.Among the many optimization techniques available in the literature for this purpose,multiobjective genetic algorithms stand out as promising tools.In this context,this work proposes the use of a in-house software for vulnerability calculation to guide the process of topology optimization through multi-objective genetic algorithms,aiming to simultaneously minimize the weight of protective structures and vulnerability.In addition to the use of the in-house software,which itself represents a novelty in the field of topology optimization of structures,the method incorporates a custom mutation function within the genetic algorithm,specifically developed using a graph-based approach to ensure the continuity of the generated structures.The tool developed for this work is capable of generating protections with optimized layouts considering two different types of impacting objects,namely bullets and fragments from detonating objects.The software outputs a set of non-dominated solutions describing different topologies that the user can choose from.展开更多
Online review platforms are becoming increasingly popular,encouraging dishonest merchants and service providers to deceive customers by creating fake reviews for their goods or services.Using Sybil accounts,bot farms,...Online review platforms are becoming increasingly popular,encouraging dishonest merchants and service providers to deceive customers by creating fake reviews for their goods or services.Using Sybil accounts,bot farms,and real account purchases,immoral actors demonize rivals and advertise their goods.Most academic and industry efforts have been aimed at detecting fake/fraudulent product or service evaluations for years.The primary hurdle to identifying fraudulent reviews is the lack of a reliable means to distinguish fraudulent reviews from real ones.This paper adopts a semi-supervised machine learning method to detect fake reviews on any website,among other things.Online reviews are classified using a semi-supervised approach(PU-learning)since there is a shortage of labeled data,and they are dynamic.Then,classification is performed using the machine learning techniques Support Vector Machine(SVM)and Nave Bayes.The performance of the suggested system has been compared with standard works,and experimental findings are assessed using several assessment metrics.展开更多
Traditional laboratory tests for measuring rock uniaxial compressive strength(UCS)are tedious and timeconsuming.There is a pressing need for more effective methods to determine rock UCS,especially in deep mining envir...Traditional laboratory tests for measuring rock uniaxial compressive strength(UCS)are tedious and timeconsuming.There is a pressing need for more effective methods to determine rock UCS,especially in deep mining environments under high in-situ stress.Thus,this study aims to develop an advanced model for predicting the UCS of rockmaterial in deepmining environments by combining three boosting-basedmachine learning methods with four optimization algorithms.For this purpose,the Lead-Zinc mine in Southwest China is considered as the case study.Rock density,P-wave velocity,and point load strength index are used as input variables,and UCS is regarded as the output.Subsequently,twelve hybrid predictive models are obtained.Root mean square error(RMSE),mean absolute error(MAE),coefficient of determination(R2),and the proportion of the mean absolute percentage error less than 20%(A-20)are selected as the evaluation metrics.Experimental results showed that the hybridmodel consisting of the extreme gradient boostingmethod and the artificial bee colony algorithm(XGBoost-ABC)achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset.The values of R2,A-20,RMSE,and MAE on the training dataset are 0.98,1.0,3.11 MPa,and 2.23MPa,respectively.The highest values of R2 and A-20(0.93 and 0.96),and the smallest RMSE and MAE values of 4.78 MPa and 3.76MPa,are observed on the testing dataset.The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.展开更多
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation for the reference trajectory becomes the objective. This study aims to in...For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation for the reference trajectory becomes the objective. This study aims to investigate solutions using the Ptype learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation.Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information.To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates lowmemory footprints and offers flexibility in learning gain design.The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least square sense. Numerical simulations are provided to validate the theoretical findings.展开更多
This study delineates the development of the optimization framework for the preliminary design phase of Floating Offshore Wind Turbines(FOWTs),and the central challenge addressed is the optimization of the FOWT platfo...This study delineates the development of the optimization framework for the preliminary design phase of Floating Offshore Wind Turbines(FOWTs),and the central challenge addressed is the optimization of the FOWT platform dimensional parameters in relation to motion responses.Although the three-dimensional potential flow(TDPF)panel method is recognized for its precision in calculating FOWT motion responses,its computational intensity necessitates an alternative approach for efficiency.Herein,a novel application of varying fidelity frequency-domain computational strategies is introduced,which synthesizes the strip theory with the TDPF panel method to strike a balance between computational speed and accuracy.The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies.Optimization objectives are centered on the platform’s motion response in heave and pitch directions under general sea conditions.The steel usage,the range of design variables,and geometric considerations are optimization constraints.The angle of the pontoons,the number of columns,the radius of the central column and the parameters of the mooring lines are optimization constants.This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm Ⅱ(NSGA-Ⅱ)algorithm.For the case of the IEA UMaine VolturnUS-S Reference Platform,Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives.The efficacy of final designs is substantiated through the time-domain calculation model,which ensures that the motion responses in extreme sea conditions are superior to those of the initial design.展开更多
Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping(LSM)studies.However,these algorithms possess distinct computational strategies and hyperparameters,making it challen...Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping(LSM)studies.However,these algorithms possess distinct computational strategies and hyperparameters,making it challenging to propose an ideal LSM model.To investigate the impact of different boosting algorithms and hyperparameter optimization algorithms on LSM,this study constructed a geospatial database comprising 12 conditioning factors,such as elevation,stratum,and annual average rainfall.The XGBoost(XGB),LightGBM(LGBM),and CatBoost(CB)algorithms were employed to construct the LSM model.Furthermore,the Bayesian optimization(BO),particle swarm optimization(PSO),and Hyperband optimization(HO)algorithms were applied to optimizing the LSM model.The boosting algorithms exhibited varying performances,with CB demonstrating the highest precision,followed by LGBM,and XGB showing poorer precision.Additionally,the hyperparameter optimization algorithms displayed different performances,with HO outperforming PSO and BO showing poorer performance.The HO-CB model achieved the highest precision,boasting an accuracy of 0.764,an F1-score of 0.777,an area under the curve(AUC)value of 0.837 for the training set,and an AUC value of 0.863 for the test set.The model was interpreted using SHapley Additive exPlanations(SHAP),revealing that slope,curvature,topographic wetness index(TWI),degree of relief,and elevation significantly influenced landslides in the study area.This study offers a scientific reference for LSM and disaster prevention research.This study examines the utilization of various boosting algorithms and hyperparameter optimization algorithms in Wanzhou District.It proposes the HO-CB-SHAP framework as an effective approach to accurately forecast landslide disasters and interpret LSM models.However,limitations exist concerning the generalizability of the model and the data processing,which require further exploration in subsequent studies.展开更多
The classical Pauli particle(CPP) serves as a slow manifold, substituting the conventional guiding center dynamics. Based on the CPP, we utilize the averaged vector field(AVF) method in the computations of drift orbit...The classical Pauli particle(CPP) serves as a slow manifold, substituting the conventional guiding center dynamics. Based on the CPP, we utilize the averaged vector field(AVF) method in the computations of drift orbits. Demonstrating significantly higher efficiency, this advanced method is capable of accomplishing the simulation in less than one-third of the time of directly computing the guiding center motion. In contrast to the CPP-based Boris algorithm, this approach inherits the advantages of the AVF method, yielding stable trajectories even achieved with a tenfold time step and reducing the energy error by two orders of magnitude. By comparing these two CPP algorithms with the traditional RK4 method, the numerical results indicate a remarkable performance in terms of both the computational efficiency and error elimination. Moreover, we verify the properties of slow manifold integrators and successfully observe the bounce on both sides of the limiting slow manifold with deliberately chosen perturbed initial conditions. To evaluate the practical value of the methods, we conduct simulations in non-axisymmetric perturbation magnetic fields as part of the experiments,demonstrating that our CPP-based AVF method can handle simulations under complex magnetic field configurations with high accuracy, which the CPP-based Boris algorithm lacks. Through numerical experiments, we demonstrate that the CPP can replace guiding center dynamics in using energy-preserving algorithms for computations, providing a new, efficient, as well as stable approach for applying structure-preserving algorithms in plasma simulations.展开更多
Real-world engineering design problems with complex objective functions under some constraints are relatively difficult problems to solve.Such design problems are widely experienced in many engineering fields,such as ...Real-world engineering design problems with complex objective functions under some constraints are relatively difficult problems to solve.Such design problems are widely experienced in many engineering fields,such as industry,automotive,construction,machinery,and interdisciplinary research.However,there are established optimization techniques that have shown effectiveness in addressing these types of issues.This research paper gives a comparative study of the implementation of seventeen new metaheuristic methods in order to optimize twelve distinct engineering design issues.The algorithms used in the study are listed as:transient search optimization(TSO),equilibrium optimizer(EO),grey wolf optimizer(GWO),moth-flame optimization(MFO),whale optimization algorithm(WOA),slimemould algorithm(SMA),harris hawks optimization(HHO),chimp optimization algorithm(COA),coot optimization algorithm(COOT),multi-verse optimization(MVO),arithmetic optimization algorithm(AOA),aquila optimizer(AO),sine cosine algorithm(SCA),smell agent optimization(SAO),and seagull optimization algorithm(SOA),pelican optimization algorithm(POA),and coati optimization algorithm(CA).As far as we know,there is no comparative analysis of recent and popular methods against the concrete conditions of real-world engineering problems.Hence,a remarkable research guideline is presented in the study for researchersworking in the fields of engineering and artificial intelligence,especiallywhen applying the optimization methods that have emerged recently.Future research can rely on this work for a literature search on comparisons of metaheuristic optimization methods in real-world problems under similar conditions.展开更多
In the digital music landscape, the accuracy and response speed of music recommendation systems (MRS) are crucial for user experience optimization. Traditional MRS often relies on the use of high-performance servers f...In the digital music landscape, the accuracy and response speed of music recommendation systems (MRS) are crucial for user experience optimization. Traditional MRS often relies on the use of high-performance servers for large-scale training to produce recommendation results, which may result in the inability to achieve music recommendation in some areas due to substandard hardware conditions. This study evaluates the adaptability of four popular machine learning algorithms (K-means clustering, fuzzy C-means (FCM) clustering, hierarchical clustering, and self-organizing map (SOM)) on low-computing servers. Our comparative analysis highlights that while K-means and FCM are robust in high-performance settings, they underperform in low-power scenarios where SOM excels, delivering fast and reliable recommendations with minimal computational overhead. This research addresses a gap in the literature by providing a detailed comparative analysis of MRS algorithms, offering practical insights for implementing adaptive MRS in technologically diverse environments. We conclude with strategic recommendations for emerging streaming services in resource-constrained settings, emphasizing the need for scalable solutions that balance cost and performance. This study advocates an adaptive selection of recommendation algorithms to manage operational costs effectively and accommodate growth.展开更多
Cloud Computing has the ability to provide on-demand access to a shared resource pool.It has completely changed the way businesses are managed,implement applications,and provide services.The rise in popularity has led...Cloud Computing has the ability to provide on-demand access to a shared resource pool.It has completely changed the way businesses are managed,implement applications,and provide services.The rise in popularity has led to a significant increase in the user demand for services.However,in cloud environments efficient load balancing is essential to ensure optimal performance and resource utilization.This systematic review targets a detailed description of load balancing techniques including static and dynamic load balancing algorithms.Specifically,metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in case of increased traffic.In a cloud-based context,this paper describes load balancing measurements,including the benefits and drawbacks associated with the selected load balancing techniques.It also summarizes the algorithms based on implementation,time complexity,adaptability,associated issue(s),and targeted QoS parameters.Additionally,the analysis evaluates the tools and instruments utilized in each investigated study.Moreover,comparative analysis among static,traditional dynamic and metaheuristic algorithms based on response time by using the CloudSim simulation tool is also performed.Finally,the key open problems and potential directions for the state-of-the-art metaheuristic-based approaches are also addressed.展开更多
基金funded by Ministry of Higher Education(MoHE)Malaysia,under Transdisciplinary Research Grant Scheme(TRGS/1/2019/UKM/01/4/2).
文摘The Cross-domain Heuristic Search Challenge(CHeSC)is a competition focused on creating efficient search algorithms adaptable to diverse problem domains.Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process.Numerous selection hyper-heuristics have different imple-mentation strategies.However,comparisons between them are lacking in the literature,and previous works have not highlighted the beneficial and detrimental implementation methods of different components.The question is how to effectively employ them to produce an efficient search heuristic.Furthermore,the algorithms that competed in the inaugural CHeSC have not been collectively reviewed.This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance.A summary of the main characteristics and classification of the algorithms is presented.The analysis underlines efficient and inefficient methods in eight key components,including search points,search phases,heuristic selection,move acceptance,feedback,Tabu mechanism,restart mechanism,and low-level heuristic parameter control.This review analyzes the components referencing the competition’s final leaderboard and discusses future research directions for these components.The effective approaches,identified as having the highest quality index,are mixed search point,iterated search phases,relay hybridization selection,threshold acceptance,mixed learning,Tabu heuristics,stochastic restart,and dynamic parameters.Findings are also compared with recent trends in hyper-heuristics.This work enhances the understanding of selection hyper-heuristics,offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
基金supported by National Natural Science Foundation of China(Grant Nos.1136103011271049 and 11271049)+5 种基金the Project Sponsored by the Scientific Research Foundation for the Returned Overseas Chinese ScholarsState Education Ministry(Grant Nos.CUHK400412HKBU502814211911and 12302714)Hong Kong Research Grants Council(Grant No.Ao E/M-05/12)FRGs of Hong Kong Baptist University
文摘We consider the problem of restoring images corrupted by Poisson noise. Under the framework of maximum a posteriori estimator, the problem can be converted into a minimization problem where the objective function is composed of a Kullback-Leibler(KL)-divergence term for the Poisson noise and a total variation(TV) regularization term. Due to the logarithm function in the KL-divergence term, the non-differentiability of TV term and the positivity constraint on the images, it is not easy to design stable and efficiency algorithm for the problem. Recently, many researchers proposed to solve the problem by alternating direction method of multipliers(ADMM). Since the approach introduces some auxiliary variables and requires the solution of some linear systems, the iterative procedure can be complicated. Here we formulate the problem as two new constrained minimax problems and solve them by Chambolle-Pock's first order primal-dual approach. The convergence of our approach is guaranteed by their theory. Comparing with ADMM approaches, our approach requires about half of the auxiliary variables and is matrix-inversion free. Numerical results show that our proposed algorithms are efficient and outperform the ADMM approach.
文摘In the present paper we present a class of polynomial primal-dual interior-point algorithms for semidefmite optimization based on a kernel function. This kernel function is not a so-called self-regular function due to its growth term increasing linearly. Some new analysis tools were developed which can be used to deal with complexity "analysis of the algorithms which use analogous strategy in [5] to design the search directions for the Newton system. The complexity bounds for the algorithms with large- and small-update methodswere obtained, namely,O(qn^(p+q/q(P+1)log n/ε and O(q^2√n)log n/ε,respectlvely.
基金Project supported by Dutch Organization for Scientific Research(Grant No .613 .000 .010)
文摘In this paper, primal-dual interior-point algorithm with dynamic step size is implemented for linear programming (LP) problems. The algorithms are based on a few kernel functions, including both serf-regular functions and non-serf-regular ones. The dynamic step size is compared with fixed step size for the algorithms in inner iteration of Newton step. Numerical tests show that the algorithms with dynaraic step size are more efficient than those with fixed step size.
基金supported by the Knut and Alice Wallenberg Foundationthe Swedish Foundation for Strategic Research+1 种基金the Swedish Research Councilthe National Natural Science Foundation of China(62133003,61991403,61991404,61991400)。
文摘The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered.This problem is an important component of many machine learning techniques with data parallelism,such as deep learning and federated learning.We propose a distributed primal-dual stochastic gradient descent(SGD)algorithm,suitable for arbitrarily connected communication networks and any smooth(possibly nonconvex)cost functions.We show that the proposed algorithm achieves the linear speedup convergence rate O(1/(√nT))for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Lojasiewicz(P-L)condition,where T is the total number of iterations.We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum.We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
基金supported by the National Key Research and Development Program of China(No.2022YFB1902700)the Joint Fund of Ministry of Education for Equipment Pre-research(No.8091B042203)+5 种基金the National Natural Science Foundation of China(No.11875129)the Fund of the State Key Laboratory of Intense Pulsed Radiation Simulation and Effect(No.SKLIPR1810)the Fund of Innovation Center of Radiation Application(No.KFZC2020020402)the Fund of the State Key Laboratory of Nuclear Physics and Technology,Peking University(No.NPT2023KFY06)the Joint Innovation Fund of China National Uranium Co.,Ltd.,State Key Laboratory of Nuclear Resources and Environment,East China University of Technology(No.2022NRE-LH-02)the Fundamental Research Funds for the Central Universities(No.2023JG001).
文摘Neutron computed tomography(NCT)is widely used as a noninvasive measurement technique in nuclear engineering,thermal hydraulics,and cultural heritage.The neutron source intensity of NCT is usually low and the scan time is long,resulting in a projection image containing severe noise.To reduce the scanning time and increase the image reconstruction quality,an effective reconstruction algorithm must be selected.In CT image reconstruction,the reconstruction algorithms can be divided into three categories:analytical algorithms,iterative algorithms,and deep learning.Because the analytical algorithm requires complete projection data,it is not suitable for reconstruction in harsh environments,such as strong radia-tion,high temperature,and high pressure.Deep learning requires large amounts of data and complex models,which cannot be easily deployed,as well as has a high computational complexity and poor interpretability.Therefore,this paper proposes the OS-SART-PDTV iterative algorithm,which uses the ordered subset simultaneous algebraic reconstruction technique(OS-SART)algorithm to reconstruct the image and the first-order primal–dual algorithm to solve the total variation(PDTV),for sparse-view NCT three-dimensional reconstruction.The novel algorithm was compared with other algorithms(FBP,OS-SART-TV,OS-SART-AwTV,and OS-SART-FGPTV)by simulating the experimental data and actual neutron projection experiments.The reconstruction results demonstrate that the proposed algorithm outperforms the FBP,OS-SART-TV,OS-SART-AwTV,and OS-SART-FGPTV algorithms in terms of preserving edge structure,denoising,and suppressing artifacts.
文摘The structural optimization of wireless sensor networks is a critical issue because it impacts energy consumption and hence the network’s lifetime.Many studies have been conducted for homogeneous networks,but few have been performed for heterogeneouswireless sensor networks.This paper utilizes Rao algorithms to optimize the structure of heterogeneous wireless sensor networks according to node locations and their initial energies.The proposed algorithms lack algorithm-specific parameters and metaphorical connotations.The proposed algorithms examine the search space based on the relations of the population with the best,worst,and randomly assigned solutions.The proposed algorithms can be evaluated using any routing protocol,however,we have chosen the well-known routing protocols in the literature:Low Energy Adaptive Clustering Hierarchy(LEACH),Power-Efficient Gathering in Sensor Information Systems(PEAGSIS),Partitioned-based Energy-efficient LEACH(PE-LEACH),and the Power-Efficient Gathering in Sensor Information Systems Neural Network(PEAGSIS-NN)recent routing protocol.We compare our optimized method with the Jaya,the Particle Swarm Optimization-based Energy Efficient Clustering(PSO-EEC)protocol,and the hybrid Harmony Search Algorithm and PSO(HSA-PSO)algorithms.The efficiencies of our proposed algorithms are evaluated by conducting experiments in terms of the network lifetime(first dead node,half dead nodes,and last dead node),energy consumption,packets to cluster head,and packets to the base station.The experimental results were compared with those obtained using the Jaya optimization algorithm.The proposed algorithms exhibited the best performance.The proposed approach successfully prolongs the network lifetime by 71% for the PEAGSIS protocol,51% for the LEACH protocol,10% for the PE-LEACH protocol,and 73% for the PEGSIS-NN protocol;Moreover,it enhances other criteria such as energy conservation,fitness convergence,packets to cluster head,and packets to the base station.
文摘Two existing methods for solving a class of fuzzy linear programming (FLP) problems involving symmetric trapezoidal fuzzy numbers without converting them to crisp linear programming problems are the fuzzy primal simplex method proposed by Ganesan and Veeramani [1] and the fuzzy dual simplex method proposed by Ebrahimnejad and Nasseri [2]. The former method is not applicable when a primal basic feasible solution is not easily at hand and the later method needs to an initial dual basic feasible solution. In this paper, we develop a novel approach namely the primal-dual simplex algorithm to overcome mentioned shortcomings. A numerical example is given to illustrate the proposed approach.
基金supported by Northern Border University,Arar,Kingdom of Saudi Arabia,through the Project Number“NBU-FFR-2024-2248-03”.
文摘This study is trying to address the critical need for efficient routing in Mobile Ad Hoc Networks(MANETs)from dynamic topologies that pose great challenges because of the mobility of nodes.Themain objective was to delve into and refine the application of the Dijkstra’s algorithm in this context,a method conventionally esteemed for its efficiency in static networks.Thus,this paper has carried out a comparative theoretical analysis with the Bellman-Ford algorithm,considering adaptation to the dynamic network conditions that are typical for MANETs.This paper has shown through detailed algorithmic analysis that Dijkstra’s algorithm,when adapted for dynamic updates,yields a very workable solution to the problem of real-time routing in MANETs.The results indicate that with these changes,Dijkstra’s algorithm performs much better computationally and 30%better in routing optimization than Bellman-Ford when working with configurations of sparse networks.The theoretical framework adapted,with the adaptation of the Dijkstra’s algorithm for dynamically changing network topologies,is novel in this work and quite different from any traditional application.The adaptation should offer more efficient routing and less computational overhead,most apt in the limited resource environment of MANETs.Thus,from these findings,one may derive a conclusion that the proposed version of Dijkstra’s algorithm is the best and most feasible choice of the routing protocol for MANETs given all pertinent key performance and resource consumption indicators and further that the proposed method offers a marked improvement over traditional methods.This paper,therefore,operationalizes the theoretical model into practical scenarios and also further research with empirical simulations to understand more about its operational effectiveness.
基金the deputyship for Research&Innovation,Ministry of Education in Saudi Arabia for funding this research work through the Project Number(IFP-2022-34).
文摘In the cloud environment,ensuring a high level of data security is in high demand.Data planning storage optimization is part of the whole security process in the cloud environment.It enables data security by avoiding the risk of data loss and data overlapping.The development of data flow scheduling approaches in the cloud environment taking security parameters into account is insufficient.In our work,we propose a data scheduling model for the cloud environment.Themodel is made up of three parts that together help dispatch user data flow to the appropriate cloudVMs.The first component is the Collector Agent whichmust periodically collect information on the state of the network links.The second one is the monitoring agent which must then analyze,classify,and make a decision on the state of the link and finally transmit this information to the scheduler.The third one is the scheduler who must consider previous information to transfer user data,including fair distribution and reliable paths.It should be noted that each part of the proposedmodel requires the development of its algorithms.In this article,we are interested in the development of data transfer algorithms,including fairness distribution with the consideration of a stable link state.These algorithms are based on the grouping of transmitted files and the iterative method.The proposed algorithms showthe performances to obtain an approximate solution to the studied problem which is an NP-hard(Non-Polynomial solution)problem.The experimental results show that the best algorithm is the half-grouped minimum excluding(HME),with a percentage of 91.3%,an average deviation of 0.042,and an execution time of 0.001 s.
文摘Reducing the vulnerability of a platform,i.e.,the risk of being affected by hostile objects,is of paramount importance in the design process of vehicles,especially aircraft.A simple and effective way to decrease vulnerability is to introduce protective structures to intercept and possibly stop threats.However,this type of solution can lead to a significant increase in weight,affecting the performance of the aircraft.For this reason,it is crucial to study possible solutions that allow reducing the vulnerability of the aircraft while containing the increase in structural weight.One possible strategy is to optimize the topology of protective solutions to find the optimal balance between vulnerability and the weight of the added structures.Among the many optimization techniques available in the literature for this purpose,multiobjective genetic algorithms stand out as promising tools.In this context,this work proposes the use of a in-house software for vulnerability calculation to guide the process of topology optimization through multi-objective genetic algorithms,aiming to simultaneously minimize the weight of protective structures and vulnerability.In addition to the use of the in-house software,which itself represents a novelty in the field of topology optimization of structures,the method incorporates a custom mutation function within the genetic algorithm,specifically developed using a graph-based approach to ensure the continuity of the generated structures.The tool developed for this work is capable of generating protections with optimized layouts considering two different types of impacting objects,namely bullets and fragments from detonating objects.The software outputs a set of non-dominated solutions describing different topologies that the user can choose from.
文摘Online review platforms are becoming increasingly popular,encouraging dishonest merchants and service providers to deceive customers by creating fake reviews for their goods or services.Using Sybil accounts,bot farms,and real account purchases,immoral actors demonize rivals and advertise their goods.Most academic and industry efforts have been aimed at detecting fake/fraudulent product or service evaluations for years.The primary hurdle to identifying fraudulent reviews is the lack of a reliable means to distinguish fraudulent reviews from real ones.This paper adopts a semi-supervised machine learning method to detect fake reviews on any website,among other things.Online reviews are classified using a semi-supervised approach(PU-learning)since there is a shortage of labeled data,and they are dynamic.Then,classification is performed using the machine learning techniques Support Vector Machine(SVM)and Nave Bayes.The performance of the suggested system has been compared with standard works,and experimental findings are assessed using several assessment metrics.
基金supported by the National Natural Science Foundation of China(Grant No.52374153).
文摘Traditional laboratory tests for measuring rock uniaxial compressive strength(UCS)are tedious and timeconsuming.There is a pressing need for more effective methods to determine rock UCS,especially in deep mining environments under high in-situ stress.Thus,this study aims to develop an advanced model for predicting the UCS of rockmaterial in deepmining environments by combining three boosting-basedmachine learning methods with four optimization algorithms.For this purpose,the Lead-Zinc mine in Southwest China is considered as the case study.Rock density,P-wave velocity,and point load strength index are used as input variables,and UCS is regarded as the output.Subsequently,twelve hybrid predictive models are obtained.Root mean square error(RMSE),mean absolute error(MAE),coefficient of determination(R2),and the proportion of the mean absolute percentage error less than 20%(A-20)are selected as the evaluation metrics.Experimental results showed that the hybridmodel consisting of the extreme gradient boostingmethod and the artificial bee colony algorithm(XGBoost-ABC)achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset.The values of R2,A-20,RMSE,and MAE on the training dataset are 0.98,1.0,3.11 MPa,and 2.23MPa,respectively.The highest values of R2 and A-20(0.93 and 0.96),and the smallest RMSE and MAE values of 4.78 MPa and 3.76MPa,are observed on the testing dataset.The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.
基金supported by the National Natural Science Foundation of China (62173333, 12271522)Beijing Natural Science Foundation (Z210002)the Research Fund of Renmin University of China (2021030187)。
文摘For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation for the reference trajectory becomes the objective. This study aims to investigate solutions using the Ptype learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation.Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information.To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates lowmemory footprints and offers flexibility in learning gain design.The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least square sense. Numerical simulations are provided to validate the theoretical findings.
基金financially supported by the National Natural Science Foundation of China(Grant No.52371261)the Science and Technology Projects of Liaoning Province(Grant No.2023011352-JH1/110).
文摘This study delineates the development of the optimization framework for the preliminary design phase of Floating Offshore Wind Turbines(FOWTs),and the central challenge addressed is the optimization of the FOWT platform dimensional parameters in relation to motion responses.Although the three-dimensional potential flow(TDPF)panel method is recognized for its precision in calculating FOWT motion responses,its computational intensity necessitates an alternative approach for efficiency.Herein,a novel application of varying fidelity frequency-domain computational strategies is introduced,which synthesizes the strip theory with the TDPF panel method to strike a balance between computational speed and accuracy.The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies.Optimization objectives are centered on the platform’s motion response in heave and pitch directions under general sea conditions.The steel usage,the range of design variables,and geometric considerations are optimization constraints.The angle of the pontoons,the number of columns,the radius of the central column and the parameters of the mooring lines are optimization constants.This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm Ⅱ(NSGA-Ⅱ)algorithm.For the case of the IEA UMaine VolturnUS-S Reference Platform,Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives.The efficacy of final designs is substantiated through the time-domain calculation model,which ensures that the motion responses in extreme sea conditions are superior to those of the initial design.
基金funded by the Natural Science Foundation of Chongqing(Grants No.CSTB2022NSCQ-MSX0594)the Humanities and Social Sciences Research Project of the Ministry of Education(Grants No.16YJCZH061).
文摘Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping(LSM)studies.However,these algorithms possess distinct computational strategies and hyperparameters,making it challenging to propose an ideal LSM model.To investigate the impact of different boosting algorithms and hyperparameter optimization algorithms on LSM,this study constructed a geospatial database comprising 12 conditioning factors,such as elevation,stratum,and annual average rainfall.The XGBoost(XGB),LightGBM(LGBM),and CatBoost(CB)algorithms were employed to construct the LSM model.Furthermore,the Bayesian optimization(BO),particle swarm optimization(PSO),and Hyperband optimization(HO)algorithms were applied to optimizing the LSM model.The boosting algorithms exhibited varying performances,with CB demonstrating the highest precision,followed by LGBM,and XGB showing poorer precision.Additionally,the hyperparameter optimization algorithms displayed different performances,with HO outperforming PSO and BO showing poorer performance.The HO-CB model achieved the highest precision,boasting an accuracy of 0.764,an F1-score of 0.777,an area under the curve(AUC)value of 0.837 for the training set,and an AUC value of 0.863 for the test set.The model was interpreted using SHapley Additive exPlanations(SHAP),revealing that slope,curvature,topographic wetness index(TWI),degree of relief,and elevation significantly influenced landslides in the study area.This study offers a scientific reference for LSM and disaster prevention research.This study examines the utilization of various boosting algorithms and hyperparameter optimization algorithms in Wanzhou District.It proposes the HO-CB-SHAP framework as an effective approach to accurately forecast landslide disasters and interpret LSM models.However,limitations exist concerning the generalizability of the model and the data processing,which require further exploration in subsequent studies.
基金supported by National Natural Science Foundation of China (Nos. 11975068 and 11925501)the National Key R&D Program of China (No. 2022YFE03090000)the Fundamental Research Funds for the Central Universities (No. DUT22ZD215)。
文摘The classical Pauli particle(CPP) serves as a slow manifold, substituting the conventional guiding center dynamics. Based on the CPP, we utilize the averaged vector field(AVF) method in the computations of drift orbits. Demonstrating significantly higher efficiency, this advanced method is capable of accomplishing the simulation in less than one-third of the time of directly computing the guiding center motion. In contrast to the CPP-based Boris algorithm, this approach inherits the advantages of the AVF method, yielding stable trajectories even achieved with a tenfold time step and reducing the energy error by two orders of magnitude. By comparing these two CPP algorithms with the traditional RK4 method, the numerical results indicate a remarkable performance in terms of both the computational efficiency and error elimination. Moreover, we verify the properties of slow manifold integrators and successfully observe the bounce on both sides of the limiting slow manifold with deliberately chosen perturbed initial conditions. To evaluate the practical value of the methods, we conduct simulations in non-axisymmetric perturbation magnetic fields as part of the experiments,demonstrating that our CPP-based AVF method can handle simulations under complex magnetic field configurations with high accuracy, which the CPP-based Boris algorithm lacks. Through numerical experiments, we demonstrate that the CPP can replace guiding center dynamics in using energy-preserving algorithms for computations, providing a new, efficient, as well as stable approach for applying structure-preserving algorithms in plasma simulations.
文摘Real-world engineering design problems with complex objective functions under some constraints are relatively difficult problems to solve.Such design problems are widely experienced in many engineering fields,such as industry,automotive,construction,machinery,and interdisciplinary research.However,there are established optimization techniques that have shown effectiveness in addressing these types of issues.This research paper gives a comparative study of the implementation of seventeen new metaheuristic methods in order to optimize twelve distinct engineering design issues.The algorithms used in the study are listed as:transient search optimization(TSO),equilibrium optimizer(EO),grey wolf optimizer(GWO),moth-flame optimization(MFO),whale optimization algorithm(WOA),slimemould algorithm(SMA),harris hawks optimization(HHO),chimp optimization algorithm(COA),coot optimization algorithm(COOT),multi-verse optimization(MVO),arithmetic optimization algorithm(AOA),aquila optimizer(AO),sine cosine algorithm(SCA),smell agent optimization(SAO),and seagull optimization algorithm(SOA),pelican optimization algorithm(POA),and coati optimization algorithm(CA).As far as we know,there is no comparative analysis of recent and popular methods against the concrete conditions of real-world engineering problems.Hence,a remarkable research guideline is presented in the study for researchersworking in the fields of engineering and artificial intelligence,especiallywhen applying the optimization methods that have emerged recently.Future research can rely on this work for a literature search on comparisons of metaheuristic optimization methods in real-world problems under similar conditions.
文摘In the digital music landscape, the accuracy and response speed of music recommendation systems (MRS) are crucial for user experience optimization. Traditional MRS often relies on the use of high-performance servers for large-scale training to produce recommendation results, which may result in the inability to achieve music recommendation in some areas due to substandard hardware conditions. This study evaluates the adaptability of four popular machine learning algorithms (K-means clustering, fuzzy C-means (FCM) clustering, hierarchical clustering, and self-organizing map (SOM)) on low-computing servers. Our comparative analysis highlights that while K-means and FCM are robust in high-performance settings, they underperform in low-power scenarios where SOM excels, delivering fast and reliable recommendations with minimal computational overhead. This research addresses a gap in the literature by providing a detailed comparative analysis of MRS algorithms, offering practical insights for implementing adaptive MRS in technologically diverse environments. We conclude with strategic recommendations for emerging streaming services in resource-constrained settings, emphasizing the need for scalable solutions that balance cost and performance. This study advocates an adaptive selection of recommendation algorithms to manage operational costs effectively and accommodate growth.
文摘Cloud Computing has the ability to provide on-demand access to a shared resource pool.It has completely changed the way businesses are managed,implement applications,and provide services.The rise in popularity has led to a significant increase in the user demand for services.However,in cloud environments efficient load balancing is essential to ensure optimal performance and resource utilization.This systematic review targets a detailed description of load balancing techniques including static and dynamic load balancing algorithms.Specifically,metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in case of increased traffic.In a cloud-based context,this paper describes load balancing measurements,including the benefits and drawbacks associated with the selected load balancing techniques.It also summarizes the algorithms based on implementation,time complexity,adaptability,associated issue(s),and targeted QoS parameters.Additionally,the analysis evaluates the tools and instruments utilized in each investigated study.Moreover,comparative analysis among static,traditional dynamic and metaheuristic algorithms based on response time by using the CloudSim simulation tool is also performed.Finally,the key open problems and potential directions for the state-of-the-art metaheuristic-based approaches are also addressed.