Artificial rabbits optimization (ARO) is a recently proposed biology-based optimization algorithm inspired by the detour foraging and random hiding behavior of rabbits in nature. However, for solving optimization problems, the ARO algorithm shows slow convergence speed and can fall into local minima. To overcome these drawbacks, this paper proposes chaotic opposition-based learning ARO (COARO), an improved version of the ARO algorithm that incorporates opposition-based learning (OBL) and chaotic local search (CLS) techniques. Adding OBL to ARO increases the convergence speed of the algorithm and improves its exploration of the search space. The chaotic maps used in CLS provide rapid convergence by scanning the search space efficiently, owing to their ergodicity and non-repetitive properties. The proposed COARO algorithm has been tested on thirty-three distinct benchmark functions, and the outcomes have been compared with those of the most recent optimization algorithms. Additionally, the problem-solving capability of COARO has been evaluated on six different engineering design problems and compared with various other algorithms. This study also introduces a binary variant of the continuous COARO algorithm, named BCOARO. The performance of BCOARO was evaluated on the breast cancer dataset, and its effectiveness was compared with that of different feature selection algorithms. According to the findings obtained for real applications, the proposed BCOARO outperforms the alternative algorithms in terms of accuracy and fitness value. Extensive experiments show that the COARO and BCOARO algorithms achieve promising results compared to other metaheuristic algorithms.
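The abstract describes OBL and CLS only at a high level and does not give COARO's exact update rules, so the following Python sketch shows the generic forms these two building blocks usually take. The box bounds, the logistic map, the shrinking search radius, and the sphere test function are illustrative assumptions, not details from the paper.

```python
import numpy as np

def opposite_solution(x, lb, ub):
    """Standard opposition-based learning: mirror x within the box [lb, ub]."""
    return lb + ub - x

def chaotic_local_search(best, lb, ub, fitness, steps=20, radius=0.1):
    """Generic chaotic local search around the current best solution.

    A logistic map (z <- 4*z*(1-z)) generates an ergodic, non-repeating
    sequence in (0, 1) that is mapped to a small neighbourhood of `best`.
    """
    # seed away from 0, 0.25, 0.5, 0.75, 1, which collapse onto fixed points
    z = np.random.uniform(0.01, 0.99, size=best.shape)
    x_best, f_best = best.copy(), fitness(best)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                        # logistic chaotic map
        candidate = best + radius * (lb + z * (ub - lb) - best)
        candidate = np.clip(candidate, lb, ub)
        f_c = fitness(candidate)
        if f_c < f_best:                               # keep improvements (minimisation)
            x_best, f_best = candidate, f_c
    return x_best, f_best

# Toy usage on the sphere function
lb, ub = np.full(5, -10.0), np.full(5, 10.0)
sphere = lambda v: float(np.sum(v ** 2))
x = np.random.uniform(lb, ub)
x_opp = opposite_solution(x, lb, ub)
x = x_opp if sphere(x_opp) < sphere(x) else x          # greedy choice between x and its opposite
print(chaotic_local_search(x, lb, ub, sphere))
```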
The flying foxes optimization (FFO) algorithm, as a newly introduced metaheuristic algorithm, is inspired by the survival tactics of flying foxes in heat wave environments. FFO preferentially selects the best-performing individuals, and this tendency causes newly generated solutions to remain closely tied to the current candidate optimum in the search area. To address this issue, this paper introduces an opposition-based learning search mechanism for the FFO algorithm (IFFO). Firstly, niching techniques are introduced to improve the survival list method, which not only focuses on the adaptability of individuals but also considers the population's crowding degree, so as to enhance the global search capability. Secondly, an opposition-based learning initialization strategy is used to perturb the initial population and elevate its quality. Finally, to verify the superiority of the improved search mechanism, IFFO, FFO and cutting-edge metaheuristic algorithms are compared and analyzed on a set of test functions. The results show that, compared with the other algorithms, IFFO is characterized by rapid convergence, precise results and robust stability.
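The abstract states that OBL is used to perturb and improve the initial population but does not spell out the variant. One common way to realize this, sketched below as an assumption rather than IFFO's exact procedure, is to evaluate every random individual together with its opposite and keep the fittest half of the union.

```python
import numpy as np

def obl_initialization(fitness, n_pop, dim, lb, ub):
    """Opposition-based population initialization (a common generic form).

    Builds a random population, mirrors it with x_opp = lb + ub - x,
    and keeps the n_pop fittest individuals from the union of both sets.
    """
    rng = np.random.default_rng()
    pop = rng.uniform(lb, ub, size=(n_pop, dim))
    opposites = lb + ub - pop                      # opposite of every individual
    union = np.vstack([pop, opposites])
    scores = np.array([fitness(ind) for ind in union])
    best_idx = np.argsort(scores)[:n_pop]          # minimisation: smallest scores first
    return union[best_idx], scores[best_idx]

# Toy usage on the Rastrigin function
rastrigin = lambda x: float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
init_pop, init_scores = obl_initialization(rastrigin, n_pop=30, dim=10, lb=-5.12, ub=5.12)
print(init_scores[:3])
```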
As a new bionic algorithm, Spider Monkey Optimization (SMO) has been widely used in various complex optimization problems in recent years. However, the space exploration power of SMO is limited and the diversity of its population is not abundant. Thus, this paper focuses on how to reconstruct SMO to improve its performance, and a novel spider monkey optimization algorithm with opposition-based learning and orthogonal experimental design (SMO^(3)) is developed. A position updating method based on the historical optimal domain and particle swarm for the Local Leader Phase (LLP) and Global Leader Phase (GLP) is presented to improve the population diversity of SMO. Moreover, an opposition-based learning strategy based on self-extremum is proposed to avoid premature convergence and getting stuck at locally optimal values. Also, a local worst individual elimination method based on orthogonal experimental design is used to help the SMO algorithm eliminate poor individuals in time. Furthermore, an extended SMO^(3), named CSMO^(3), is investigated to deal with constrained optimization problems. The proposed algorithm is applied to both unconstrained and constrained functions, including the CEC2006 benchmark set and three engineering problems. Experimental results show that the performance of the proposed algorithm is better than three well-known SMO algorithms and other evolutionary algorithms on unconstrained and constrained problems.
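The abstract does not detail how orthogonal experimental design is used to replace the worst local individuals, so the sketch below only illustrates the general idea behind OED-based recombination: a small orthogonal array (here L4(2^3)) samples structured combinations of components from two solutions and the best trial is kept. The factor grouping, the choice of array, and the test function are illustrative assumptions.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 trials x 3 factors, each factor at 2 levels.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def orthogonal_combine(parent_a, parent_b, fitness):
    """Illustrative orthogonal-experimental-design recombination.

    The solution vector is split into 3 factor groups; each L4 row picks,
    per group, which parent supplies that group (level 0 -> a, level 1 -> b).
    The best of the 4 structured trials is returned.
    """
    groups = np.array_split(np.arange(parent_a.size), 3)
    parents = (parent_a, parent_b)
    best, best_f = None, np.inf
    for row in L4:
        trial = parent_a.copy()
        for level, idx in zip(row, groups):
            trial[idx] = parents[level][idx]
        f = fitness(trial)
        if f < best_f:
            best, best_f = trial, f
    return best, best_f

sphere = lambda x: float(np.sum(x ** 2))
a, b = np.random.uniform(-5, 5, 8), np.random.uniform(-5, 5, 8)
print(orthogonal_combine(a, b, sphere))
```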
Gorilla troops optimizer (GTO) is a newly developed meta-heuristic algorithm inspired by the collective lifestyle and social intelligence of gorillas. Similar to other metaheuristics, the convergence accuracy and stability of GTO deteriorate when the optimization problems to be solved become more complex and flexible. To overcome these defects and achieve better performance, this paper proposes an improved gorilla troops optimizer (IGTO). First, Circle chaotic mapping is introduced to initialize the positions of gorillas, which facilitates population diversity and establishes a good foundation for global search. Then, in order to avoid getting trapped in the local optimum, the lens opposition-based learning mechanism is adopted to expand the search range. Besides, a novel local search algorithm, namely adaptive β-hill climbing, is amalgamated with GTO to increase the precision of the final solution. Thanks to these three improvements, the exploration and exploitation capabilities of the basic GTO are greatly enhanced. The performance of the proposed algorithm is comprehensively evaluated and analyzed on 19 classical benchmark functions. The numerical and statistical results demonstrate that IGTO provides better solution quality, local optimum avoidance, and robustness compared with the basic GTO and five other well-known algorithms. Moreover, the applicability of IGTO is further proved by resolving four engineering design problems and training a multilayer perceptron. The experimental results suggest that IGTO exhibits remarkable competitive performance and promising prospects in real-world tasks.
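Two of the named components, Circle chaotic initialization and lens opposition-based learning, have widely used generic forms, sketched below. The Circle-map parameters (a = 0.5, b = 0.2) and the lens scale factor k = 2 are common but illustrative choices, not values taken from the IGTO paper.

```python
import numpy as np

def circle_map_population(n_pop, dim, lb, ub, a=0.5, b=0.2):
    """Population initialization driven by the Circle chaotic map.

    x_{k+1} = mod(x_k + b - (a / (2*pi)) * sin(2*pi * x_k), 1); the chaotic
    sequence in (0, 1) is then scaled into the search box [lb, ub].
    """
    seq = np.empty((n_pop, dim))
    x = np.random.uniform(0.0, 1.0, size=dim)
    for i in range(n_pop):
        x = np.mod(x + b - (a / (2 * np.pi)) * np.sin(2 * np.pi * x), 1.0)
        seq[i] = x
    return lb + seq * (ub - lb)

def lens_obl(x, lb, ub, k=2.0):
    """Lens-imaging opposition-based learning in its commonly cited form.

    x' = (lb + ub) / 2 + (lb + ub) / (2 * k) - x / k; with k = 1 this reduces
    to the standard opposite point lb + ub - x, while k > 1 contracts the
    image towards the centre of the search range.
    """
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k

pop = circle_map_population(n_pop=20, dim=5, lb=-10.0, ub=10.0)
print(lens_obl(pop[0], lb=-10.0, ub=10.0))
```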
Efficient speed controllers for dynamic driving tasks in autonomous vehicles are crucial for ensuring safety and reliability. This study proposes a novel approach for designing a fractional-order proportional-integral-derivative (FOPID) controller that utilizes a modified elite opposition-based artificial hummingbird algorithm (m-AHA) for optimal parameter tuning. Our approach outperforms existing optimization techniques on benchmark functions, and we demonstrate its effectiveness in controlling cruise control systems with increased flexibility and precision. Our study contributes to the advancement of autonomous vehicle technology by introducing a novel and efficient method for FOPID controller design that can enhance the driving experience while ensuring safety and reliability. We highlight the significance of our findings by demonstrating how our approach can improve the performance, safety, and reliability of autonomous vehicles. This study's contributions are particularly relevant in the context of the growing demand for autonomous vehicles and the need for advanced control techniques to ensure their safe operation. Our research provides a promising avenue for further research and development in this area.
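The abstract does not give the controller realization, so the sketch below only shows one standard way to evaluate a FOPID law discretely, using the Grünwald-Letnikov (GL) approximation of the fractional terms; the five parameters (Kp, Ki, Kd, λ, μ) are exactly what a tuner such as m-AHA would search over. The gain values, sampling step, and error signal are illustrative assumptions.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights w_0..w_{n-1} for order alpha."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fopid_output(error_history, Kp, Ki, Kd, lam, mu, h):
    """Discrete FOPID control signal u = Kp*e + Ki*I^lam(e) + Kd*D^mu(e).

    Both fractional terms use the GL approximation
    D^alpha e(t_n) ~= h**(-alpha) * sum_j w_j(alpha) * e(t_{n-j}),
    with alpha = -lam giving the fractional integral.
    """
    e = np.asarray(error_history)          # e[-1] is the newest sample
    n = e.size
    rev = e[::-1]                          # rev[j] = e(t_{n-j})
    integ = h ** lam * np.dot(gl_weights(-lam, n), rev)
    deriv = h ** (-mu) * np.dot(gl_weights(mu, n), rev)
    return Kp * e[-1] + Ki * integ + Kd * deriv

# Toy usage: a decaying error signal sampled at h = 0.01 s
h = 0.01
errors = np.exp(-np.arange(0, 1, h))
print(fopid_output(errors, Kp=1.2, Ki=0.8, Kd=0.3, lam=0.9, mu=0.7, h=h))
```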
Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems. This approach aims to leverage the strengths of multiple algorithms, enhancing solution quality, convergence speed, and robustness, thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks. In this paper, we introduce a hybrid algorithm that amalgamates three distinct metaheuristics: the Beluga Whale Optimization (BWO), the Honey Badger Algorithm (HBA), and the Jellyfish Search (JS) optimizer. The proposed hybrid algorithm is referred to as BHJO. Through this fusion, the BHJO algorithm aims to leverage the strengths of each optimizer. Before this hybridization, we thoroughly examined the exploration and exploitation capabilities of the BWO, HBA, and JS metaheuristics, as well as their ability to strike a balance between exploration and exploitation. This analysis allowed us to identify the pros and cons of each algorithm, enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance. In addition, the BHJO algorithm incorporates Opposition-Based Learning (OBL), leveraging its diverse exploration, accelerated convergence, and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm. Moreover, the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems, providing a comprehensive assessment of its efficacy and applicability in diverse problem domains. Similarly, the BHJO algorithm was subjected to a comparative analysis with several renowned algorithms, where mean and standard deviation values were utilized as evaluation metrics. This rigorous comparison aimed to assess the performance of the BHJO algorithm against its counterparts, shedding light on its effectiveness and reliability in solving optimization problems. Finally, the obtained numerical statistics were analyzed using the Friedman test followed by the post hoc Dunn's test. The resulting values revealed the BHJO algorithm's competitiveness in tackling intricate optimization problems, affirming its capability to deliver favorable outcomes in challenging scenarios.
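The Friedman test with a post hoc Dunn's test is a standard way to compare several optimizers over a set of benchmark functions. The sketch below shows how such an analysis is typically run in Python, assuming SciPy and the third-party scikit-posthocs package; the result matrix holds placeholder numbers, not values from the paper.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp   # third-party package providing Dunn's post hoc test

# Rows: benchmark functions, columns: algorithms (placeholder mean errors).
results = np.array([
    [1.2e-3, 3.4e-3, 9.1e-4],
    [5.6e-2, 4.8e-2, 3.9e-2],
    [7.7e-5, 1.2e-4, 6.5e-5],
    [2.3e-1, 2.9e-1, 1.8e-1],
    [4.4e-3, 5.0e-3, 4.1e-3],
])

# Friedman test: do the three algorithms perform identically across functions?
stat, p = stats.friedmanchisquare(results[:, 0], results[:, 1], results[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

# If the Friedman test rejects, Dunn's test locates which pairs differ.
dunn = sp.posthoc_dunn([results[:, 0], results[:, 1], results[:, 2]],
                       p_adjust="bonferroni")
print(dunn)
```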
Chimp Optimization Algorithm (ChOA) is one of the most efficient recent optimization algorithms, and it has proved its ability to deal with different problems in various domains. However, ChOA suffers from a weak local search technique, which leads to a loss of diversity, getting stuck in local minima, and premature convergence. In response to these defects, this paper proposes an improved ChOA algorithm that uses opposition-based learning (OBL) to enhance the choice of better solutions, denoted OChOA. Then, Reinforcement Learning (RL) is utilized to improve the local search technique of OChOA, yielding RLOChOA. This effectively prevents the algorithm from falling into local optima. The performance of the proposed RLOChOA algorithm is evaluated, using the Friedman rank test, on a set of CEC 2015 and CEC 2017 benchmark functions and a set of CEC 2011 real-world problems. Numerical results and statistical experiments show that RLOChOA provides better solution quality, convergence accuracy and stability compared with other state-of-the-art algorithms.
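The abstract does not specify the RL formulation used in RLOChOA. As a purely illustrative sketch of how RL can steer a local search, the snippet below uses tabular Q-learning to choose between two hypothetical move operators based on whether the previous move improved the solution; the states, actions, reward, and operators are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))

# Two hypothetical local-search operators: a small Gaussian step and a large uniform jump.
operators = [
    lambda x: x + rng.normal(0.0, 0.1, x.shape),
    lambda x: x + rng.uniform(-1.0, 1.0, x.shape),
]

Q = np.zeros((2, len(operators)))     # states: 0 = last move failed, 1 = last move improved
alpha, gamma, eps = 0.1, 0.9, 0.2     # learning rate, discount, exploration rate

x = rng.uniform(-5, 5, 10)
f, state = sphere(x), 0
for _ in range(500):
    # epsilon-greedy choice of operator
    action = int(rng.integers(len(operators))) if rng.random() < eps else int(np.argmax(Q[state]))
    cand = np.clip(operators[action](x), -5, 5)
    f_cand = sphere(cand)
    reward = 1.0 if f_cand < f else -0.1
    next_state = 1 if f_cand < f else 0
    # tabular Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    if f_cand < f:                    # greedy acceptance of improvements
        x, f = cand, f_cand
    state = next_state

print(f"best fitness: {f:.6f}")
print("learned Q-table:\n", Q)
```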
Harris Hawks Optimization (HHO) is a novel meta-heuristic algorithm that imitates the predation characteristics of the Harris hawk and combines Lévy flight to solve complex multidimensional problems. Nevertheless, the basic HHO algorithm still has certain limitations, including a tendency to fall into local optima and poor convergence accuracy. Coot Bird Optimization (CBO) is another new swarm-based optimization algorithm. CBO originates from the regular and irregular motion of a bird called the coot on the water's surface. Although the framework of CBO is slightly complicated, it has outstanding exploration potential and an excellent capability to avoid falling into local optimal solutions. This paper proposes a novel enhanced hybrid algorithm based on the basic HHO and CBO, named Enhanced Harris Hawks Optimization Integrated with Coot Bird Optimization (EHHOCBO). EHHOCBO can provide higher-quality solutions for numerical optimization problems. It first embeds the leadership mechanism of CBO into the population initialization process of HHO, which takes full advantage of valuable solution information to provide a good foundation for the global search of the hybrid algorithm. Secondly, the Ensemble Mutation Strategy (EMS) is introduced to generate mutant candidate positions for consideration, further improving the hybrid algorithm's exploration trend and population diversity. To further reduce the likelihood of falling into local optima and to speed up convergence, Refracted Opposition-Based Learning (ROBL) is adopted to update the current optimal solution in the swarm. Using 23 classical benchmark functions and the IEEE CEC2017 test suite, the performance of the proposed EHHOCBO is comprehensively evaluated and compared with eight other basic meta-heuristic algorithms and six improved variants. Experimental results show that EHHOCBO can achieve better solution accuracy, faster convergence speed, and a more robust ability to jump out of local optima than other advanced optimizers in most test cases. Finally, EHHOCBO is applied to address four engineering design problems. Our findings indicate that the proposed method also provides satisfactory performance regarding the convergence accuracy of the optimal global solution.
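The Lévy flight mentioned for HHO is usually generated with Mantegna's algorithm, sketched below; the EHHOCBO update equations themselves are not given in the abstract, so this only illustrates the heavy-tailed step generator, with beta = 1.5 as a typical choice.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Lévy-distributed step via Mantegna's algorithm.

    sigma = [ G(1+b)*sin(pi*b/2) / ( G((1+b)/2) * b * 2**((b-1)/2) ) ]**(1/b)
    step  = u / |v|**(1/b),  u ~ N(0, sigma^2),  v ~ N(0, 1)
    """
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Heavy-tailed jumps: most steps are small, occasional ones are very large.
steps = levy_step(dim=1000)
print(np.percentile(np.abs(steps), [50, 95, 99.9]))
```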
The original whale optimization algorithm (WOA) has a low initial population quality and tends to converge to local optimal solutions. To address these challenges, this paper introduces an improved whale optimization algorithm called OLCHWOA, incorporating a chaos mechanism and an opposition-based learning strategy. The algorithm introduces chaotic initialization and opposition-based initialization operators during the population initialization phase, thereby enhancing the quality of the initial whale population. Additionally, the inclusion of an elite opposition-based learning operator significantly improves the algorithm's global search capabilities during iterations. The work and contributions of this paper are primarily reflected in two aspects. Firstly, an improved whale algorithm with enhanced exploitation capabilities and a wide range of application scenarios is proposed. Secondly, the proposed OLCHWOA is used to optimize the hyperparameters of Long Short-Term Memory (LSTM) networks, and a prediction model for Realized Volatility (RV) based on OLCHWOA-LSTM is proposed to tune the hyperparameters automatically. To evaluate the performance of OLCHWOA, a series of comparative experiments were conducted against a variety of advanced algorithms on 38 standard test functions from CEC2013 and CEC2019 and three constrained engineering design problems. The experimental results show that OLCHWOA ranks first in accuracy and stability under the same budget of maximum fitness function calls. Additionally, the China Securities Index 300 (CSI 300) dataset is used to evaluate the effectiveness of the proposed OLCHWOA-LSTM model in predicting RV. Comparison with eight other models shows that the proposed model has the highest accuracy and goodness of fit in predicting RV. This further confirms that OLCHWOA effectively addresses real-world optimization problems.
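Elite opposition-based learning differs from plain OBL in that the opposite points are computed inside the dynamic bounds spanned by the current elite individuals. The sketch below follows the formulation commonly used in the literature; the elite fraction, the out-of-range repair rule, and the test data are illustrative assumptions rather than OLCHWOA's exact settings.

```python
import numpy as np

def elite_obl(population, fitness_values, elite_frac=0.2, rng=None):
    """Elite opposition-based learning (a common generic formulation).

    The dynamic bounds [da_j, db_j] are taken per dimension from the elite
    subset of the swarm; each individual is mirrored as x' = r * (da + db) - x
    with r ~ U(0, 1), and out-of-range components are resampled inside [da, db].
    """
    rng = rng or np.random.default_rng()
    n_elite = max(1, int(elite_frac * len(population)))
    elite = population[np.argsort(fitness_values)[:n_elite]]    # minimisation
    da, db = elite.min(axis=0), elite.max(axis=0)               # dynamic bounds per dimension
    r = rng.uniform(0.0, 1.0, size=population.shape)
    opposite = r * (da + db) - population
    out = (opposite < da) | (opposite > db)
    opposite[out] = rng.uniform(np.broadcast_to(da, population.shape)[out],
                                np.broadcast_to(db, population.shape)[out])
    return opposite

pop = np.random.uniform(-10, 10, size=(30, 5))
fits = np.sum(pop ** 2, axis=1)
print(elite_obl(pop, fits)[:2])
```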
Funding (COARO): Funded by the Firat University Scientific Research Projects Management Unit for the scientific research project of Feyza Altunbey Özbay, numbered MF.23.49.
Funding (IFFO): Supported by the Ningxia Natural Science Foundation Project (2023AAC03361).
Funding (SMO^(3)): Supported by the First Batch of Teaching Reform Projects of Zhejiang Higher Education “14th Five-Year Plan” (jg20220434), the Special Scientific Research Project for Space Debris and Near-Earth Asteroid Defense (KJSP2020020202), the Natural Science Foundation of Zhejiang Province (LGG19F030010), and the National Natural Science Foundation of China (61703183).
Funding (IGTO): This work is financially supported by the Fundamental Research Funds for the Central Universities under Grant 2572014BB06.
Funding (BHJO): Funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Funding (EHHOCBO): Supported by the National Natural Science Foundation of China under Grant 52075090, the Key Research and Development Program Projects of Heilongjiang Province under Grant GA21A403, the Fundamental Research Funds for the Central Universities under Grant 2572021BF01, and the Natural Science Foundation of Heilongjiang Province under Grant YQ2021E002.
Funding (OLCHWOA): The National Natural Science Foundation of China (Grant No. 81973791) funded this research.