Funding: Supported by the Open Project of Xiangjiang Laboratory (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28, ZK21-07), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (CX20230074), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJZ03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale refers to the high dimensionality of the decision space, which requires algorithms to traverse a vast search space with limited computational resources. Furthermore, because of sparsity, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify the non-zero variables efficiently. This paper addresses the challenges posed by SLMOPs. First, we introduce objective functions customized to mine maximum and minimum candidate sets, which substantially improves the efficacy of frequent pattern mining: candidate sets are no longer selected by the number of non-zero variables they contain, but by a higher proportion of non-zero variables within specific dimensions. In addition, we present an association rule mining approach that explores the relationships between non-zero variables and helps identify sparse distributions that can expedite reductions in the objective function value. We tested our algorithm extensively on eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
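To make the frequent-pattern idea above concrete, here is a minimal sketch of how non-zero positions could be mined from a set of candidate solutions and turned into a binary mask. All names (mine_frequent_positions, min_support, the random candidate data) are hypothetical illustrations under stated assumptions, not the paper's actual implementation.

```python
import numpy as np

def mine_frequent_positions(candidates, min_support=0.5):
    """Return decision-variable indices that are non-zero in at least
    a min_support fraction of the candidate solutions (1-itemsets)."""
    nonzero = (np.abs(candidates) > 0).astype(float)   # binarize candidates
    support = nonzero.mean(axis=0)                     # per-dimension support
    return np.where(support >= min_support)[0]

def build_mask(dim, frequent_idx):
    """Binary mask keeping only the frequently non-zero dimensions."""
    mask = np.zeros(dim, dtype=int)
    mask[frequent_idx] = 1
    return mask

# Toy usage: 20 sparse candidate solutions in a 100-dimensional space.
rng = np.random.default_rng(0)
cands = rng.normal(size=(20, 100)) * (rng.random((20, 100)) < 0.05)
idx = mine_frequent_positions(cands, min_support=0.2)
mask = build_mask(100, idx)
```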
Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding that optimizes a binary variable Mask and a real variable Dec separately. Existing optimizers typically focus on locating non-zero variable positions when optimizing the binary Mask; however, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, frequent itemsets that appear together in a dataset are commonly mined to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objective values and hence fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in both performance and convergence speed.
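The two-layer encoding described above can be illustrated with a short sketch: each particle carries a binary Mask and a real vector Dec, and the actual decision vector is their element-wise product. The class below is a hypothetical illustration of that encoding and of frequent-itemset-guided mask updates, not TELSO itself.

```python
import numpy as np

class TwoLayerParticle:
    """Hypothetical two-layer encoded particle: solution = Mask * Dec."""
    def __init__(self, dim, rng):
        self.mask = (rng.random(dim) < 0.1).astype(int)  # binary layer (sparse)
        self.dec = rng.uniform(-1.0, 1.0, dim)           # real-valued layer

    def decode(self):
        # Element-wise product: zero wherever the mask is zero.
        return self.mask * self.dec

    def apply_frequent_items(self, frequent_idx):
        # Bias the mask toward positions that are frequently non-zero
        # among better particles (frequent-itemset guidance).
        self.mask[:] = 0
        self.mask[frequent_idx] = 1

rng = np.random.default_rng(1)
p = TwoLayerParticle(dim=50, rng=rng)
x = p.decode()
p.apply_frequent_items(np.array([3, 7, 42]))
```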
Abstract: In this paper, we present an m-time secant-like multi-projection algorithm for sparse unconstrained minimization problems. We prove that these methods are all q-superlinearly convergent to the solution for m ≥ 1. Finally, using some numerical results, we discuss how to choose the number m so as to determine the approximating matrix properly in practical use.
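For context, secant-type methods of this kind build an approximation B_{k+1} of the Hessian that satisfies the classical secant condition; the display below states that condition in standard notation as a generic reminder, not the paper's specific multi-projection update.

```latex
% Classical secant condition satisfied by quasi-Newton Hessian approximations.
\[
  B_{k+1} s_k = y_k, \qquad
  s_k = x_{k+1} - x_k, \qquad
  y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
\]
```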
Funding: Supported by the Key Research and Development Program of Shaanxi (2022GY-089) and the Natural Science Basic Research Program of Shaanxi (2022JQ-593).
Abstract: The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines the two mainstream reinforcement learning approaches based on value iteration and policy iteration. With DDPG, agents can explore and summarize the environment to make autonomous decisions in continuous state and action spaces. In this paper, a cooperative defense scheme using DDPG for swarms of unmanned aerial vehicles (UAVs) is developed and validated, and it shows promising practical value for defense. We address the sparse-reward problem of reinforcement learning in long-term tasks by designing the reward function of the UAV swarm and by optimizing the learning process of the artificial neural network based on the DDPG algorithm to reduce oscillation during learning. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the swarm's requirements for decentralization and autonomy, and promoting the intelligent development of UAV swarms and their decision-making processes.
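As a reminder of the standard DDPG machinery referenced above (shown for orientation only, not the paper's specific reward design), the core updates are the following, where θ^Q and θ^μ denote the critic and actor parameters and θ^{Q'}, θ^{μ'} their target copies.

```latex
% Standard DDPG updates (value-based critic plus deterministic policy gradient).
\begin{align*}
  y_i &= r_i + \gamma\, Q\big(s_{i+1},\, \mu(s_{i+1}\mid\theta^{\mu'}) \mid \theta^{Q'}\big)
      && \text{(critic target)}\\
  L &= \tfrac{1}{N}\sum_i \big(y_i - Q(s_i, a_i \mid \theta^{Q})\big)^2
      && \text{(critic loss)}\\
  \nabla_{\theta^{\mu}} J &\approx \tfrac{1}{N}\sum_i
      \nabla_a Q(s_i, a \mid \theta^{Q})\big|_{a=\mu(s_i)}\,
      \nabla_{\theta^{\mu}} \mu(s_i \mid \theta^{\mu})
      && \text{(policy gradient)}\\
  \theta' &\leftarrow \tau\,\theta + (1-\tau)\,\theta'
      && \text{(soft target update)}
\end{align*}
```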
Funding: Funded by the University of Transport and Communications (UTC) under Grant Number T2023-CB-001.
Abstract: In this paper, by combining the inertial technique with the gradient descent method using Polyak's stepsizes, we propose a novel inertial self-adaptive gradient algorithm for solving the split feasibility problem in Hilbert spaces, and we prove strong and weak convergence theorems for our method under standard assumptions. We examine the performance of the method on the sparse recovery problem as well as on an example in an infinite-dimensional Hilbert space with synthetic data, and we give numerical results that show the potential applicability of the proposed method; comparisons with related methods emphasize this further.
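For orientation, the split feasibility problem asks for a point x* ∈ C with Ax* ∈ Q. A generic inertial gradient step of the kind discussed above is displayed below, where f(x) = ½‖(I − P_Q)Ax‖² and P_C, P_Q are metric projections; this is a standard template under the usual assumptions on the parameters θ_k and ρ_k, not necessarily the paper's exact scheme.

```latex
% Generic inertial, Polyak-type self-adaptive step for the split feasibility
% problem: find x* in C with A x* in Q (template only).
\begin{align*}
  f(x) &= \tfrac{1}{2}\,\lVert (I - P_Q) A x \rVert^{2},
  \qquad \nabla f(x) = A^{*}(I - P_Q) A x,\\
  y_k &= x_k + \theta_k\,(x_k - x_{k-1})
  && \text{(inertial extrapolation)}\\
  x_{k+1} &= P_C\!\Big( y_k - \rho_k\,\frac{f(y_k)}{\lVert \nabla f(y_k) \rVert^{2}}\,\nabla f(y_k) \Big)
  && \text{(Polyak-type self-adaptive step)}
\end{align*}
```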
Abstract: In this paper, an algorithm is presented for computing some of the largest (or smallest) generalized eigenvalues, with corresponding eigenvectors, of a sparse symmetric positive definite matrix pencil. The algorithm uses an iteration function and an inverse power iteration process to obtain the largest eigenvalue first, and then executes m - 1 Lanczos-like steps to obtain initial approximations of the next m - 1 eigenvalues, without computing any Ritz pair; a procedure combining Rayleigh quotient iteration (RQI) with shifted inverse power iteration is then used to obtain more accurate eigenvalues and eigenvectors. The algorithm keeps the advantages of preserving the sparsity of the original matrices, as in the Lanczos method and RQI, converges at a higher rate than the method described in [12], and provides a simple technique for computing initial approximate pairs that are guaranteed to converge to the wanted m largest eigenpairs under RQI. In addition, it avoids some of the disadvantages of Lanczos and RQI for solving extreme eigenproblems. When symmetric positive definite linear systems must be solved in the process, an algebraic multilevel iteration method (AMLI) is applied. The algorithm is fully parallelizable.
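To illustrate the Rayleigh quotient iteration step mentioned above for the generalized problem Ax = λBx, here is a minimal dense-matrix sketch; the paper works with large sparse matrices and an AMLI-based solver, so the direct solve below is only a stand-in.

```python
import numpy as np

def rayleigh_quotient_iteration(A, B, x0, tol=1e-10, max_iter=50):
    """Minimal RQI for the symmetric generalized problem A x = lambda B x.
    Dense illustration only: the shifted linear solve would be replaced by a
    sparse (e.g., AMLI-preconditioned) solver in a large-scale setting."""
    x = x0 / np.sqrt(x0 @ (B @ x0))              # B-normalize the start vector
    rho = (x @ (A @ x)) / (x @ (B @ x))          # initial Rayleigh quotient
    for _ in range(max_iter):
        r = A @ x - rho * (B @ x)                # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        y = np.linalg.solve(A - rho * B, B @ x)  # shifted inverse step
        x = y / np.sqrt(y @ (B @ y))             # B-normalize the new iterate
        rho = (x @ (A @ x)) / (x @ (B @ x))      # updated Rayleigh quotient
    return rho, x

# Toy usage with small random symmetric positive definite matrices.
rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6)); A = M @ M.T + 6 * np.eye(6)
N = rng.normal(size=(6, 6)); B = N @ N.T + 6 * np.eye(6)
lam, v = rayleigh_quotient_iteration(A, B, rng.normal(size=6))
```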
Abstract: A QP-free, truncated hybrid (QPFTH) method was proposed and developed in [6] for solving sparse large-scale nonlinear programming problems. In the hybrid method, a truncated Newton method is combined with the method of multipliers. At every iteration level, either a truncated solution of a symmetric system of linear equations is determined by the CG algorithm, or an unconstrained subproblem is solved by the limited-memory BFGS algorithm, so that the hybrid algorithm is suitable for large-scale problems. In this paper, the consistency of the hybrid method and a steplength procedure are discussed and developed. The global convergence of the QPFTH method is proved, and the two-step Q-quadratic convergence rate is further analyzed.
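The hybrid structure above pairs an outer multiplier update with an inner unconstrained solve. The sketch below is a generic augmented-Lagrangian (method of multipliers) template with an L-BFGS inner solver, given only to show that structure; the function names, penalty parameter, and toy problem are hypothetical, and this is not the QPFTH method itself.

```python
import numpy as np
from scipy.optimize import minimize

def method_of_multipliers(f, c, x0, mu=10.0, n_outer=20, tol=1e-8):
    """Generic augmented-Lagrangian loop for min f(x) s.t. c(x) = 0,
    with each unconstrained subproblem solved by L-BFGS-B."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(c(x))
    for _ in range(n_outer):
        def aug_lag(z):
            cz = c(z)
            # Augmented Lagrangian: f + lam^T c + (mu/2) ||c||^2
            return f(z) + lam @ cz + 0.5 * mu * (cz @ cz)
        res = minimize(aug_lag, x, method="L-BFGS-B")  # inner L-BFGS solve
        x = res.x
        cx = c(x)
        if np.linalg.norm(cx) < tol:
            break
        lam = lam + mu * cx                            # multiplier update
    return x, lam

# Toy usage: minimize ||x||^2 subject to x0 + x1 - 1 = 0 (optimum at (0.5, 0.5)).
f = lambda x: float(x @ x)
c = lambda x: np.array([x[0] + x[1] - 1.0])
x_star, lam_star = method_of_multipliers(f, c, np.array([0.0, 0.0]))
```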