Journal Articles
7 articles found
1. Enhancing Evolutionary Algorithms With Pattern Mining for Sparse Large-Scale Multi-Objective Optimization Problems
Authors: Sheng Qi, Rui Wang, Tao Zhang, Weixiong Huang, Fan Yu, Ling Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 8, pp. 1786-1801 (16 pages).
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large-scale aspect implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, in the sparse setting most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper addresses the challenges posed by SLMOPs. First, we introduce objective functions customized to mine maximum and minimum candidate sets, which substantially improves the efficacy of frequent pattern mining: candidate sets are selected not by the number of non-zero variables they contain but by the proportion of non-zero variables within specific dimensions. Additionally, we present a novel approach to association rule mining that explores the relationships between non-zero variables and helps identify sparse distributions that can expedite reductions in the objective function values. We extensively tested the algorithm on eight benchmark problems and four real-world SLMOPs. The results demonstrate that the approach achieves competitive solutions across various challenges.
Keywords: evolutionary algorithms; pattern mining; sparse large-scale multi-objective problems (SLMOPs); sparse large-scale optimization
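As a rough illustration of the frequent-pattern-mining idea described in the abstract (not the paper's actual algorithm), the sketch below counts how often each decision variable is non-zero across a set of candidate binary masks and keeps the dimensions whose non-zero proportion exceeds a threshold. The function and parameter names (`mine_frequent_dims`, `min_support`) are hypothetical.

```python
import numpy as np

def mine_frequent_dims(masks: np.ndarray, min_support: float = 0.6) -> np.ndarray:
    """Return indices of decision variables that are non-zero in at least
    `min_support` fraction of the candidate masks (a 1-itemset frequency count).

    masks: (n_candidates, n_vars) binary array, 1 = variable is non-zero.
    """
    support = masks.mean(axis=0)          # per-dimension non-zero proportion
    return np.flatnonzero(support >= min_support)

# Toy usage: 5 candidate masks over 8 decision variables.
rng = np.random.default_rng(0)
masks = (rng.random((5, 8)) < 0.3).astype(int)
print(mine_frequent_dims(masks, min_support=0.4))
```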
2. A Two-Layer Encoding Learning Swarm Optimizer Based on Frequent Itemsets for Sparse Large-Scale Multi-Objective Optimization
Authors: Sheng Qi, Rui Wang, Tao Zhang, Xu Yang, Ruiqing Sun, Ling Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 6, pp. 1342-1357 (16 pages).
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding that optimizes the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions when optimizing the binary Mask; however, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets that appear together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
Keywords: evolutionary algorithms; learning swarm optimization; sparse large-scale optimization; sparse large-scale multi-objective problems; two-layer encoding
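To make the two-layer encoding concrete, the hedged sketch below pairs a binary Mask vector with a real-valued Dec vector and decodes a solution as their element-wise product, a common convention in sparse large-scale optimizers; it is an illustrative assumption, not TELSO's actual implementation, and the class name `TwoLayerSolution` is made up.

```python
import numpy as np

class TwoLayerSolution:
    """Two-layer encoding: a binary Mask selects non-zero positions,
    a real-valued Dec stores their magnitudes."""

    def __init__(self, n_vars: int, rng: np.random.Generator):
        self.mask = (rng.random(n_vars) < 0.1).astype(float)  # mostly zeros (sparse)
        self.dec = rng.uniform(-1.0, 1.0, n_vars)              # candidate magnitudes

    def decode(self) -> np.ndarray:
        # The actual decision vector evaluated by the objective functions.
        return self.mask * self.dec

rng = np.random.default_rng(1)
sol = TwoLayerSolution(n_vars=10, rng=rng)
print(sol.decode())  # sparse vector: non-zero only where mask == 1
```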
3. M-TIMES SECANT-LIKE MULTI-PROJECTION METHOD FOR SPARSE MINIMIZATION PROBLEM
Authors: 林正华, 宋岱才, 赵立芹. Numerical Mathematics: A Journal of Chinese Universities (English Series) (SCIE), 2001, Issue 1, pp. 26-36 (11 pages).
In this paper, we present an m-times secant-like multi-projection algorithm for the sparse unconstrained minimization problem. We prove that the method is q-superlinearly convergent to the solution for m ≥ 1. Finally, drawing on some numerical results, we discuss how to choose the number m so as to determine the approximating matrix properly in practical use.
Keywords: sparse optimization problem; superlinear convergence; sparse symmetric Broyden method; m-times secant-like multi-projection method
4. A New Inertial Self-adaptive Gradient Algorithm for the Split Feasibility Problem and an Application to the Sparse Recovery Problem
Authors: Nguyen The VINH, Pham Thi HOAI, Le Anh DUNG, Yeol Je CHO. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2023, Issue 12, pp. 2489-2506 (18 pages).
In this paper, by combining the inertial technique and the gradient descent method with Polyak's stepsizes, we propose a novel inertial self-adaptive gradient algorithm to solve the split feasibility problem in Hilbert spaces and prove some strong and weak convergence theorems for our method under standard assumptions. We examine the performance of our method on the sparse recovery problem, as well as on an example in an infinite-dimensional Hilbert space with synthetic data, and give numerical results showing the potential applicability of the proposed method; comparisons with related methods emphasize this further.
Keywords: split feasibility problem; CQ algorithm; Hilbert space; sparse recovery problem
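For context, the classical CQ iteration for the split feasibility problem (find x in C with Ax in Q) is x_{k+1} = P_C(x_k - gamma * A^T (Ax_k - P_Q(Ax_k))). The sketch below implements this plain version with box sets in R^n as a hedged reference point; the paper's inertial, self-adaptive Polyak-stepsize variant differs, and the step size gamma = 1/||A||^2 used here is only the standard safe choice.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

def cq_algorithm(A, c_lo, c_hi, q_lo, q_hi, x0, iters=500):
    """Plain CQ iteration: x_{k+1} = P_C(x_k - gamma * A^T (Ax_k - P_Q(Ax_k)))."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # standard step size in (0, 2/||A||^2)
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - project_box(Ax, q_lo, q_hi))  # gradient of 0.5*||(I-P_Q)Ax||^2
        x = project_box(x - gamma * grad, c_lo, c_hi)
    return x

A = np.array([[1.0, 2.0], [0.5, -1.0]])
x = cq_algorithm(A, c_lo=-1, c_hi=1, q_lo=0, q_hi=1, x0=np.zeros(2))
print(x)
```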
5. A UAV collaborative defense scheme driven by DDPG algorithm
Authors: ZHANG Yaozhong, WU Zhuoran, XIONG Zhenkai, CHEN Long. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, Issue 5, pp. 1211-1224 (14 pages).
The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines two mainstream reinforcement learning approaches based on value iteration and policy iteration. Using the DDPG algorithm, agents can explore and summarize the environment to make autonomous decisions in continuous state and action spaces. In this paper, a cooperative defense scheme based on DDPG for swarms of unmanned aerial vehicles (UAVs) is developed and validated, showing promising practical value in defense effectiveness. We address the sparse reward problem of reinforcement learning in a long-term task by designing the reward function of the UAV swarm and optimizing the learning process of the artificial neural network based on the DDPG algorithm to reduce oscillation during learning. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the requirements of a UAV swarm for decentralization and autonomy, and promoting the intelligent development of UAV swarms and their decision-making process.
Keywords: deep deterministic policy gradient (DDPG) algorithm; unmanned aerial vehicle (UAV) swarm; task decision making; deep reinforcement learning; sparse reward problem
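For reference, the core DDPG updates summarized in this abstract follow the standard actor-critic form with target networks; the equations below restate that general scheme and are not specific to the UAV reward shaping proposed in the paper.

```latex
% Critic target with target networks Q', mu':
y_i = r_i + \gamma\, Q'\!\left(s_{i+1}, \mu'(s_{i+1} \mid \theta^{\mu'}) \,\middle|\, \theta^{Q'}\right)

% Critic loss over a minibatch of N transitions:
L(\theta^{Q}) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - Q(s_i, a_i \mid \theta^{Q}) \right)^2

% Deterministic policy gradient for the actor:
\nabla_{\theta^{\mu}} J \approx \frac{1}{N} \sum_{i=1}^{N}
  \nabla_{a} Q(s, a \mid \theta^{Q})\big|_{s = s_i,\, a = \mu(s_i)}
  \, \nabla_{\theta^{\mu}} \mu(s \mid \theta^{\mu})\big|_{s = s_i}

% Soft update of the target networks (tau << 1):
\theta' \leftarrow \tau\, \theta + (1 - \tau)\, \theta'
```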
6. A PROCESS FOR SOLVING A FEW EXTREME EIGENPAIRS OF LARGE SPARSE POSITIVE DEFINITE GENERALIZED EIGENVALUE PROBLEM
Authors: Chong-hua Yu, O. Axelsson. Journal of Computational Mathematics (SCIE, CSCD), 2000, Issue 4, pp. 387-402 (16 pages).
In this paper, an algorithm for computing some of the largest (smallest) generalized eigenvalues with corresponding eigenvectors of a sparse symmetric positive definite matrix pencil is presented. The algorithm uses an iteration function and an inverse power iteration process to obtain the largest eigenvalue first, then executes m - 1 Lanczos-like steps to obtain initial approximations of the next m - 1 eigenvalues, without computing any Ritz pair; a procedure combining Rayleigh quotient iteration (RQI) with shifted inverse power iteration is then used to obtain more accurate eigenvalues and eigenvectors. The algorithm keeps the advantage of preserving the sparsity of the original matrices, as in the Lanczos method and RQI, converges at a higher rate than the method described in [12], and provides a simple technique to compute initial approximate pairs that are guaranteed to converge to the wanted m largest eigenpairs using RQI. In addition, it avoids some of the disadvantages of Lanczos and RQI for solving extreme eigenproblems. When symmetric positive definite linear systems must be solved in the process, an algebraic multilevel iteration method (AMLI) is applied. The algorithm is fully parallelizable.
Keywords: eigenvalue; sparse problem
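As a minimal illustration of the Rayleigh quotient / shifted inverse iteration machinery the abstract refers to (not the paper's combined algorithm, which adds Lanczos-like steps and AMLI solves), the sketch below runs plain Rayleigh quotient iteration on a symmetric positive definite pencil (A, B); the function name is hypothetical.

```python
import numpy as np

def rayleigh_quotient_iteration(A, B, x0, iters=20):
    """Rayleigh quotient iteration for the generalized symmetric
    eigenproblem A x = lambda B x, with B symmetric positive definite."""
    x = x0 / np.sqrt(x0 @ (B @ x0))              # B-normalize the start vector
    rho = (x @ (A @ x)) / (x @ (B @ x))
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - rho * B, B @ x)   # shifted inverse power step
        except np.linalg.LinAlgError:
            break                                 # shift hit an eigenvalue exactly
        x = y / np.sqrt(y @ (B @ y))
        rho = (x @ (A @ x)) / (x @ (B @ x))       # updated Rayleigh quotient
    return rho, x

A = np.diag([4.0, 2.0, 1.0]); B = np.eye(3)
lam, vec = rayleigh_quotient_iteration(A, B, x0=np.array([1.0, 0.3, 0.1]))
print(lam)   # converges to an eigenvalue of the pencil (4.0 for this start vector)
```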
7. GLOBAL CONVERGENCE OF QPFTH METHOD FOR LARGE-SCALE NONLINEAR SPARSE CONSTRAINED OPTIMIZATION
Author: 倪勤. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 1998, Issue 3, pp. 271-283 (13 pages).
A QP-free, truncated hybrid (QPFTH) method was proposed and developed in [6] for solving sparse large-scale nonlinear programming problems. In the hybrid method, a truncated Newton method is combined with the method of multipliers. At every iteration, either a truncated solution of a symmetric system of linear equations is determined by the CG algorithm, or an unconstrained subproblem is solved by the limited-memory BFGS algorithm, so that the hybrid algorithm is suitable for large-scale problems. In this paper, the consistency of the hybrid method and a steplength procedure are discussed and developed. The global convergence of the QPFTH method is proved, and the two-step Q-quadratic convergence rate is further analyzed.
Keywords: large-scale optimization; global convergence; sparse problem
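As background for the "method of multipliers" component mentioned in the abstract, the standard augmented Lagrangian for an equality-constrained problem (minimize f(x) subject to c(x) = 0) and its multiplier update are shown below; the QPFTH paper's exact formulation and safeguards may differ.

```latex
% Augmented Lagrangian with penalty parameter rho > 0:
L_{\rho}(x, \lambda) = f(x) + \lambda^{\mathsf T} c(x) + \frac{\rho}{2}\, \lVert c(x) \rVert_2^2

% Outer iteration: approximately minimize L_rho in x (e.g., by truncated Newton
% or limited-memory BFGS), then update the multipliers:
x_{k+1} \approx \arg\min_{x} L_{\rho}(x, \lambda_k), \qquad
\lambda_{k+1} = \lambda_k + \rho\, c(x_{k+1})
```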